Jan 21 06:54:17 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 21 06:54:17 crc restorecon[4810]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 06:54:17 crc restorecon[4810]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 06:54:17 crc restorecon[4810]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 06:54:17 crc restorecon[4810]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 06:54:17 crc restorecon[4810]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 06:54:17 crc restorecon[4810]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 06:54:18 crc restorecon[4810]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:54:18 crc restorecon[4810]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 06:54:18 crc restorecon[4810]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:54:18 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 
06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc 
restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:54:18 crc restorecon[4810]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 21 06:54:18 crc restorecon[4810]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 21 06:54:18 crc restorecon[4810]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0
Jan 21 06:54:19 crc kubenswrapper[4893]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 21 06:54:19 crc kubenswrapper[4893]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Jan 21 06:54:19 crc kubenswrapper[4893]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 21 06:54:19 crc kubenswrapper[4893]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 21 06:54:19 crc kubenswrapper[4893]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 21 06:54:19 crc kubenswrapper[4893]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.397242 4893 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400403 4893 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400420 4893 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400428 4893 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400432 4893 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400436 4893 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400440 4893 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400444 4893 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400448 4893 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400452 4893 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400457 4893 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400461 4893 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400465 4893 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400469 4893 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400473 4893 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400477 4893 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400481 4893 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400486 4893 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400490 4893 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400495 4893 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400500 4893 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400505 4893 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400509 4893 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400513 4893 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400517 4893 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400522 4893 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400526 4893 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400531 4893 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400535 4893 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400539 4893 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400544 4893 feature_gate.go:330] unrecognized feature gate: Example
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400548 4893 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400554 4893 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400558 4893 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400563 4893 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400567 4893 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400570 4893 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400574 4893 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400577 4893 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400581 4893 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400585 4893 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400588 4893 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400592 4893 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400596 4893 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400600 4893 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400604 4893 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400608 4893 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400612 4893 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400616 4893 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400620 4893 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400624 4893 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400627 4893 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400631 4893 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400635 4893 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400638 4893 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400642 4893 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400645 4893 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400649 4893 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400652 4893 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400656 4893 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400659 4893 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400662 4893 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400681 4893 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400685 4893 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400689 4893 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400692 4893 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400695 4893 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400699 4893 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400703 4893 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400708
4893 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400713 4893 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.400718 4893 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401016 4893 flags.go:64] FLAG: --address="0.0.0.0" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401026 4893 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401035 4893 flags.go:64] FLAG: --anonymous-auth="true" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401040 4893 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401045 4893 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401050 4893 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401057 4893 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401064 4893 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401068 4893 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401072 4893 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401076 4893 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401081 4893 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401085 4893 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401089 4893 flags.go:64] FLAG: --cgroup-root="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401093 4893 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401097 4893 flags.go:64] FLAG: --client-ca-file="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401101 4893 flags.go:64] FLAG: --cloud-config="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401105 4893 flags.go:64] FLAG: --cloud-provider="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401109 4893 flags.go:64] FLAG: --cluster-dns="[]" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401113 4893 flags.go:64] FLAG: --cluster-domain="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401118 4893 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401123 4893 flags.go:64] FLAG: --config-dir="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401127 4893 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401131 4893 flags.go:64] FLAG: --container-log-max-files="5" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401137 4893 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401141 4893 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 21 06:54:19 crc 
kubenswrapper[4893]: I0121 06:54:19.401146 4893 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401151 4893 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401155 4893 flags.go:64] FLAG: --contention-profiling="false" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401159 4893 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401163 4893 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401168 4893 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401173 4893 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401178 4893 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401182 4893 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401186 4893 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401190 4893 flags.go:64] FLAG: --enable-load-reader="false" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401195 4893 flags.go:64] FLAG: --enable-server="true" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401199 4893 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401206 4893 flags.go:64] FLAG: --event-burst="100" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401211 4893 flags.go:64] FLAG: --event-qps="50" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401215 4893 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401219 4893 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401223 4893 flags.go:64] FLAG: --eviction-hard="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401228 4893 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401232 4893 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401236 4893 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401241 4893 flags.go:64] FLAG: --eviction-soft="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401245 4893 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401249 4893 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401253 4893 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401257 4893 flags.go:64] FLAG: --experimental-mounter-path="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401261 4893 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401265 4893 flags.go:64] FLAG: --fail-swap-on="true" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401269 4893 flags.go:64] FLAG: --feature-gates="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401274 4893 flags.go:64] FLAG: --file-check-frequency="20s" Jan 21 06:54:19 crc 
kubenswrapper[4893]: I0121 06:54:19.401278 4893 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401282 4893 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401287 4893 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401291 4893 flags.go:64] FLAG: --healthz-port="10248" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401295 4893 flags.go:64] FLAG: --help="false" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401299 4893 flags.go:64] FLAG: --hostname-override="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401303 4893 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401307 4893 flags.go:64] FLAG: --http-check-frequency="20s" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401311 4893 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401315 4893 flags.go:64] FLAG: --image-credential-provider-config="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401319 4893 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401323 4893 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401328 4893 flags.go:64] FLAG: --image-service-endpoint="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401332 4893 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401336 4893 flags.go:64] FLAG: --kube-api-burst="100" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401341 4893 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401345 4893 flags.go:64] FLAG: --kube-api-qps="50" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401349 4893 flags.go:64] FLAG: --kube-reserved="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401353 4893 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401357 4893 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401361 4893 flags.go:64] FLAG: --kubelet-cgroups="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401365 4893 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401369 4893 flags.go:64] FLAG: --lock-file="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401373 4893 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401377 4893 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401381 4893 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401387 4893 flags.go:64] FLAG: --log-json-split-stream="false" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401391 4893 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401395 4893 flags.go:64] FLAG: --log-text-split-stream="false" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401399 4893 flags.go:64] FLAG: --logging-format="text" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 
06:54:19.401403 4893 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401407 4893 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401411 4893 flags.go:64] FLAG: --manifest-url="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401415 4893 flags.go:64] FLAG: --manifest-url-header="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401420 4893 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401424 4893 flags.go:64] FLAG: --max-open-files="1000000" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401429 4893 flags.go:64] FLAG: --max-pods="110" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401434 4893 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401438 4893 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401442 4893 flags.go:64] FLAG: --memory-manager-policy="None" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401447 4893 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401451 4893 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401455 4893 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401459 4893 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401468 4893 flags.go:64] FLAG: --node-status-max-images="50" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401473 4893 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401477 4893 flags.go:64] FLAG: --oom-score-adj="-999" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401481 4893 flags.go:64] FLAG: --pod-cidr="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401486 4893 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401492 4893 flags.go:64] FLAG: --pod-manifest-path="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401496 4893 flags.go:64] FLAG: --pod-max-pids="-1" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401500 4893 flags.go:64] FLAG: --pods-per-core="0" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401504 4893 flags.go:64] FLAG: --port="10250" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401508 4893 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401512 4893 flags.go:64] FLAG: --provider-id="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401516 4893 flags.go:64] FLAG: --qos-reserved="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401520 4893 flags.go:64] FLAG: --read-only-port="10255" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401524 4893 flags.go:64] FLAG: --register-node="true" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401528 4893 flags.go:64] FLAG: --register-schedulable="true" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 
06:54:19.401532 4893 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401543 4893 flags.go:64] FLAG: --registry-burst="10" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401547 4893 flags.go:64] FLAG: --registry-qps="5" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401551 4893 flags.go:64] FLAG: --reserved-cpus="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401554 4893 flags.go:64] FLAG: --reserved-memory="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401560 4893 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401564 4893 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401568 4893 flags.go:64] FLAG: --rotate-certificates="false" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401572 4893 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401576 4893 flags.go:64] FLAG: --runonce="false" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401580 4893 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401584 4893 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401588 4893 flags.go:64] FLAG: --seccomp-default="false" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401592 4893 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401596 4893 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401600 4893 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401604 4893 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401609 4893 flags.go:64] FLAG: --storage-driver-password="root" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401613 4893 flags.go:64] FLAG: --storage-driver-secure="false" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401617 4893 flags.go:64] FLAG: --storage-driver-table="stats" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401621 4893 flags.go:64] FLAG: --storage-driver-user="root" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401625 4893 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401630 4893 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401634 4893 flags.go:64] FLAG: --system-cgroups="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401638 4893 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401645 4893 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401649 4893 flags.go:64] FLAG: --tls-cert-file="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401653 4893 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401658 4893 flags.go:64] FLAG: --tls-min-version="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401662 4893 flags.go:64] FLAG: --tls-private-key-file="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 
06:54:19.401681 4893 flags.go:64] FLAG: --topology-manager-policy="none" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401686 4893 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401690 4893 flags.go:64] FLAG: --topology-manager-scope="container" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401694 4893 flags.go:64] FLAG: --v="2" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401700 4893 flags.go:64] FLAG: --version="false" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401705 4893 flags.go:64] FLAG: --vmodule="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401710 4893 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.401717 4893 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.401824 4893 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.401829 4893 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.401833 4893 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.401837 4893 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.401841 4893 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.401845 4893 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.401848 4893 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.401852 4893 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.401856 4893 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.401860 4893 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.401864 4893 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.401869 4893 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
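The FLAG dump above is uniform enough to machine-read: every entry has the shape flags.go:64] FLAG: --name="value", covering defaults and overrides alike (--node-ip="192.168.126.11", --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi", and so on). Turning it back into a dictionary makes it easy to diff the effective flag sets of two nodes; a minimal sketch, under the assumption that no value contains an embedded double quote, which holds for everything logged here:

    import re
    import sys

    # Matches dump entries such as: flags.go:64] FLAG: --max-pods="110"
    FLAG_RE = re.compile(r'FLAG: (--[\w-]+)="(.*?)"')

    def parse_flags(text: str) -> dict:
        """Collect the kubelet startup FLAG dump into {flag: value}."""
        return {m.group(1): m.group(2) for m in FLAG_RE.finditer(text)}

    flags = parse_flags(sys.stdin.read())
    print(len(flags), "flags parsed")
    print("node-ip =", flags.get("--node-ip"))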
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.401874 4893 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.401878 4893 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.401882 4893 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.401887 4893 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.401891 4893 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.401895 4893 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.401898 4893 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.401902 4893 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.401905 4893 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.401909 4893 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.401912 4893 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.401916 4893 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.401920 4893 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.401924 4893 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.401927 4893 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.401930 4893 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.401934 4893 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.401937 4893 feature_gate.go:330] unrecognized feature gate: Example Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.401941 4893 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.401946 4893 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.401949 4893 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.401953 4893 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.401956 4893 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.401960 4893 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.401963 4893 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.401967 4893 
feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.401970 4893 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.401974 4893 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.401977 4893 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.401982 4893 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.401986 4893 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.401990 4893 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.401994 4893 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.401997 4893 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.402001 4893 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.402005 4893 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.402009 4893 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.402012 4893 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.402016 4893 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.402020 4893 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.402024 4893 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.402028 4893 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.402031 4893 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.402035 4893 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.402039 4893 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.402042 4893 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.402045 4893 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.402050 4893 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.402053 4893 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.402057 4893 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.402061 4893 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.402066 4893 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.402069 4893 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.402073 4893 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.402077 4893 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.402082 4893 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.402086 4893 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.402090 4893 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.402093 4893 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.402104 4893 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.416335 4893 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.416376 4893 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416471 4893 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416483 4893 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416489 4893 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416494 4893 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416498 4893 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416503 4893 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416508 4893 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416513 4893 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416519 4893 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416525 4893 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416530 4893 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416534 4893 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416539 4893 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416545 4893 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
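Each pass over the gate list ends with the effective set rendered as a Go map literal, feature gates: {map[CloudDualStackNodeIPs:true ... VolumeAttributesClass:false]}. The unrecognized names warned about above appear to be cluster-level OpenShift gates that the kubelet's embedded Kubernetes gate registry does not know, which is why only the recognized entries survive into this map. The literal is simple to parse back into booleans; a sketch, assuming the single-line format shown:

    import re

    # Abridged sample of the summary line logged above.
    LINE = ("feature gates: {map[CloudDualStackNodeIPs:true "
            "DisableKubeletCloudCredentialProviders:true KMSv1:true "
            "ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}")

    def parse_gate_map(line: str) -> dict:
        """Turn Go's 'map[Name:true ...]' rendering into a Python dict of bools."""
        body = re.search(r"map\[(.*?)\]", line).group(1)
        return {name: val == "true"
                for name, val in (pair.split(":") for pair in body.split())}

    print(parse_gate_map(LINE))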
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416553 4893 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416559 4893 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416563 4893 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416568 4893 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416573 4893 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416578 4893 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416584 4893 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416589 4893 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416594 4893 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416599 4893 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416605 4893 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416609 4893 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416614 4893 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416618 4893 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416622 4893 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416627 4893 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416631 4893 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416635 4893 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416639 4893 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416644 4893 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416650 4893 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416656 4893 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416660 4893 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416684 4893 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416689 4893 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416694 4893 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416700 4893 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416704 4893 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416709 4893 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416713 4893 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416717 4893 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416721 4893 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416727 4893 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416731 4893 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416736 4893 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416740 4893 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416744 4893 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416748 4893 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416753 4893 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416757 4893 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416762 4893 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416767 4893 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416771 4893 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416775 4893 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416779 4893 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416784 4893 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416788 4893 feature_gate.go:330] 
unrecognized feature gate: BareMetalLoadBalancer Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416793 4893 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416797 4893 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416801 4893 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416806 4893 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416810 4893 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416815 4893 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416819 4893 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416823 4893 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416828 4893 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416832 4893 feature_gate.go:330] unrecognized feature gate: Example Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.416840 4893 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416987 4893 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.416997 4893 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417000 4893 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417004 4893 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417008 4893 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417012 4893 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417015 4893 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417020 4893 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417024 4893 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417028 4893 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417032 4893 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417035 4893 
feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417039 4893 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417042 4893 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417046 4893 feature_gate.go:330] unrecognized feature gate: Example Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417049 4893 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417053 4893 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417056 4893 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417060 4893 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417065 4893 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417068 4893 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417072 4893 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417075 4893 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417078 4893 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417082 4893 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417086 4893 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417089 4893 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417093 4893 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417097 4893 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417100 4893 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417104 4893 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417108 4893 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417112 4893 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417115 4893 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417119 4893 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417122 4893 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417126 4893 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417130 4893 
feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417134 4893 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417138 4893 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417141 4893 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417146 4893 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417150 4893 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417155 4893 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417159 4893 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417163 4893 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417168 4893 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417173 4893 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417177 4893 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417182 4893 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417187 4893 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417192 4893 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417197 4893 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417202 4893 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417207 4893 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417211 4893 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417217 4893 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417221 4893 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417226 4893 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417230 4893 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417234 4893 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417237 4893 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417241 4893 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417245 4893 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417248 4893 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417253 4893 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417257 4893 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417261 4893 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417265 4893 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417268 4893 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.417272 4893 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.417278 4893 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.417443 4893 server.go:940] "Client rotation is on, will bootstrap in background" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.420010 4893 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.420102 4893 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
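Client rotation then starts from the existing pair: the bootstrap check finds the current kubeconfig still valid and loads /var/lib/kubelet/pki/kubelet-client-current.pem. The expiration and rotation deadline logged next can be double-checked offline by reading the certificate's notAfter field; a small sketch, assuming the third-party cryptography package is installed and that the file holds the certificate and key concatenated (as the "cert/key pair" wording suggests):

    import re
    from cryptography import x509

    PEM = "/var/lib/kubelet/pki/kubelet-client-current.pem"

    data = open(PEM, "rb").read()
    # Pick out just the certificate block from the combined cert/key file.
    block = re.search(rb"-----BEGIN CERTIFICATE-----.*?-----END CERTIFICATE-----",
                      data, re.DOTALL).group(0)
    cert = x509.load_pem_x509_certificate(block)
    # cryptography>=42 exposes not_valid_after_utc; older releases: not_valid_after
    print("not after:", cert.not_valid_after_utc)

The first CSR that rotation posts right after this fails with connection refused against api-int.crc.testing:6443, presumably because the API server is not yet serving this early in boot.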
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.420642 4893 server.go:997] "Starting client certificate rotation"
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.420683 4893 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.420975 4893 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-21 17:17:32.493726746 +0000 UTC
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.421196 4893 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.428869 4893 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.430496 4893 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 21 06:54:19 crc kubenswrapper[4893]: E0121 06:54:19.430609 4893 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.246:6443: connect: connection refused" logger="UnhandledError"
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.437432 4893 log.go:25] "Validated CRI v1 runtime API"
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.469925 4893 log.go:25] "Validated CRI v1 image API"
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.471499 4893 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.474898 4893 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-21-06-45-23-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.474980 4893 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}]
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.493855 4893 manager.go:217] Machine: {Timestamp:2026-01-21 06:54:19.492281093 +0000 UTC m=+0.722627035 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:d58a57b5-ddc5-4868-b863-d910bc33033d BootID:15608b71-024b-43f0-a54d-3ca7890a281b Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:7d:f0:4f Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:7d:f0:4f Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:d3:75:d3 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:60:6e:b6 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:b0:43:29 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:e9:f5:91 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:62:6a:62 Speed:-1 Mtu:1496} {Name:ens7.44 MacAddress:52:54:00:eb:a3:93 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:32:74:1b:d4:c5:ce Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:22:ef:71:b6:e0:e5 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.494179 4893 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.494391 4893 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.499057 4893 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.499358 4893 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.499395 4893 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.499662 4893 topology_manager.go:138] "Creating topology manager with none policy"
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.499788 4893 container_manager_linux.go:303] "Creating device plugin manager"
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.499939 4893 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.499992 4893 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.500395 4893 state_mem.go:36] "Initialized new in-memory state store"
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.500535 4893 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.501339 4893 kubelet.go:418] "Attempting to sync node with API server"
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.501365 4893 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.501477 4893 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.501500 4893 kubelet.go:324] "Adding apiserver pod source"
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.501531 4893 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.503297 4893 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.503403 4893 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.246:6443: connect: connection refused
Jan 21 06:54:19 crc kubenswrapper[4893]: E0121 06:54:19.503483 4893 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.246:6443: connect: connection refused" logger="UnhandledError"
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.503642 4893 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.503808 4893 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.246:6443: connect: connection refused
Jan 21 06:54:19 crc kubenswrapper[4893]: E0121 06:54:19.503866 4893 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.246:6443: connect: connection refused" logger="UnhandledError"
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.509517 4893 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.510349 4893 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.510382 4893 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.510395 4893 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.510407 4893 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.510423 4893 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.510433 4893 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.510441 4893 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.510458 4893 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.510470 4893 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.510500 4893 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.510525 4893 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.510533 4893 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.510721 4893 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.511214 4893 server.go:1280] "Started kubelet"
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.511542 4893 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.246:6443: connect: connection refused
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.511592 4893 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.511719 4893 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 21 06:54:19 crc systemd[1]: Started Kubernetes Kubelet.
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.513300 4893 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 21 06:54:19 crc kubenswrapper[4893]: E0121 06:54:19.512947 4893 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.246:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188cac89567b6b62 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 06:54:19.511188322 +0000 UTC m=+0.741534214,LastTimestamp:2026-01-21 06:54:19.511188322 +0000 UTC m=+0.741534214,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.513584 4893 server.go:460] "Adding debug handlers to kubelet server"
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.513840 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.513889 4893 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.514151 4893 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.514173 4893 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.514281 4893 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 21 06:54:19 crc kubenswrapper[4893]: E0121 06:54:19.514519 4893 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.514816 4893 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.246:6443: connect: connection refused
Jan 21 06:54:19 crc kubenswrapper[4893]: E0121 06:54:19.514886 4893 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.246:6443: connect: connection refused" logger="UnhandledError"
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.513943 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 20:13:20.622614828 +0000 UTC
Jan 21 06:54:19 crc kubenswrapper[4893]: E0121 06:54:19.515594 4893 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="200ms"
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.516850 4893 factory.go:55] Registering systemd factory
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.516915 4893 factory.go:221] Registration of the systemd container factory successfully
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.517241 4893 factory.go:153] Registering CRI-O factory
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.517263 4893 factory.go:221] Registration of the crio container factory successfully
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.517369 4893 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.517777 4893 factory.go:103] Registering Raw factory
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.517818 4893 manager.go:1196] Started watching for new ooms in manager
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.531203 4893 manager.go:319] Starting recovery of all containers
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.537707 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.537787 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.537804 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.537819 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.537833 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.537846 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.537863 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.537878 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.537897 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.537912 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.537925 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.537938 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.537954 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.537971 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.537984 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.537999 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.538013 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.538027 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.538040 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.538054 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.538069 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.538084 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.538098 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.538114 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.538128 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.538142 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.538190 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.538207 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.538223 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.538238 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.538253 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.538275 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.538289 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.538388 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.538406 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.538448 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.538461 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.538474 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.538527 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.538544 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.538558 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.538571 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.538621 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.538636 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.538653 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.538690 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.538706 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.538719 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.538735 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.538749 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.538763 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.538779 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.538800 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.538818 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.538833 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.538847 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.538863 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.538878 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.538894 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.538909 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.538928 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.538943 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.538957 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.538970 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.538985 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.538999 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539012 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539027 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539040 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539054 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539068 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539082 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539095 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539109 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539123 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539135 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539152 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539166 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539180 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539194 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539208 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539224 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539238 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539252 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539265 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539278 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539290 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539303 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539316 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539328 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539342 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539357 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539373 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539386 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539400 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539414 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539429 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539445 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539459 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539473 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539488 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539503 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539520 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539542 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539566 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539585 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539603 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539621 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539640 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539656 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539693 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539719 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539735 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539754 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539771 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539786 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539801 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539817 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539831 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539847 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539862 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539878 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539896 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539911 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539936 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539953 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539966 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539981 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.539993 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.540007 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.540022 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.540036 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.540055 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.540069 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.540084 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.540098 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.540113 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.540127 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.540141 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.540154 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.540171 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod=""
podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.540185 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.540199 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.540213 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.540228 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.540241 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.540255 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.540269 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.540286 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.540300 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.540343 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.540359 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.540372 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.540386 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.540399 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.540412 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.540429 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.540443 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.540456 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.540470 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.540482 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.540496 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.540510 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.540525 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.540543 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.540559 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.540575 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.540587 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.540601 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.540616 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.540630 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.540644 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.540659 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.540692 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.540708 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.540722 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.540736 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.540749 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.540765 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.540779 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.540793 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.540809 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.540822 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.540858 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.540873 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.540894 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.540909 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.542563 4893 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.542619 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.542637 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.542653 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.542688 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.542703 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.542718 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.542732 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.542746 4893 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.542765 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.542781 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.542798 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.542811 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.542828 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.542842 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.542856 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.542869 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.542883 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.542899 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.542913 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.542926 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.542946 4893 reconstruct.go:97] "Volume reconstruction finished" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.542956 4893 reconciler.go:26] "Reconciler: start to sync state" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.549647 4893 manager.go:324] Recovery completed Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.559699 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.561166 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.561233 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.561243 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.562144 4893 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.562162 4893 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.562180 4893 state_mem.go:36] "Initialized new in-memory state store" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.564965 4893 policy_none.go:49] "None policy: Start" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.565789 4893 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.565826 4893 state_mem.go:35] "Initializing new in-memory state store" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.574215 4893 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.579473 4893 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.579583 4893 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.579615 4893 kubelet.go:2335] "Starting kubelet main sync loop" Jan 21 06:54:19 crc kubenswrapper[4893]: E0121 06:54:19.579703 4893 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 21 06:54:19 crc kubenswrapper[4893]: W0121 06:54:19.580272 4893 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.246:6443: connect: connection refused Jan 21 06:54:19 crc kubenswrapper[4893]: E0121 06:54:19.580312 4893 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.246:6443: connect: connection refused" logger="UnhandledError" Jan 21 06:54:19 crc kubenswrapper[4893]: E0121 06:54:19.615203 4893 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.631035 4893 manager.go:334] "Starting Device Plugin manager" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.631271 4893 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.631292 4893 server.go:79] "Starting device plugin registration server" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.631727 4893 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.631747 4893 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.631908 4893 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.632803 4893 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.632822 4893 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 21 06:54:19 crc kubenswrapper[4893]: E0121 06:54:19.640817 4893 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.680557 4893 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"] Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.680756 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.682478 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.682534 4893 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.682546 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.682838 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.683197 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.683286 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.687850 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.687895 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.687906 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.687948 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.688038 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.688069 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.688319 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.688895 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.688935 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.690657 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.690709 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.690724 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.690730 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.690735 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.690740 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.690867 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.691029 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.691077 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.692100 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.692227 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.692247 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.697114 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.697156 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.697167 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.697332 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.697610 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.697740 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.699024 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.699053 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.699064 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.699229 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.699259 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.699270 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.699279 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.699321 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.699989 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.700022 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.700035 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:19 crc kubenswrapper[4893]: E0121 06:54:19.716365 4893 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="400ms" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.732329 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.733432 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.733463 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.733473 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.733494 4893 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 06:54:19 crc kubenswrapper[4893]: E0121 06:54:19.733936 4893 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.246:6443: connect: connection refused" node="crc" 
Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.744782 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.744817 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.744848 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.744871 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.744892 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.744955 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.745001 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.745031 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.745052 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 
06:54:19.745116 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.745151 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.745172 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.745195 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.745212 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.745252 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.846152 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.846205 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.846222 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.846246 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.846265 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.846281 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.846295 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.846312 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.846329 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.846344 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.846388 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.846388 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.846418 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.846428 4893 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.846471 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.846403 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.846498 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.846514 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.846511 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.846561 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.846631 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.846627 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.846654 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 
06:54:19.846632 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.846664 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.846702 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.846739 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.846707 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.846773 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.846879 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.934706 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.936129 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.936164 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.936174 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:19 crc kubenswrapper[4893]: I0121 06:54:19.936198 4893 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 06:54:19 crc kubenswrapper[4893]: E0121 06:54:19.936704 4893 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.246:6443: connect: connection refused" node="crc" Jan 21 06:54:20 crc 
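Note on the registration failure above: it is a plain TCP connection refusal against api-int.crc.testing:6443, because at this point the kubelet is itself starting the static control-plane pods, so the API server it wants to register with is not yet listening. A minimal sketch of the same connectivity check, reusing the endpoint from the log (not the kubelet's own code):

```go
// Probe the apiserver endpoint the kubelet is failing against.
// Endpoint taken from the log above; run from a host with that route.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "api-int.crc.testing:6443", 2*time.Second)
	if err != nil {
		// While the static kube-apiserver pod is still starting, this
		// prints the same "connect: connection refused" seen in the log.
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("connected:", conn.RemoteAddr())
}
```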
Jan 21 06:54:20 crc kubenswrapper[4893]: I0121 06:54:20.018343 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 06:54:20 crc kubenswrapper[4893]: I0121 06:54:20.035073 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 06:54:20 crc kubenswrapper[4893]: I0121 06:54:20.040806 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 21 06:54:20 crc kubenswrapper[4893]: W0121 06:54:20.054500 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-3afa5342a65f018528cccb5d4322275f2f7a0231b18fb55c97ecf1fcaefa366a WatchSource:0}: Error finding container 3afa5342a65f018528cccb5d4322275f2f7a0231b18fb55c97ecf1fcaefa366a: Status 404 returned error can't find the container with id 3afa5342a65f018528cccb5d4322275f2f7a0231b18fb55c97ecf1fcaefa366a
Jan 21 06:54:20 crc kubenswrapper[4893]: I0121 06:54:20.058176 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 21 06:54:20 crc kubenswrapper[4893]: W0121 06:54:20.058559 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-9fa1905e1cb479d9bd98b0a49bbfecd442b64644ec010b505397c93d8d3ce381 WatchSource:0}: Error finding container 9fa1905e1cb479d9bd98b0a49bbfecd442b64644ec010b505397c93d8d3ce381: Status 404 returned error can't find the container with id 9fa1905e1cb479d9bd98b0a49bbfecd442b64644ec010b505397c93d8d3ce381
Jan 21 06:54:20 crc kubenswrapper[4893]: I0121 06:54:20.063688 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 21 06:54:20 crc kubenswrapper[4893]: E0121 06:54:20.117591 4893 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="800ms"
Jan 21 06:54:20 crc kubenswrapper[4893]: W0121 06:54:20.322448 4893 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.246:6443: connect: connection refused
Jan 21 06:54:20 crc kubenswrapper[4893]: E0121 06:54:20.322547 4893 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.246:6443: connect: connection refused" logger="UnhandledError"
Jan 21 06:54:20 crc kubenswrapper[4893]: I0121 06:54:20.337296 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 06:54:20 crc kubenswrapper[4893]: I0121 06:54:20.339447 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:54:20 crc kubenswrapper[4893]: I0121 06:54:20.339490 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:54:20 crc kubenswrapper[4893]: I0121 06:54:20.339499 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:54:20 crc kubenswrapper[4893]: I0121 06:54:20.339524 4893 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 21 06:54:20 crc kubenswrapper[4893]: E0121 06:54:20.340021 4893 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.246:6443: connect: connection refused" node="crc"
Jan 21 06:54:20 crc kubenswrapper[4893]: W0121 06:54:20.467067 4893 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.246:6443: connect: connection refused
Jan 21 06:54:20 crc kubenswrapper[4893]: E0121 06:54:20.467175 4893 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.246:6443: connect: connection refused" logger="UnhandledError"
Jan 21 06:54:20 crc kubenswrapper[4893]: W0121 06:54:20.468322 4893 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.246:6443: connect: connection refused
Jan 21 06:54:20 crc kubenswrapper[4893]: E0121 06:54:20.468398 4893 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.246:6443: connect: connection refused" logger="UnhandledError"
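Every kubenswrapper entry above carries the standard klog header: a severity letter (I/W/E), MMDD date, wall-clock time with microseconds, the PID, and the source file:line, followed by the message. A minimal sketch of pulling these fields apart for log analysis, assuming the format stays exactly as shown above:

```go
// Split a klog-style line into its header fields and message.
// This is a reader-side helper for the log above, not kubelet code.
package main

import (
	"fmt"
	"regexp"
)

// severity, MMDD, HH:MM:SS.micros, PID, file.go, line, message
var klogHeader = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([\w.]+\.go):(\d+)\] (.*)$`)

func main() {
	line := `E0121 06:54:20.117591 4893 controller.go:145] "Failed to ensure lease exists, will retry" interval="800ms"`
	m := klogHeader.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a klog line")
		return
	}
	fmt.Printf("severity=%s date(MMDD)=%s time=%s pid=%s source=%s:%s\nmsg=%s\n",
		m[1], m[2], m[3], m[4], m[5], m[6], m[7])
}
```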
\"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.246:6443: connect: connection refused" logger="UnhandledError" Jan 21 06:54:20 crc kubenswrapper[4893]: I0121 06:54:20.512356 4893 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.246:6443: connect: connection refused Jan 21 06:54:20 crc kubenswrapper[4893]: I0121 06:54:20.520835 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 00:04:11.294109469 +0000 UTC Jan 21 06:54:20 crc kubenswrapper[4893]: I0121 06:54:20.599302 4893 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="9235f82557cdaf86d385d1660b19da09a65edcdffa915b36f633d597599f05ba" exitCode=0 Jan 21 06:54:20 crc kubenswrapper[4893]: I0121 06:54:20.599400 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"9235f82557cdaf86d385d1660b19da09a65edcdffa915b36f633d597599f05ba"} Jan 21 06:54:20 crc kubenswrapper[4893]: I0121 06:54:20.599541 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"fc1ebc8184efab47a4e92f37fd70a1fb2cb45958454ffe48c32b66720a8d27b6"} Jan 21 06:54:20 crc kubenswrapper[4893]: I0121 06:54:20.599631 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:54:20 crc kubenswrapper[4893]: I0121 06:54:20.600734 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:20 crc kubenswrapper[4893]: I0121 06:54:20.600771 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:20 crc kubenswrapper[4893]: I0121 06:54:20.600789 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:20 crc kubenswrapper[4893]: I0121 06:54:20.601026 4893 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="a29f211094b3236070df769a82ecc2ff2b03c7a44dc4af0484e4ca3b35037621" exitCode=0 Jan 21 06:54:20 crc kubenswrapper[4893]: I0121 06:54:20.601137 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"a29f211094b3236070df769a82ecc2ff2b03c7a44dc4af0484e4ca3b35037621"} Jan 21 06:54:20 crc kubenswrapper[4893]: I0121 06:54:20.601182 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"54aa15ec41d7e01814b3f7682a64ed5f125e579f81f0861f28fb60f97d5a2924"} Jan 21 06:54:20 crc kubenswrapper[4893]: I0121 06:54:20.601273 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:54:20 crc kubenswrapper[4893]: I0121 06:54:20.602100 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 06:54:20 crc kubenswrapper[4893]: I0121 06:54:20.602145 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:20 crc kubenswrapper[4893]: I0121 06:54:20.602158 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:20 crc kubenswrapper[4893]: I0121 06:54:20.603516 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"90e698ff120a5858fa787a65c1bdaa3966dcb8974df9cbca40470f6ec58bca5d"} Jan 21 06:54:20 crc kubenswrapper[4893]: I0121 06:54:20.603548 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9fa1905e1cb479d9bd98b0a49bbfecd442b64644ec010b505397c93d8d3ce381"} Jan 21 06:54:20 crc kubenswrapper[4893]: I0121 06:54:20.605212 4893 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596" exitCode=0 Jan 21 06:54:20 crc kubenswrapper[4893]: I0121 06:54:20.605290 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596"} Jan 21 06:54:20 crc kubenswrapper[4893]: I0121 06:54:20.605339 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3afa5342a65f018528cccb5d4322275f2f7a0231b18fb55c97ecf1fcaefa366a"} Jan 21 06:54:20 crc kubenswrapper[4893]: I0121 06:54:20.605439 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:54:20 crc kubenswrapper[4893]: I0121 06:54:20.606229 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:20 crc kubenswrapper[4893]: I0121 06:54:20.606269 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:20 crc kubenswrapper[4893]: I0121 06:54:20.606286 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:20 crc kubenswrapper[4893]: I0121 06:54:20.606834 4893 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="403d1c43a2661de85c20ffce1e46a6096188cf01de52085a8af54d6c34c81442" exitCode=0 Jan 21 06:54:20 crc kubenswrapper[4893]: I0121 06:54:20.606859 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"403d1c43a2661de85c20ffce1e46a6096188cf01de52085a8af54d6c34c81442"} Jan 21 06:54:20 crc kubenswrapper[4893]: I0121 06:54:20.606899 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3a93211423b48e31c88e5746837358b59609c350b11b25735cf1cdc79ea0c914"} Jan 21 06:54:20 crc kubenswrapper[4893]: I0121 06:54:20.607095 4893 kubelet_node_status.go:401] "Setting 
node annotation to enable volume controller attach/detach" Jan 21 06:54:20 crc kubenswrapper[4893]: I0121 06:54:20.608304 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:20 crc kubenswrapper[4893]: I0121 06:54:20.608335 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:20 crc kubenswrapper[4893]: I0121 06:54:20.608344 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:20 crc kubenswrapper[4893]: I0121 06:54:20.611195 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:54:20 crc kubenswrapper[4893]: I0121 06:54:20.612283 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:20 crc kubenswrapper[4893]: I0121 06:54:20.612474 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:20 crc kubenswrapper[4893]: I0121 06:54:20.612576 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:20 crc kubenswrapper[4893]: W0121 06:54:20.715102 4893 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.246:6443: connect: connection refused Jan 21 06:54:20 crc kubenswrapper[4893]: E0121 06:54:20.715195 4893 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.246:6443: connect: connection refused" logger="UnhandledError" Jan 21 06:54:20 crc kubenswrapper[4893]: E0121 06:54:20.920583 4893 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="1.6s" Jan 21 06:54:21 crc kubenswrapper[4893]: I0121 06:54:21.181517 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:54:21 crc kubenswrapper[4893]: I0121 06:54:21.182888 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:21 crc kubenswrapper[4893]: I0121 06:54:21.182924 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:21 crc kubenswrapper[4893]: I0121 06:54:21.182933 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:21 crc kubenswrapper[4893]: I0121 06:54:21.182955 4893 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 06:54:21 crc kubenswrapper[4893]: E0121 06:54:21.183227 4893 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.246:6443: connect: connection refused" node="crc" Jan 21 06:54:21 crc kubenswrapper[4893]: I0121 06:54:21.512540 4893 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: 
Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.246:6443: connect: connection refused Jan 21 06:54:21 crc kubenswrapper[4893]: I0121 06:54:21.521255 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 00:37:08.989915796 +0000 UTC Jan 21 06:54:21 crc kubenswrapper[4893]: I0121 06:54:21.533449 4893 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 21 06:54:21 crc kubenswrapper[4893]: E0121 06:54:21.534896 4893 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.246:6443: connect: connection refused" logger="UnhandledError" Jan 21 06:54:21 crc kubenswrapper[4893]: I0121 06:54:21.682103 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"0f07ce83655f22f9db0e6743147fdbde2adc1e02a0b8010cd04f6007f986cf63"} Jan 21 06:54:21 crc kubenswrapper[4893]: I0121 06:54:21.682163 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a132569453fd7635474ec4fcb0eab4aad6349e34c6d9e3bc92182433a587bfd3"} Jan 21 06:54:21 crc kubenswrapper[4893]: I0121 06:54:21.682182 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"93c89f8db799b46df74cb753f3f21321f420d4fe1976b120ea4aa2853fbf7047"} Jan 21 06:54:21 crc kubenswrapper[4893]: I0121 06:54:21.682329 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:54:21 crc kubenswrapper[4893]: I0121 06:54:21.690986 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:21 crc kubenswrapper[4893]: I0121 06:54:21.691028 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:21 crc kubenswrapper[4893]: I0121 06:54:21.691038 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:21 crc kubenswrapper[4893]: I0121 06:54:21.691171 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:54:21 crc kubenswrapper[4893]: I0121 06:54:21.691456 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c30521319002f52220ec6c1e4c92862f5a81e1dcace01f4a4474e3a2441b955c"} Jan 21 06:54:21 crc kubenswrapper[4893]: I0121 06:54:21.691499 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"553f6c2b8ff41184065bcf707d657326891027d0c5b8390ce50f53cdfa654d2d"} Jan 21 06:54:21 crc kubenswrapper[4893]: I0121 
06:54:21.691522 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fa06c3d835def34e52c4a9b4b87d9dc8998cdefbb5eaf7c8046bf263857ef8a4"} Jan 21 06:54:21 crc kubenswrapper[4893]: I0121 06:54:21.692165 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:21 crc kubenswrapper[4893]: I0121 06:54:21.692192 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:21 crc kubenswrapper[4893]: I0121 06:54:21.692201 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:21 crc kubenswrapper[4893]: I0121 06:54:21.693867 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1f2a508699e746bc42337b9e10d1cb94b36eb53292a5ca91de2e8f03eb8f671c"} Jan 21 06:54:21 crc kubenswrapper[4893]: I0121 06:54:21.693931 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cf06f9b5e844685f04ee12cbf239e285f1597f6a3c6444a4160596392905c4a9"} Jan 21 06:54:21 crc kubenswrapper[4893]: I0121 06:54:21.693945 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"46a82b561fe0d124a785d8417b0f810757464a5ccc70c032a46eb0a4ad932939"} Jan 21 06:54:21 crc kubenswrapper[4893]: I0121 06:54:21.705445 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c2e997ed8bd5b7fab0fe1b66da5976a9672378d05f1f9a65506bcab990d83cb2"} Jan 21 06:54:21 crc kubenswrapper[4893]: I0121 06:54:21.705697 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:54:21 crc kubenswrapper[4893]: I0121 06:54:21.708226 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:21 crc kubenswrapper[4893]: I0121 06:54:21.708409 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:21 crc kubenswrapper[4893]: I0121 06:54:21.708426 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:21 crc kubenswrapper[4893]: I0121 06:54:21.702885 4893 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="c2e997ed8bd5b7fab0fe1b66da5976a9672378d05f1f9a65506bcab990d83cb2" exitCode=0 Jan 21 06:54:21 crc kubenswrapper[4893]: I0121 06:54:21.712800 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"c820663d2de329853dbd3b67c91a5491f9000bc6f1f9cd5143be1c50d06279aa"} Jan 21 06:54:21 crc kubenswrapper[4893]: I0121 06:54:21.713003 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:54:21 crc kubenswrapper[4893]: I0121 06:54:21.714366 4893 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:21 crc kubenswrapper[4893]: I0121 06:54:21.714394 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:21 crc kubenswrapper[4893]: I0121 06:54:21.714406 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:21 crc kubenswrapper[4893]: I0121 06:54:21.722738 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 06:54:22 crc kubenswrapper[4893]: W0121 06:54:22.088376 4893 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.246:6443: connect: connection refused Jan 21 06:54:22 crc kubenswrapper[4893]: E0121 06:54:22.088458 4893 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.246:6443: connect: connection refused" logger="UnhandledError" Jan 21 06:54:22 crc kubenswrapper[4893]: I0121 06:54:22.513088 4893 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.246:6443: connect: connection refused Jan 21 06:54:22 crc kubenswrapper[4893]: I0121 06:54:22.521407 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 22:18:13.459544418 +0000 UTC Jan 21 06:54:22 crc kubenswrapper[4893]: E0121 06:54:22.522061 4893 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="3.2s" Jan 21 06:54:22 crc kubenswrapper[4893]: I0121 06:54:22.719050 4893 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="5b418b5f8bc439497980e6ea11ae5c04681ac90a308b9b6ab53239f5be683642" exitCode=0 Jan 21 06:54:22 crc kubenswrapper[4893]: I0121 06:54:22.719131 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"5b418b5f8bc439497980e6ea11ae5c04681ac90a308b9b6ab53239f5be683642"} Jan 21 06:54:22 crc kubenswrapper[4893]: I0121 06:54:22.719314 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:54:22 crc kubenswrapper[4893]: I0121 06:54:22.720844 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:22 crc kubenswrapper[4893]: I0121 06:54:22.720881 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:22 crc kubenswrapper[4893]: I0121 06:54:22.720895 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:22 crc kubenswrapper[4893]: I0121 
Jan 21 06:54:22 crc kubenswrapper[4893]: I0121 06:54:22.724433 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"bd2fb23bc37ebbcfd3d2d178127a7b67f41facf5aea30579cb77d2eabe943ab2"}
Jan 21 06:54:22 crc kubenswrapper[4893]: I0121 06:54:22.724510 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"baf70c5621061fc94a32901eb6f15a0d15b2ceba333d27cf88624bf9aa4ebe82"}
Jan 21 06:54:22 crc kubenswrapper[4893]: I0121 06:54:22.724522 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 06:54:22 crc kubenswrapper[4893]: I0121 06:54:22.724453 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 06:54:22 crc kubenswrapper[4893]: I0121 06:54:22.725565 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:54:22 crc kubenswrapper[4893]: I0121 06:54:22.725596 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:54:22 crc kubenswrapper[4893]: I0121 06:54:22.725605 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:54:22 crc kubenswrapper[4893]: I0121 06:54:22.725748 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:54:22 crc kubenswrapper[4893]: I0121 06:54:22.725793 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:54:22 crc kubenswrapper[4893]: I0121 06:54:22.725818 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:54:22 crc kubenswrapper[4893]: I0121 06:54:22.783522 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 06:54:22 crc kubenswrapper[4893]: I0121 06:54:22.784696 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:54:22 crc kubenswrapper[4893]: I0121 06:54:22.784726 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:54:22 crc kubenswrapper[4893]: I0121 06:54:22.784736 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:54:22 crc kubenswrapper[4893]: I0121 06:54:22.784768 4893 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 21 06:54:23 crc kubenswrapper[4893]: I0121 06:54:23.013658 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 21 06:54:23 crc kubenswrapper[4893]: I0121 06:54:23.013855 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 06:54:23 crc kubenswrapper[4893]: I0121 06:54:23.015220 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:54:23 crc kubenswrapper[4893]: I0121 06:54:23.015248 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:54:23 crc kubenswrapper[4893]: I0121 06:54:23.015258 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:54:23 crc kubenswrapper[4893]: I0121 06:54:23.522084 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 19:13:11.582222069 +0000 UTC
Jan 21 06:54:23 crc kubenswrapper[4893]: I0121 06:54:23.729708 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ce09b5325f74596cc8b1629a3afcd408ced9bc43424e5772d05233750fe15769"}
Jan 21 06:54:23 crc kubenswrapper[4893]: I0121 06:54:23.729772 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"53cf5bdb67c6a6bcdc0aee492d5fe155566ebeba01ae5828b2be7ea544cca3c7"}
Jan 21 06:54:23 crc kubenswrapper[4893]: I0121 06:54:23.729802 4893 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 21 06:54:23 crc kubenswrapper[4893]: I0121 06:54:23.729845 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 06:54:23 crc kubenswrapper[4893]: I0121 06:54:23.729851 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 06:54:23 crc kubenswrapper[4893]: I0121 06:54:23.731100 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:54:23 crc kubenswrapper[4893]: I0121 06:54:23.731136 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:54:23 crc kubenswrapper[4893]: I0121 06:54:23.731148 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:54:23 crc kubenswrapper[4893]: I0121 06:54:23.731177 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:54:23 crc kubenswrapper[4893]: I0121 06:54:23.731218 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:54:23 crc kubenswrapper[4893]: I0121 06:54:23.731226 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:54:23 crc kubenswrapper[4893]: I0121 06:54:23.741373 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 06:54:24 crc kubenswrapper[4893]: I0121 06:54:24.522276 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 18:27:50.352653753 +0000 UTC
Jan 21 06:54:24 crc kubenswrapper[4893]: I0121 06:54:24.545043 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 06:54:24 crc kubenswrapper[4893]: I0121 06:54:24.737478 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"62ee9b11cc86e2988a8873b935f1e70f16ca4f2dd2f03ed816fc024ed501e847"}
Jan 21 06:54:24 crc kubenswrapper[4893]: I0121 06:54:24.737544 4893 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 21 06:54:24 crc kubenswrapper[4893]: I0121 06:54:24.737554 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4bf309f13845c853a8a44995f5e462f7915eeadb8cab6b65939166c12605dca3"}
Jan 21 06:54:24 crc kubenswrapper[4893]: I0121 06:54:24.737608 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9ef05ddbf37e3c5b8b3a4395442288eacc4a2808ba6d9aee69f290372bf463a9"}
Jan 21 06:54:24 crc kubenswrapper[4893]: I0121 06:54:24.737622 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 06:54:24 crc kubenswrapper[4893]: I0121 06:54:24.737640 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 06:54:24 crc kubenswrapper[4893]: I0121 06:54:24.737612 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 06:54:24 crc kubenswrapper[4893]: I0121 06:54:24.738975 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:54:24 crc kubenswrapper[4893]: I0121 06:54:24.739020 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:54:24 crc kubenswrapper[4893]: I0121 06:54:24.739034 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:54:24 crc kubenswrapper[4893]: I0121 06:54:24.739144 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:54:24 crc kubenswrapper[4893]: I0121 06:54:24.739169 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:54:24 crc kubenswrapper[4893]: I0121 06:54:24.739177 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:54:24 crc kubenswrapper[4893]: I0121 06:54:24.739368 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:54:24 crc kubenswrapper[4893]: I0121 06:54:24.739415 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:54:24 crc kubenswrapper[4893]: I0121 06:54:24.739434 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:54:25 crc kubenswrapper[4893]: I0121 06:54:25.505999 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 06:54:25 crc kubenswrapper[4893]: I0121 06:54:25.516116 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 06:54:25 crc kubenswrapper[4893]: I0121 06:54:25.522765 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 11:15:46.246987034 +0000 UTC
Jan 21 06:54:25 crc kubenswrapper[4893]: I0121 06:54:25.739637 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 06:54:25 crc kubenswrapper[4893]: I0121 06:54:25.739637 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 06:54:25 crc kubenswrapper[4893]: I0121 06:54:25.740590 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:54:25 crc kubenswrapper[4893]: I0121 06:54:25.740618 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:54:25 crc kubenswrapper[4893]: I0121 06:54:25.740625 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:54:25 crc kubenswrapper[4893]: I0121 06:54:25.741227 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:54:25 crc kubenswrapper[4893]: I0121 06:54:25.741246 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:54:25 crc kubenswrapper[4893]: I0121 06:54:25.741254 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:54:25 crc kubenswrapper[4893]: I0121 06:54:25.869110 4893 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 21 06:54:26 crc kubenswrapper[4893]: I0121 06:54:26.307571 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Jan 21 06:54:26 crc kubenswrapper[4893]: I0121 06:54:26.516389 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 06:54:26 crc kubenswrapper[4893]: I0121 06:54:26.516579 4893 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 21 06:54:26 crc kubenswrapper[4893]: I0121 06:54:26.516658 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 06:54:26 crc kubenswrapper[4893]: I0121 06:54:26.519081 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:54:26 crc kubenswrapper[4893]: I0121 06:54:26.519167 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:54:26 crc kubenswrapper[4893]: I0121 06:54:26.519208 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:54:26 crc kubenswrapper[4893]: I0121 06:54:26.522870 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 14:36:04.807067734 +0000 UTC
Jan 21 06:54:26 crc kubenswrapper[4893]: I0121 06:54:26.742144 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 06:54:26 crc kubenswrapper[4893]: I0121 06:54:26.742200 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 06:54:26 crc kubenswrapper[4893]: I0121 06:54:26.743110 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:54:26 crc kubenswrapper[4893]: I0121 06:54:26.743154 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:54:26 crc kubenswrapper[4893]: I0121 06:54:26.743168 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:54:26 crc kubenswrapper[4893]: I0121 06:54:26.743524 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:54:26 crc kubenswrapper[4893]: I0121 06:54:26.743544 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:54:26 crc kubenswrapper[4893]: I0121 06:54:26.743553 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:54:27 crc kubenswrapper[4893]: I0121 06:54:27.523872 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 16:26:05.824750561 +0000 UTC
Jan 21 06:54:27 crc kubenswrapper[4893]: I0121 06:54:27.555353 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 06:54:27 crc kubenswrapper[4893]: I0121 06:54:27.555602 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 06:54:27 crc kubenswrapper[4893]: I0121 06:54:27.557252 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:54:27 crc kubenswrapper[4893]: I0121 06:54:27.557308 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:54:27 crc kubenswrapper[4893]: I0121 06:54:27.557321 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:54:28 crc kubenswrapper[4893]: I0121 06:54:28.524510 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 15:30:55.649427369 +0000 UTC
Jan 21 06:54:28 crc kubenswrapper[4893]: I0121 06:54:28.571774 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Jan 21 06:54:28 crc kubenswrapper[4893]: I0121 06:54:28.572015 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 06:54:28 crc kubenswrapper[4893]: I0121 06:54:28.573433 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:54:28 crc kubenswrapper[4893]: I0121 06:54:28.573528 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:54:28 crc kubenswrapper[4893]: I0121 06:54:28.573542 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:54:29 crc kubenswrapper[4893]: I0121 06:54:29.525430 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 03:51:18.110618408 +0000 UTC
Jan 21 06:54:29 crc kubenswrapper[4893]: E0121 06:54:29.641002 4893 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 21 06:54:30 crc kubenswrapper[4893]: I0121 06:54:30.526260 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 05:02:41.730218648 +0000 UTC
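Note on the certificate_manager lines above: every entry reports the same expiration (2026-02-24 05:53:03 UTC) but a different rotation deadline, scattered between November 2025 and mid-January 2026. client-go's certificate manager recomputes the deadline each pass as a random point in roughly the final 70-90% of the certificate's lifetime, which matches that spread for a one-year certificate. A sketch of that computation; the notBefore date is an assumption (the log only shows the expiration):

```go
// Recompute a jittered rotation deadline the way client-go's certificate
// manager does: a random point in [70%, 90%) of the cert's validity window.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	// Assumed one-year validity ending at the expiration from the log.
	notBefore := time.Date(2025, 2, 24, 5, 53, 3, 0, time.UTC)
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)
	for i := 0; i < 3; i++ {
		// Each call lands somewhere in Nov 2025 .. Jan 2026, as in the log.
		fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
	}
}
```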
Jan 21 06:54:31 crc kubenswrapper[4893]: I0121 06:54:31.526804 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 09:37:08.776557252 +0000 UTC
Jan 21 06:54:31 crc kubenswrapper[4893]: I0121 06:54:31.571209 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 06:54:31 crc kubenswrapper[4893]: I0121 06:54:31.571522 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 06:54:31 crc kubenswrapper[4893]: I0121 06:54:31.578812 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 06:54:31 crc kubenswrapper[4893]: I0121 06:54:31.637258 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:54:31 crc kubenswrapper[4893]: I0121 06:54:31.637339 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:54:31 crc kubenswrapper[4893]: I0121 06:54:31.637361 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:54:31 crc kubenswrapper[4893]: I0121 06:54:31.757745 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 06:54:31 crc kubenswrapper[4893]: I0121 06:54:31.759191 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:54:31 crc kubenswrapper[4893]: I0121 06:54:31.759254 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:54:31 crc kubenswrapper[4893]: I0121 06:54:31.759267 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:54:32 crc kubenswrapper[4893]: I0121 06:54:32.527309 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 15:09:31.46831282 +0000 UTC
Jan 21 06:54:32 crc kubenswrapper[4893]: E0121 06:54:32.786485 4893 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc"
Jan 21 06:54:32 crc kubenswrapper[4893]: W0121 06:54:32.884906 4893 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout
Jan 21 06:54:32 crc kubenswrapper[4893]: I0121 06:54:32.885063 4893 trace.go:236] Trace[1404158857]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 06:54:22.883) (total time: 10001ms):
Jan 21 06:54:32 crc kubenswrapper[4893]: Trace[1404158857]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (06:54:32.884)
Jan 21 06:54:32 crc kubenswrapper[4893]: Trace[1404158857]: [10.001287803s] [10.001287803s] END
Jan 21 06:54:32 crc kubenswrapper[4893]: E0121 06:54:32.885118 4893 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Jan 21 06:54:32 crc kubenswrapper[4893]: W0121 06:54:32.932932 4893 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout
Jan 21 06:54:32 crc kubenswrapper[4893]: I0121 06:54:32.933033 4893 trace.go:236] Trace[914934461]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 06:54:22.931) (total time: 10001ms):
Jan 21 06:54:32 crc kubenswrapper[4893]: Trace[914934461]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (06:54:32.932)
Jan 21 06:54:32 crc kubenswrapper[4893]: Trace[914934461]: [10.001725946s] [10.001725946s] END
Jan 21 06:54:32 crc kubenswrapper[4893]: E0121 06:54:32.933057 4893 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Jan 21 06:54:33 crc kubenswrapper[4893]: W0121 06:54:33.378577 4893 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout
Jan 21 06:54:33 crc kubenswrapper[4893]: I0121 06:54:33.378689 4893 trace.go:236] Trace[980462319]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 06:54:23.376) (total time: 10001ms):
Jan 21 06:54:33 crc kubenswrapper[4893]: Trace[980462319]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (06:54:33.378)
Jan 21 06:54:33 crc kubenswrapper[4893]: Trace[980462319]: [10.001810588s] [10.001810588s] END
Jan 21 06:54:33 crc kubenswrapper[4893]: E0121 06:54:33.378726 4893 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Jan 21 06:54:33 crc kubenswrapper[4893]: I0121 06:54:33.514341 4893 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Jan 21 06:54:33 crc kubenswrapper[4893]: I0121 06:54:33.528426 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 01:38:26.173720088 +0000 UTC
Jan 21 06:54:33 crc kubenswrapper[4893]: I0121 06:54:33.767413 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 21 06:54:33 crc kubenswrapper[4893]: I0121 06:54:33.769394 4893 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="bd2fb23bc37ebbcfd3d2d178127a7b67f41facf5aea30579cb77d2eabe943ab2" exitCode=255
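Note the error transition above: earlier requests failed with "connect: connection refused", while from 06:54:32 they fail with "net/http: TLS handshake timeout" after roughly 10 seconds. The apiserver socket now accepts TCP, but the TLS handshake does not complete within the client's handshake budget; Go's default transport uses a 10s TLS handshake timeout, consistent with the ~10001ms reflector traces. A minimal sketch of a client with the same timeout behavior, reusing the endpoint from the log:

```go
// HTTP client whose TLS handshake budget matches the ~10s traces above.
// Not kubelet code; a sketch for reproducing the error classification.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{
			TLSHandshakeTimeout: 10 * time.Second, // Go's default transport value
			TLSClientConfig:     &tls.Config{InsecureSkipVerify: true}, // demo only
		},
		Timeout: 15 * time.Second,
	}
	resp, err := client.Get("https://api-int.crc.testing:6443/healthz")
	if err != nil {
		// Prints e.g. "net/http: TLS handshake timeout" while the
		// apiserver accepts TCP but cannot complete TLS in time.
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```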
4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"bd2fb23bc37ebbcfd3d2d178127a7b67f41facf5aea30579cb77d2eabe943ab2"} Jan 21 06:54:33 crc kubenswrapper[4893]: I0121 06:54:33.769791 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:54:33 crc kubenswrapper[4893]: I0121 06:54:33.770880 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:33 crc kubenswrapper[4893]: I0121 06:54:33.770948 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:33 crc kubenswrapper[4893]: I0121 06:54:33.770965 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:33 crc kubenswrapper[4893]: I0121 06:54:33.771861 4893 scope.go:117] "RemoveContainer" containerID="bd2fb23bc37ebbcfd3d2d178127a7b67f41facf5aea30579cb77d2eabe943ab2" Jan 21 06:54:34 crc kubenswrapper[4893]: I0121 06:54:34.528816 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 01:50:03.960023251 +0000 UTC Jan 21 06:54:34 crc kubenswrapper[4893]: I0121 06:54:34.571339 4893 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 06:54:34 crc kubenswrapper[4893]: I0121 06:54:34.571465 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 06:54:34 crc kubenswrapper[4893]: I0121 06:54:34.774604 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 21 06:54:34 crc kubenswrapper[4893]: I0121 06:54:34.776409 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711"} Jan 21 06:54:34 crc kubenswrapper[4893]: I0121 06:54:34.776685 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:54:34 crc kubenswrapper[4893]: I0121 06:54:34.777691 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:34 crc kubenswrapper[4893]: I0121 06:54:34.777742 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:34 crc kubenswrapper[4893]: I0121 06:54:34.777756 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:35 crc kubenswrapper[4893]: I0121 06:54:35.346553 4893 patch_prober.go:28] 
interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 21 06:54:35 crc kubenswrapper[4893]: I0121 06:54:35.346658 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 21 06:54:35 crc kubenswrapper[4893]: I0121 06:54:35.354899 4893 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 21 06:54:35 crc kubenswrapper[4893]: I0121 06:54:35.354975 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 21 06:54:35 crc kubenswrapper[4893]: I0121 06:54:35.530050 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 14:18:49.719543427 +0000 UTC Jan 21 06:54:35 crc kubenswrapper[4893]: I0121 06:54:35.987601 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:54:35 crc kubenswrapper[4893]: I0121 06:54:35.989502 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:35 crc kubenswrapper[4893]: I0121 06:54:35.989596 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:35 crc kubenswrapper[4893]: I0121 06:54:35.989614 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:35 crc kubenswrapper[4893]: I0121 06:54:35.989662 4893 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 06:54:36 crc kubenswrapper[4893]: I0121 06:54:36.440495 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 21 06:54:36 crc kubenswrapper[4893]: I0121 06:54:36.440705 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:54:36 crc kubenswrapper[4893]: I0121 06:54:36.443640 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:36 crc kubenswrapper[4893]: I0121 06:54:36.443699 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:36 crc kubenswrapper[4893]: I0121 06:54:36.443710 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:36 crc kubenswrapper[4893]: I0121 06:54:36.454754 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 21 
06:54:36 crc kubenswrapper[4893]: I0121 06:54:36.522144 4893 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 21 06:54:36 crc kubenswrapper[4893]: [+]log ok Jan 21 06:54:36 crc kubenswrapper[4893]: [+]etcd ok Jan 21 06:54:36 crc kubenswrapper[4893]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Jan 21 06:54:36 crc kubenswrapper[4893]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Jan 21 06:54:36 crc kubenswrapper[4893]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 21 06:54:36 crc kubenswrapper[4893]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 21 06:54:36 crc kubenswrapper[4893]: [+]poststarthook/openshift.io-api-request-count-filter ok Jan 21 06:54:36 crc kubenswrapper[4893]: [+]poststarthook/openshift.io-startkubeinformers ok Jan 21 06:54:36 crc kubenswrapper[4893]: [+]poststarthook/generic-apiserver-start-informers ok Jan 21 06:54:36 crc kubenswrapper[4893]: [+]poststarthook/priority-and-fairness-config-consumer ok Jan 21 06:54:36 crc kubenswrapper[4893]: [+]poststarthook/priority-and-fairness-filter ok Jan 21 06:54:36 crc kubenswrapper[4893]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 21 06:54:36 crc kubenswrapper[4893]: [+]poststarthook/start-apiextensions-informers ok Jan 21 06:54:36 crc kubenswrapper[4893]: [+]poststarthook/start-apiextensions-controllers ok Jan 21 06:54:36 crc kubenswrapper[4893]: [+]poststarthook/crd-informer-synced ok Jan 21 06:54:36 crc kubenswrapper[4893]: [+]poststarthook/start-system-namespaces-controller ok Jan 21 06:54:36 crc kubenswrapper[4893]: [+]poststarthook/start-cluster-authentication-info-controller ok Jan 21 06:54:36 crc kubenswrapper[4893]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Jan 21 06:54:36 crc kubenswrapper[4893]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Jan 21 06:54:36 crc kubenswrapper[4893]: [+]poststarthook/start-legacy-token-tracking-controller ok Jan 21 06:54:36 crc kubenswrapper[4893]: [+]poststarthook/start-service-ip-repair-controllers ok Jan 21 06:54:36 crc kubenswrapper[4893]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Jan 21 06:54:36 crc kubenswrapper[4893]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Jan 21 06:54:36 crc kubenswrapper[4893]: [+]poststarthook/priority-and-fairness-config-producer ok Jan 21 06:54:36 crc kubenswrapper[4893]: [+]poststarthook/bootstrap-controller ok Jan 21 06:54:36 crc kubenswrapper[4893]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Jan 21 06:54:36 crc kubenswrapper[4893]: [+]poststarthook/start-kube-aggregator-informers ok Jan 21 06:54:36 crc kubenswrapper[4893]: [+]poststarthook/apiservice-status-local-available-controller ok Jan 21 06:54:36 crc kubenswrapper[4893]: [+]poststarthook/apiservice-status-remote-available-controller ok Jan 21 06:54:36 crc kubenswrapper[4893]: [+]poststarthook/apiservice-registration-controller ok Jan 21 06:54:36 crc kubenswrapper[4893]: [+]poststarthook/apiservice-wait-for-first-sync ok Jan 21 06:54:36 crc kubenswrapper[4893]: [+]poststarthook/apiservice-discovery-controller ok Jan 21 06:54:36 crc kubenswrapper[4893]: [+]poststarthook/kube-apiserver-autoregistration ok Jan 21 06:54:36 crc kubenswrapper[4893]: [+]autoregister-completion ok Jan 21 06:54:36 crc kubenswrapper[4893]: 
[+]poststarthook/apiservice-openapi-controller ok Jan 21 06:54:36 crc kubenswrapper[4893]: [+]poststarthook/apiservice-openapiv3-controller ok Jan 21 06:54:36 crc kubenswrapper[4893]: livez check failed Jan 21 06:54:36 crc kubenswrapper[4893]: I0121 06:54:36.522213 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 06:54:36 crc kubenswrapper[4893]: I0121 06:54:36.530189 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 08:01:43.758854445 +0000 UTC Jan 21 06:54:36 crc kubenswrapper[4893]: I0121 06:54:36.851448 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:54:36 crc kubenswrapper[4893]: I0121 06:54:36.852486 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:36 crc kubenswrapper[4893]: I0121 06:54:36.852851 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:36 crc kubenswrapper[4893]: I0121 06:54:36.852953 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:37 crc kubenswrapper[4893]: I0121 06:54:37.531059 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 18:39:46.012048293 +0000 UTC Jan 21 06:54:37 crc kubenswrapper[4893]: I0121 06:54:37.556339 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:54:37 crc kubenswrapper[4893]: I0121 06:54:37.556504 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:54:37 crc kubenswrapper[4893]: I0121 06:54:37.557908 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:37 crc kubenswrapper[4893]: I0121 06:54:37.557973 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:37 crc kubenswrapper[4893]: I0121 06:54:37.557993 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:37 crc kubenswrapper[4893]: I0121 06:54:37.687464 4893 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 21 06:54:37 crc kubenswrapper[4893]: I0121 06:54:37.824896 4893 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 21 06:54:38 crc kubenswrapper[4893]: I0121 06:54:38.022632 4893 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 21 06:54:38 crc kubenswrapper[4893]: I0121 06:54:38.513471 4893 apiserver.go:52] "Watching apiserver" Jan 21 06:54:38 crc kubenswrapper[4893]: I0121 06:54:38.518297 4893 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 21 06:54:38 crc kubenswrapper[4893]: I0121 06:54:38.518617 4893 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Jan 21 06:54:38 crc kubenswrapper[4893]: I0121 06:54:38.519252 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:54:38 crc kubenswrapper[4893]: I0121 06:54:38.519258 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 06:54:38 crc kubenswrapper[4893]: I0121 06:54:38.519323 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:54:38 crc kubenswrapper[4893]: E0121 06:54:38.519416 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:54:38 crc kubenswrapper[4893]: I0121 06:54:38.519434 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 06:54:38 crc kubenswrapper[4893]: I0121 06:54:38.519835 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 06:54:38 crc kubenswrapper[4893]: E0121 06:54:38.519899 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:54:38 crc kubenswrapper[4893]: I0121 06:54:38.520070 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:54:38 crc kubenswrapper[4893]: E0121 06:54:38.520181 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:54:38 crc kubenswrapper[4893]: I0121 06:54:38.522087 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 21 06:54:38 crc kubenswrapper[4893]: I0121 06:54:38.523082 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 21 06:54:38 crc kubenswrapper[4893]: I0121 06:54:38.523084 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 21 06:54:38 crc kubenswrapper[4893]: I0121 06:54:38.523104 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 21 06:54:38 crc kubenswrapper[4893]: I0121 06:54:38.523389 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 21 06:54:38 crc kubenswrapper[4893]: I0121 06:54:38.523497 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 21 06:54:38 crc kubenswrapper[4893]: I0121 06:54:38.523528 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 21 06:54:38 crc kubenswrapper[4893]: I0121 06:54:38.523508 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 21 06:54:38 crc kubenswrapper[4893]: I0121 06:54:38.524879 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 21 06:54:38 crc kubenswrapper[4893]: I0121 06:54:38.531235 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 08:47:35.580423855 +0000 UTC Jan 21 06:54:38 crc kubenswrapper[4893]: I0121 06:54:38.558127 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:54:38 crc kubenswrapper[4893]: I0121 06:54:38.569856 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:54:38 crc kubenswrapper[4893]: I0121 06:54:38.584085 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:54:38 crc kubenswrapper[4893]: I0121 06:54:38.596395 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:54:38 crc kubenswrapper[4893]: I0121 06:54:38.612169 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:54:38 crc kubenswrapper[4893]: I0121 06:54:38.615819 4893 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 21 06:54:38 crc kubenswrapper[4893]: I0121 06:54:38.624827 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:54:38 crc kubenswrapper[4893]: I0121 06:54:38.635473 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:54:39 crc kubenswrapper[4893]: I0121 06:54:39.531644 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 06:07:58.128455342 +0000 UTC Jan 21 06:54:39 crc kubenswrapper[4893]: I0121 06:54:39.593574 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:54:39 crc kubenswrapper[4893]: I0121 06:54:39.607251 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:54:39 crc kubenswrapper[4893]: I0121 06:54:39.617530 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:54:39 crc kubenswrapper[4893]: I0121 06:54:39.628075 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:54:39 crc kubenswrapper[4893]: I0121 06:54:39.641838 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:54:39 crc kubenswrapper[4893]: I0121 06:54:39.656544 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:54:40 crc kubenswrapper[4893]: E0121 06:54:40.354140 4893 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.356932 4893 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.357510 4893 trace.go:236] Trace[1524448022]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 06:54:27.316) (total time: 13041ms): Jan 21 06:54:40 crc kubenswrapper[4893]: Trace[1524448022]: ---"Objects listed" error: 13041ms (06:54:40.357) Jan 21 06:54:40 crc kubenswrapper[4893]: Trace[1524448022]: [13.041230823s] [13.041230823s] END Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.357530 4893 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.364246 4893 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.457519 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.457588 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.457621 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.457649 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod 
\"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.457688 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.457723 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.457747 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.457767 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.457797 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.457824 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.457844 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.458181 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.458181 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.458441 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.458582 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.458582 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.458615 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.458710 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.458743 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.458798 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.458824 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.458844 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") 
" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.458867 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.458894 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.458947 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.458994 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.459015 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.459036 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.459061 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.459652 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.460171 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.464786 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.464997 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.465779 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.465891 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.468156 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.545763 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.546099 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.546237 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.546268 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.546617 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.546730 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 15:24:44.95618948 +0000 UTC Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.546751 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.467951 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.548302 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.549463 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.549018 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.550450 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.550510 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.550537 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.550558 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.550576 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.550597 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.550617 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.550638 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.550658 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.550782 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod 
\"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.550811 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.550832 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.552792 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.553140 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.553738 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.554074 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.554244 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.554358 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.554655 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.554774 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.554827 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.554853 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.555080 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.555185 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.555491 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.556804 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). 
InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.559935 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.559982 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.560479 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.560691 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.560858 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.560895 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.560925 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.561082 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.561087 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.561156 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.561188 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.561211 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.561240 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.562061 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.562069 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.561306 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.562184 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.562206 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.562226 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.562247 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.562264 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.562328 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.562347 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.562363 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.562411 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: 
\"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.562428 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.562443 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.562459 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.562474 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.562490 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.562506 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.562525 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.562540 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.562555 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.562573 4893 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.562589 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.562608 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.562655 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.562696 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.562712 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.562727 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.562743 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.562759 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.562777 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 
06:54:40.562792 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.562809 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.562827 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.562901 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.562918 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.562934 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.562949 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.562966 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.562983 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.563003 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 06:54:40 
crc kubenswrapper[4893]: I0121 06:54:40.563018 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.563033 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.563048 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.563065 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.563162 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.563181 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.563197 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.563215 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.563231 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.563247 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.563261 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.563275 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.563290 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.563325 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.563363 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.563380 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.563397 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.563411 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.563451 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.563467 4893 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.563484 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.563501 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.563517 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.563543 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.563558 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.563575 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.563591 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.563606 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.563622 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.563638 4893 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.563662 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.563712 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.563727 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.563742 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.563758 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.563773 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.563821 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.563838 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.563854 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.563873 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.563890 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.563907 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.563923 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.563940 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.563955 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.563971 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.564131 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.564150 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.564170 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: 
\"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.564187 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.564203 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.564218 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.564234 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.564250 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.564265 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.564282 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.562221 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.562228 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.562111 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.562525 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.562597 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.562955 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.563036 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.563424 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.563463 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.563776 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.563857 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.564208 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.564245 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.564411 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.564760 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.564956 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.565015 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.565652 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). 
InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.566163 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.566596 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.566642 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.566920 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.567081 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.567308 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.567351 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.567429 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.567463 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.567631 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.567835 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.567896 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.568499 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.568811 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.569993 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.570003 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.570320 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.570578 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.564298 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.570833 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.570859 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.570882 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.570901 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.570916 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.570933 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.570950 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.570980 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.570999 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod 
\"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.571031 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.571048 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.571065 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.571082 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.571097 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.571119 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.571137 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.571175 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.571193 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.571165 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: 
"image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.571216 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.571273 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.571303 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.571337 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.571381 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.571425 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.571456 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.571479 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.571504 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: 
\"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.571527 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.571552 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.571585 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.571610 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.571637 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.571661 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.571724 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.571746 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.571770 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.571881 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: 
\"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.571912 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.571937 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.571963 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.571984 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.572020 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.572042 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.572065 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.572091 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.572114 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.572142 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod 
\"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.572163 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.572184 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.572208 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.572279 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.572304 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.572423 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.573330 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.573373 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.573421 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 
06:54:40.573449 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.573494 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.573518 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.573540 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.573604 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.573783 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.573818 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.573909 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.573974 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.573997 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.574130 4893 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.574151 4893 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.575379 4893 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.575398 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.575410 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.575423 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.575438 4893 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.575450 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.575480 4893 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.575494 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.575506 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: 
\"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.575518 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.575530 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.575560 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.575606 4893 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.575635 4893 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.575647 4893 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.575663 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.575702 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.575714 4893 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.575726 4893 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.575739 4893 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.575751 4893 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.575762 4893 reconciler_common.go:293] "Volume detached for volume 
\"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.575778 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.575822 4893 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.575833 4893 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.575845 4893 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.575858 4893 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.576339 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.576354 4893 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.576366 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.576379 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.576392 4893 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.576405 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.576421 4893 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.576434 4893 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.576446 4893 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.576457 4893 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.576468 4893 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.576480 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.576490 4893 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.576503 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.576513 4893 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.576524 4893 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.576536 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.576549 4893 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.576560 4893 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.576578 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.576590 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: 
\"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.576601 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.576612 4893 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.576623 4893 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.576634 4893 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.576646 4893 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.576659 4893 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.576685 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.576697 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.576708 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.576719 4893 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.576730 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.576741 4893 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.576753 4893 reconciler_common.go:293] "Volume detached for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.576763 4893 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.576776 4893 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.576787 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.576798 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.576809 4893 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.576821 4893 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.576832 4893 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.576845 4893 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.576856 4893 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.576867 4893 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.576878 4893 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.576889 4893 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.576900 4893 reconciler_common.go:293] "Volume detached 
for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.582520 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:54:40 crc kubenswrapper[4893]: E0121 06:54:40.582644 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.583011 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:54:40 crc kubenswrapper[4893]: E0121 06:54:40.583113 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.583166 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:54:40 crc kubenswrapper[4893]: E0121 06:54:40.583217 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.584685 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.585536 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.585977 4893 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.571499 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.571584 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.601144 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.571609 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.571877 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.571913 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.571966 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.572073 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.572118 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.572483 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.572634 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.572844 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.573011 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.573091 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.572980 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.573342 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.573423 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.573443 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.573838 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.573843 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.573998 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.574120 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). 
InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.574133 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.574358 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.574424 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.574631 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.574649 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.574768 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.574911 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.574922 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.575020 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.575168 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.573796 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.575283 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.575584 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.575624 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.575776 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.575846 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.575920 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.576945 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.577152 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.577615 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.577816 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.579502 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.579591 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.579730 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.579767 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.580295 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.581277 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.581413 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.581933 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.582512 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). 
InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.582817 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.582899 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.582935 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.583002 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.583157 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.583358 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.583432 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.583438 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.583504 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.583637 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.583654 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.583912 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.584140 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.584203 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.584494 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: E0121 06:54:40.584844 4893 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 06:54:40 crc kubenswrapper[4893]: E0121 06:54:40.585478 4893 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.593530 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.593927 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.593959 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.593997 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.594157 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.594179 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.594258 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.594426 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.594445 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.594563 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.594536 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.594646 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.594870 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.595023 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.595055 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.595075 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.595079 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.595433 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.595480 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.599127 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.599166 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.599280 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.599303 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.599449 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.599519 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.599684 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.599703 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.599851 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.599879 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.600044 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.600058 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.600089 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.600238 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.600261 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.600454 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.600915 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.600917 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.601195 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.601772 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.601000 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 06:54:40 crc kubenswrapper[4893]: E0121 06:54:40.602874 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:54:41.102840562 +0000 UTC m=+22.333186464 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.603080 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.603323 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: E0121 06:54:40.603548 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 06:54:40 crc kubenswrapper[4893]: E0121 06:54:40.603571 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 06:54:40 crc kubenswrapper[4893]: E0121 06:54:40.603585 4893 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:54:40 crc kubenswrapper[4893]: E0121 06:54:40.603644 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 06:54:41.103627604 +0000 UTC m=+22.333973506 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 06:54:40 crc kubenswrapper[4893]: E0121 06:54:40.603725 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 06:54:41.103715727 +0000 UTC m=+22.334061789 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:54:40 crc kubenswrapper[4893]: E0121 06:54:40.603835 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 06:54:41.103805109 +0000 UTC m=+22.334151071 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.601430 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.601542 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.606293 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.606433 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.607746 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.620808 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.621690 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: E0121 06:54:40.631258 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 06:54:40 crc kubenswrapper[4893]: E0121 06:54:40.631295 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 06:54:40 crc kubenswrapper[4893]: E0121 06:54:40.631312 4893 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:54:40 crc kubenswrapper[4893]: E0121 06:54:40.631372 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 06:54:41.13135338 +0000 UTC m=+22.361699282 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.632830 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.640909 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.641180 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.642509 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.677870 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.677939 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.677985 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.677995 4893 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678004 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678013 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678021 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678030 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678038 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678047 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678055 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678063 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: 
\"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678071 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678079 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678087 4893 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678096 4893 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678105 4893 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678113 4893 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678120 4893 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678130 4893 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678141 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678150 4893 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678161 4893 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678171 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678181 
4893 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678190 4893 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678201 4893 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678210 4893 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678220 4893 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678231 4893 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678241 4893 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678251 4893 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678261 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678271 4893 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678279 4893 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678287 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678295 4893 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678303 4893 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678312 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678320 4893 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678328 4893 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678337 4893 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678346 4893 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678354 4893 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678362 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678380 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678389 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678397 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678405 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678414 4893 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc 
kubenswrapper[4893]: I0121 06:54:40.678421 4893 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678430 4893 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678439 4893 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678447 4893 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678455 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678464 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678472 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678480 4893 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678487 4893 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678495 4893 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678503 4893 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678511 4893 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678518 4893 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678527 
4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678535 4893 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678543 4893 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678551 4893 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678559 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678568 4893 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678575 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678583 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678591 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678599 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678607 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678615 4893 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678623 4893 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678632 4893 reconciler_common.go:293] "Volume 
detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678641 4893 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678651 4893 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678662 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678690 4893 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678702 4893 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678713 4893 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678725 4893 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678735 4893 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678909 4893 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678922 4893 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678934 4893 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678944 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678954 4893 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" 
(UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678966 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678977 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678987 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.678997 4893 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.679006 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.679015 4893 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.679070 4893 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.679083 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.679094 4893 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.679104 4893 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.679112 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.679121 4893 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.679128 4893 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.679136 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.679144 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.679153 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.679161 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.679168 4893 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.679176 4893 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.679184 4893 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.679192 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.679201 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.679210 4893 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.679218 4893 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.679226 4893 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.679234 4893 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.679426 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.679476 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.680385 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.680583 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.683984 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.691286 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.779993 4893 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.780026 4893 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.780038 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.780050 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.863011 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.863548 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.865609 4893 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711" exitCode=255 Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.865723 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711"} Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.865832 4893 scope.go:117] "RemoveContainer" containerID="bd2fb23bc37ebbcfd3d2d178127a7b67f41facf5aea30579cb77d2eabe943ab2" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.879189 4893 scope.go:117] "RemoveContainer" containerID="e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.879248 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.879163 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:54:40 crc kubenswrapper[4893]: E0121 06:54:40.879392 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.886473 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"4e705e9b341a3c711cf78ffd1fde692a9517b06fcdcfc2b96543d826c72c5484"} Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.886549 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"4e03a1acf6124a2f65f9c8e8b56f939e46abf12529ee8d4bc739110a02dd0543"} Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.891313 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.903570 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.913933 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.926204 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.934521 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.935656 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:54:40 crc kubenswrapper[4893]: I0121 06:54:40.955402 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 06:54:40 crc kubenswrapper[4893]: W0121 06:54:40.958728 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-59125df3335f220201e2b885bda069d917632709753a1bb33dee0de75ff0d0ef WatchSource:0}: Error finding container 59125df3335f220201e2b885bda069d917632709753a1bb33dee0de75ff0d0ef: Status 404 returned error can't find the container with id 59125df3335f220201e2b885bda069d917632709753a1bb33dee0de75ff0d0ef Jan 21 06:54:40 crc kubenswrapper[4893]: W0121 06:54:40.978104 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-67496e82f629c7893c4e4b3376f1677b0018744bc044be4fb964ba2fd3b3fed5 WatchSource:0}: Error finding container 67496e82f629c7893c4e4b3376f1677b0018744bc044be4fb964ba2fd3b3fed5: Status 404 returned error can't find the container with id 67496e82f629c7893c4e4b3376f1677b0018744bc044be4fb964ba2fd3b3fed5 Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.098197 4893 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.098297 4893 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.135300 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.135722 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.135815 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.135930 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.136025 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:41Z","lastTransitionTime":"2026-01-21T06:54:41Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.183996 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.184106 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.184164 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:54:41 crc kubenswrapper[4893]: E0121 06:54:41.184209 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:54:42.184183209 +0000 UTC m=+23.414529111 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.184256 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:54:41 crc kubenswrapper[4893]: E0121 06:54:41.184300 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 06:54:41 crc kubenswrapper[4893]: E0121 06:54:41.184324 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 06:54:41 crc kubenswrapper[4893]: E0121 06:54:41.184337 4893 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:54:41 crc kubenswrapper[4893]: E0121 06:54:41.184384 4893 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 06:54:41 crc kubenswrapper[4893]: E0121 06:54:41.184392 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 06:54:42.184374895 +0000 UTC m=+23.414720867 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.184303 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:54:41 crc kubenswrapper[4893]: E0121 06:54:41.184420 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-21 06:54:42.184412886 +0000 UTC m=+23.414758778 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 06:54:41 crc kubenswrapper[4893]: E0121 06:54:41.184456 4893 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 06:54:41 crc kubenswrapper[4893]: E0121 06:54:41.184504 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 06:54:42.184494078 +0000 UTC m=+23.414840040 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 06:54:41 crc kubenswrapper[4893]: E0121 06:54:41.184563 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 06:54:41 crc kubenswrapper[4893]: E0121 06:54:41.184605 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 06:54:41 crc kubenswrapper[4893]: E0121 06:54:41.184623 4893 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:54:41 crc kubenswrapper[4893]: E0121 06:54:41.184014 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"15608b71-024b-43f0-a54d-3ca7890a281b\\\",\\\"systemUUID\\\":\\\"d
58a57b5-ddc5-4868-b863-d910bc33033d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:54:41 crc kubenswrapper[4893]: E0121 06:54:41.184746 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 06:54:42.184719364 +0000 UTC m=+23.415065266 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.190904 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.190947 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.190958 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.190977 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.190988 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:41Z","lastTransitionTime":"2026-01-21T06:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:41 crc kubenswrapper[4893]: E0121 06:54:41.248949 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"15608b71-024b-43f0-a54d-3ca7890a281b\\\",\\\"systemUUID\\\":\\\"d58a57b5-ddc5-4868-b863-d910bc33033d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.264180 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.264223 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.264235 4893 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.264255 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.264266 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:41Z","lastTransitionTime":"2026-01-21T06:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:41 crc kubenswrapper[4893]: E0121 06:54:41.279648 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"15608b71-024b-43f0-a54d-3ca7890a281b\\\",\\\"systemUUID\\\":\\\"d58a57b5-ddc5-4868-b863-d910bc33033d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.284830 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.284883 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.284894 4893 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.284912 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.284924 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:41Z","lastTransitionTime":"2026-01-21T06:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.309270 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.309304 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.309313 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.309328 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.309337 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:41Z","lastTransitionTime":"2026-01-21T06:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:54:41 crc kubenswrapper[4893]: E0121 06:54:41.321572 4893 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.322987 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.323013 4893
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.323021 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.323036 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.323044 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:41Z","lastTransitionTime":"2026-01-21T06:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.454377 4893 csr.go:261] certificate signing request csr-hw887 is approved, waiting to be issued Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.457726 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.457781 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.457790 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.457805 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.457816 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:41Z","lastTransitionTime":"2026-01-21T06:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.478243 4893 csr.go:257] certificate signing request csr-hw887 is issued Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.526653 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.565960 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.576939 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 00:26:40.133419461 +0000 UTC Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.576881 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.579947 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.579965 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.579985 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.580006 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:41Z","lastTransitionTime":"2026-01-21T06:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.584348 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.585230 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.586150 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.587618 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.588400 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.589537 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.590363 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.591286 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.592579 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.593221 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.593719 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.594444 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.595329 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.596535 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.597191 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 
06:54:41.598454 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.599111 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.599773 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.600622 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.601397 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.602134 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.603282 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.604010 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.605442 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.607148 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.607886 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.609467 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.610753 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.611321 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.612569 4893 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.613506 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.613988 4893 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.614100 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.616402 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.617049 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.617737 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.620283 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.621259 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.621958 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.623246 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.624160 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.626302 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.627248 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.628836 4893 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:41Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.632169 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.633205 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.634091 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.634687 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.635899 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.636873 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.637773 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.638454 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.639028 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.640019 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.640727 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.641822 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.642357 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.650399 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.666759 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:41Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.683076 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.683124 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.683134 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.683150 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.683160 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:41Z","lastTransitionTime":"2026-01-21T06:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.684354 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2101f59b-4610-4451-83eb-86fe80385cf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a82b561fe0d124a785d8417b0f810757464a5ccc70c032a46eb0a4ad932939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2a508699e746bc42337b9e10d1cb94b36eb53292a5ca91de2e8f03eb8f671c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf06f9b5e844685f04ee12cbf239e285f1597f6a3c6444a4160596392905c4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2fb23bc37ebbcfd3d2d178127a7b67f41facf5aea30579cb77d2eabe943ab2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:54:33Z\\\",\\\"message\\\":\\\"W0121 06:54:22.202797 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0121 06:54:22.203110 1 crypto.go:601] Generating new CA for check-endpoints-signer@1768978462 cert, and key in /tmp/serving-cert-2562407844/serving-signer.crt, /tmp/serving-cert-2562407844/serving-signer.key\\\\nI0121 06:54:22.743072 1 observer_polling.go:159] Starting file observer\\\\nW0121 06:54:22.751820 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0121 06:54:22.752085 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:54:22.755229 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2562407844/tls.crt::/tmp/serving-cert-2562407844/tls.key\\\\\\\"\\\\nF0121 06:54:33.498565 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:54:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 06:54:40.367563 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 06:54:40.368234 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:54:40.369436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4080492758/tls.crt::/tmp/serving-cert-4080492758/tls.key\\\\\\\"\\\\nI0121 06:54:40.606405 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:54:40.609631 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:54:40.609649 1 
maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:54:40.609684 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:54:40.609691 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:54:40.617391 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 06:54:40.617410 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617413 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617418 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:54:40.617421 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:54:40.617423 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:54:40.617426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 06:54:40.617614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 06:54:40.618646 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf70c5621061fc94a32901eb6f15a0d15b2ceba333d27cf88624bf9aa4ebe82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:41Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.707732 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:41Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.713013 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.724460 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:41Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.761735 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:41Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.794514 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.794567 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.794578 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.794603 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.794623 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:41Z","lastTransitionTime":"2026-01-21T06:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.838708 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:41Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.891515 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"b9f8eaf9a35d64680bb488050b8821c821635ec7bc1f53bdcd5bb3f5f4bfead3"} Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.891623 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"59125df3335f220201e2b885bda069d917632709753a1bb33dee0de75ff0d0ef"} Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.893898 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.895208 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:41Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.896321 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.896353 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.896363 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.896378 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.896391 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:41Z","lastTransitionTime":"2026-01-21T06:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
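
The kubelet keeps marking the node NotReady above because no CNI configuration file exists in /etc/kubernetes/cni/net.d/. A minimal Go sketch of that directory check follows, under the assumption that a CNI config is any .conf, .conflist, or .json file in the directory named by the log message; the accepted extensions are an assumption here, not something the log states.

package main

// Sketch: report whether /etc/kubernetes/cni/net.d (the path from the
// "NetworkPluginNotReady" message above) contains any CNI config file.
// Treating .conf/.conflist/.json as config files is an assumption.

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		log.Fatal(err) // directory missing or unreadable
	}
	found := false
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("CNI config present:", filepath.Join(dir, e.Name()))
			found = true
		}
	}
	if !found {
		fmt.Println("no CNI configuration file in", dir)
	}
}

While that directory stays empty, the kubelet keeps reporting NetworkReady=false; judging from the ovnkube-identity mounts in the entries above, the network provider expected to write the config here is the OVN-Kubernetes stack that the network-operator pod is still trying to start.
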
Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.897498 4893 scope.go:117] "RemoveContainer" containerID="e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711" Jan 21 06:54:41 crc kubenswrapper[4893]: E0121 06:54:41.897712 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.898952 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"5ee491ea29d016cb1b74fc66b386aa8056d1d8b3c7ad207cf329749db2b4d638"} Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.900582 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"67496e82f629c7893c4e4b3376f1677b0018744bc044be4fb964ba2fd3b3fed5"} Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.903786 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.964315 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:41Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:41 crc kubenswrapper[4893]: E0121 06:54:41.970258 4893 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 06:54:41 crc kubenswrapper[4893]: I0121 06:54:41.984165 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2101f59b-4610-4451-83eb-86fe80385cf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a82b561fe0d124a785d8417b0f810757464a5ccc70c032a46eb0a4ad932939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2a508699e746bc42337b9e10d1cb94b36eb53292a5ca91de2e8f03eb8f671c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf06f9b5e844685f04ee12cbf239e285f1597f6a3c6444a4160596392905c4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2fb23bc37ebbcfd3d2d178127a7b67f41facf5aea30579cb77d2eabe943ab2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:54:33Z\\\",\\\"message\\\":\\\"W0121 06:54:22.202797 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0121 
06:54:22.203110 1 crypto.go:601] Generating new CA for check-endpoints-signer@1768978462 cert, and key in /tmp/serving-cert-2562407844/serving-signer.crt, /tmp/serving-cert-2562407844/serving-signer.key\\\\nI0121 06:54:22.743072 1 observer_polling.go:159] Starting file observer\\\\nW0121 06:54:22.751820 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0121 06:54:22.752085 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:54:22.755229 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2562407844/tls.crt::/tmp/serving-cert-2562407844/tls.key\\\\\\\"\\\\nF0121 06:54:33.498565 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:54:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 06:54:40.367563 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 06:54:40.368234 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:54:40.369436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4080492758/tls.crt::/tmp/serving-cert-4080492758/tls.key\\\\\\\"\\\\nI0121 06:54:40.606405 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:54:40.609631 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:54:40.609649 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:54:40.609684 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:54:40.609691 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:54:40.617391 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 06:54:40.617410 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617413 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617418 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:54:40.617421 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:54:40.617423 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:54:40.617426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 
06:54:40.617614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 06:54:40.618646 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf70c5621061fc94a32901eb6f15a0d15b2ceba333d27cf88624bf9aa4ebe82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:41Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.000889 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077e47b3-6224-4749-9710-d2b308b43208\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa06c3d835def34e52c4a9b4b87d9dc8998cdefbb5eaf7c8046bf263857ef8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e698ff120a5858fa787a65c1bdaa3966dcb8974df9cbca40470f6ec58bca5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://553f6c2b8ff41184065bcf707d657326891027d0c5b8390ce50f53cdfa654d2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30521319002f52220ec6c1e4c92862f5a81e1dcace01f4a4474e3a2441b955c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:41Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.003016 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.003174 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.003255 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.003344 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.003465 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:42Z","lastTransitionTime":"2026-01-21T06:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
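
Every "Failed to update status for pod" entry above is rejected for the same reason: the API server's call to the pod.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743/pod fails TLS verification, because the webhook certificate's NotAfter (2025-08-24T17:21:41Z) is months before the node's clock (2026-01-21). The kubelet itself is not at fault; it only relays the "Internal error occurred: failed calling webhook" response. A minimal Go sketch of the same validity-window check that the x509 verifier performs; the certificate path below is hypothetical, since the journal does not show where the webhook's serving cert is stored on disk.

package main

// Sketch: parse a PEM certificate and compare its validity window to the
// current time, mirroring the "x509: certificate has expired or is not
// yet valid" failure in the entries above. The path is a placeholder.

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/path/to/webhook-serving-cert.pem") // hypothetical path
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil || block.Type != "CERTIFICATE" {
		log.Fatal("no CERTIFICATE block in PEM input")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	now := time.Now()
	fmt.Printf("NotBefore=%s NotAfter=%s now=%s\n",
		cert.NotBefore.Format(time.RFC3339),
		cert.NotAfter.Format(time.RFC3339),
		now.Format(time.RFC3339))
	switch {
	case now.After(cert.NotAfter):
		fmt.Println("certificate has expired") // the case hit in this log
	case now.Before(cert.NotBefore):
		fmt.Println("certificate is not yet valid")
	default:
		fmt.Println("certificate is currently valid")
	}
}

Note the timestamps: the node clock reads 2026-01-21, well past the cert's 2025-08-24 expiry, which is consistent with a CRC VM resumed long after its certificates were minted rather than with a clock fault.
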
Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.014719 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.031330 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.047031 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.067375 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ee491ea29d016cb1b74fc66b386aa8056d1d8b3c7ad207cf329749db2b4d638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e705e9b341a3c711cf78ffd1fde692a9517b06fcdcfc2b96543d826c72c5484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.083016 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.101398 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.106885 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.106918 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.106931 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.106946 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.106959 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:42Z","lastTransitionTime":"2026-01-21T06:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.122021 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.135824 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2101f59b-4610-4451-83eb-86fe80385cf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a82b561fe0d124a785d8417b0f810757464a5ccc70c032a46eb0a4ad932939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2a508699e746bc42337b9e10d1cb94b36eb53292a5ca91de2e8f03eb8f671c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf06f9b5e844685f04ee12cbf239e285f1597f6a3c6444a4160596392905c4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:54:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 06:54:40.367563 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 06:54:40.368234 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:54:40.369436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4080492758/tls.crt::/tmp/serving-cert-4080492758/tls.key\\\\\\\"\\\\nI0121 06:54:40.606405 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:54:40.609631 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:54:40.609649 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:54:40.609684 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:54:40.609691 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:54:40.617391 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 06:54:40.617410 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617413 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617418 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:54:40.617421 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:54:40.617423 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:54:40.617426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 06:54:40.617614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 06:54:40.618646 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf70c5621061fc94a32901eb6f15a0d15b2ceba333d27cf88624bf9aa4ebe82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.164510 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077e47b3-6224-4749-9710-d2b308b43208\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa06c3d835def34e52c4a9b4b87d9dc8998cdefbb5eaf7c8046bf263857ef8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e698ff120a5858fa787a65c1bdaa3966dcb8974df9cbca40470f6ec58bca5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://553f6c2b8ff41184065bcf707d657326891027d0c5b8390ce50f53cdfa654d2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30521319002f52220ec6c1e4c92862f5a81e1dcace01f4a4474e3a2441b955c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.191594 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f8eaf9a35d64680bb488050b8821c821635ec7bc1f53bdcd5bb3f5f4bfead3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.191693 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:54:42 crc kubenswrapper[4893]: E0121 06:54:42.191760 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:54:44.191741927 +0000 UTC m=+25.422087829 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.191792 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.191822 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.191843 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.191870 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:54:42 crc kubenswrapper[4893]: E0121 06:54:42.191910 4893 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 06:54:42 crc kubenswrapper[4893]: E0121 06:54:42.191962 4893 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 06:54:44.191950202 +0000 UTC m=+25.422296104 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 06:54:42 crc kubenswrapper[4893]: E0121 06:54:42.191994 4893 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 06:54:42 crc kubenswrapper[4893]: E0121 06:54:42.192030 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 06:54:42 crc kubenswrapper[4893]: E0121 06:54:42.192064 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 06:54:44.192053485 +0000 UTC m=+25.422399387 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 06:54:42 crc kubenswrapper[4893]: E0121 06:54:42.192067 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 06:54:42 crc kubenswrapper[4893]: E0121 06:54:42.192086 4893 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:54:42 crc kubenswrapper[4893]: E0121 06:54:42.192142 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 06:54:42 crc kubenswrapper[4893]: E0121 06:54:42.192202 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 06:54:42 crc kubenswrapper[4893]: E0121 06:54:42.192219 4893 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:54:42 crc kubenswrapper[4893]: E0121 06:54:42.192152 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" 
failed. No retries permitted until 2026-01-21 06:54:44.192129338 +0000 UTC m=+25.422475300 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:54:42 crc kubenswrapper[4893]: E0121 06:54:42.192347 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 06:54:44.192302982 +0000 UTC m=+25.422648964 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.209839 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.209885 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.209895 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.209911 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.209923 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:42Z","lastTransitionTime":"2026-01-21T06:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.312928 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.312965 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.312975 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.312989 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.313000 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:42Z","lastTransitionTime":"2026-01-21T06:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.416062 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.416095 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.416105 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.416121 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.416131 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:42Z","lastTransitionTime":"2026-01-21T06:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.455391 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-h28gn"] Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.461377 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-h28gn" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.463165 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-42mq5"] Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.463709 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-m8k4g"] Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.463874 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-hg78p"] Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.464564 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-42mq5" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.464612 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-m8k4g" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.464570 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.465997 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.466228 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.466341 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.466472 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.466594 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.469390 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.469547 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.475549 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.475856 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.475921 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.476211 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.476261 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.476412 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.476444 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.476596 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.481789 4893 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-21 06:49:41 +0000 UTC, rotation deadline is 2026-11-24 02:44:51.773305242 +0000 UTC Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.481839 4893 certificate_manager.go:356] 
kubernetes.io/kube-apiserver-client-kubelet: Waiting 7363h50m9.291469313s for next certificate rotation Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.496302 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ecb64775-90e7-43a2-a5a8-4d73e348dcc4-multus-socket-dir-parent\") pod \"multus-m8k4g\" (UID: \"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\") " pod="openshift-multus/multus-m8k4g" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.496334 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ecb64775-90e7-43a2-a5a8-4d73e348dcc4-host-var-lib-cni-multus\") pod \"multus-m8k4g\" (UID: \"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\") " pod="openshift-multus/multus-m8k4g" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.496355 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ecb64775-90e7-43a2-a5a8-4d73e348dcc4-multus-daemon-config\") pod \"multus-m8k4g\" (UID: \"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\") " pod="openshift-multus/multus-m8k4g" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.496388 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/708c6ae7-fdf7-44d1-ae88-f6abbb247f93-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-h28gn\" (UID: \"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\") " pod="openshift-multus/multus-additional-cni-plugins-h28gn" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.496407 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grm4n\" (UniqueName: \"kubernetes.io/projected/5cc8e905-b368-49e8-adfa-31890665e5ae-kube-api-access-grm4n\") pod \"node-resolver-42mq5\" (UID: \"5cc8e905-b368-49e8-adfa-31890665e5ae\") " pod="openshift-dns/node-resolver-42mq5" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.496435 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ecb64775-90e7-43a2-a5a8-4d73e348dcc4-cnibin\") pod \"multus-m8k4g\" (UID: \"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\") " pod="openshift-multus/multus-m8k4g" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.496451 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ecb64775-90e7-43a2-a5a8-4d73e348dcc4-os-release\") pod \"multus-m8k4g\" (UID: \"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\") " pod="openshift-multus/multus-m8k4g" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.496473 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ecb64775-90e7-43a2-a5a8-4d73e348dcc4-host-run-netns\") pod \"multus-m8k4g\" (UID: \"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\") " pod="openshift-multus/multus-m8k4g" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.496491 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/5cc8e905-b368-49e8-adfa-31890665e5ae-hosts-file\") pod 
\"node-resolver-42mq5\" (UID: \"5cc8e905-b368-49e8-adfa-31890665e5ae\") " pod="openshift-dns/node-resolver-42mq5" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.496505 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ecb64775-90e7-43a2-a5a8-4d73e348dcc4-cni-binary-copy\") pod \"multus-m8k4g\" (UID: \"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\") " pod="openshift-multus/multus-m8k4g" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.496519 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/708c6ae7-fdf7-44d1-ae88-f6abbb247f93-os-release\") pod \"multus-additional-cni-plugins-h28gn\" (UID: \"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\") " pod="openshift-multus/multus-additional-cni-plugins-h28gn" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.496539 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ecb64775-90e7-43a2-a5a8-4d73e348dcc4-etc-kubernetes\") pod \"multus-m8k4g\" (UID: \"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\") " pod="openshift-multus/multus-m8k4g" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.496571 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ecb64775-90e7-43a2-a5a8-4d73e348dcc4-host-var-lib-kubelet\") pod \"multus-m8k4g\" (UID: \"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\") " pod="openshift-multus/multus-m8k4g" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.496630 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/708c6ae7-fdf7-44d1-ae88-f6abbb247f93-cni-binary-copy\") pod \"multus-additional-cni-plugins-h28gn\" (UID: \"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\") " pod="openshift-multus/multus-additional-cni-plugins-h28gn" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.496731 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ecb64775-90e7-43a2-a5a8-4d73e348dcc4-host-run-k8s-cni-cncf-io\") pod \"multus-m8k4g\" (UID: \"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\") " pod="openshift-multus/multus-m8k4g" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.496777 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/708c6ae7-fdf7-44d1-ae88-f6abbb247f93-system-cni-dir\") pod \"multus-additional-cni-plugins-h28gn\" (UID: \"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\") " pod="openshift-multus/multus-additional-cni-plugins-h28gn" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.496812 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ecb64775-90e7-43a2-a5a8-4d73e348dcc4-host-var-lib-cni-bin\") pod \"multus-m8k4g\" (UID: \"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\") " pod="openshift-multus/multus-m8k4g" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.496866 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: 
\"kubernetes.io/host-path/ecb64775-90e7-43a2-a5a8-4d73e348dcc4-hostroot\") pod \"multus-m8k4g\" (UID: \"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\") " pod="openshift-multus/multus-m8k4g" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.496881 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a-rootfs\") pod \"machine-config-daemon-hg78p\" (UID: \"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a\") " pod="openshift-machine-config-operator/machine-config-daemon-hg78p" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.496899 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwcm7\" (UniqueName: \"kubernetes.io/projected/ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a-kube-api-access-jwcm7\") pod \"machine-config-daemon-hg78p\" (UID: \"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a\") " pod="openshift-machine-config-operator/machine-config-daemon-hg78p" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.496915 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ecb64775-90e7-43a2-a5a8-4d73e348dcc4-system-cni-dir\") pod \"multus-m8k4g\" (UID: \"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\") " pod="openshift-multus/multus-m8k4g" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.496931 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2qn9\" (UniqueName: \"kubernetes.io/projected/ecb64775-90e7-43a2-a5a8-4d73e348dcc4-kube-api-access-n2qn9\") pod \"multus-m8k4g\" (UID: \"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\") " pod="openshift-multus/multus-m8k4g" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.496948 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a-proxy-tls\") pod \"machine-config-daemon-hg78p\" (UID: \"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a\") " pod="openshift-machine-config-operator/machine-config-daemon-hg78p" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.496967 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ecb64775-90e7-43a2-a5a8-4d73e348dcc4-host-run-multus-certs\") pod \"multus-m8k4g\" (UID: \"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\") " pod="openshift-multus/multus-m8k4g" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.496984 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a-mcd-auth-proxy-config\") pod \"machine-config-daemon-hg78p\" (UID: \"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a\") " pod="openshift-machine-config-operator/machine-config-daemon-hg78p" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.496999 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cshbf\" (UniqueName: \"kubernetes.io/projected/708c6ae7-fdf7-44d1-ae88-f6abbb247f93-kube-api-access-cshbf\") pod \"multus-additional-cni-plugins-h28gn\" (UID: \"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\") " pod="openshift-multus/multus-additional-cni-plugins-h28gn" Jan 21 06:54:42 crc 
kubenswrapper[4893]: I0121 06:54:42.497027 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ecb64775-90e7-43a2-a5a8-4d73e348dcc4-multus-cni-dir\") pod \"multus-m8k4g\" (UID: \"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\") " pod="openshift-multus/multus-m8k4g" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.497046 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ecb64775-90e7-43a2-a5a8-4d73e348dcc4-multus-conf-dir\") pod \"multus-m8k4g\" (UID: \"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\") " pod="openshift-multus/multus-m8k4g" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.497060 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/708c6ae7-fdf7-44d1-ae88-f6abbb247f93-cnibin\") pod \"multus-additional-cni-plugins-h28gn\" (UID: \"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\") " pod="openshift-multus/multus-additional-cni-plugins-h28gn" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.497075 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/708c6ae7-fdf7-44d1-ae88-f6abbb247f93-tuning-conf-dir\") pod \"multus-additional-cni-plugins-h28gn\" (UID: \"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\") " pod="openshift-multus/multus-additional-cni-plugins-h28gn" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.505402 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.520525 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.520581 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.520590 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.520618 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.520628 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:42Z","lastTransitionTime":"2026-01-21T06:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.532202 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.548783 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h28gn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h28gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T06:54:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.564271 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2101f59b-4610-4451-83eb-86fe80385cf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a82b561fe0d124a785d8417b0f810757464a5ccc70c032a46eb0a4ad932939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2a508699e746bc42337b9e10d1cb94b36eb53292a5ca91de2e8f03eb8f671c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf06f9b5e844685f04ee12cbf239e285f1597f6a3c6444a4160596392905c4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:54:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 06:54:40.367563 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 06:54:40.368234 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:54:40.369436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4080492758/tls.crt::/tmp/serving-cert-4080492758/tls.key\\\\\\\"\\\\nI0121 06:54:40.606405 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:54:40.609631 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:54:40.609649 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:54:40.609684 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:54:40.609691 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:54:40.617391 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 06:54:40.617410 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617413 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617418 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:54:40.617421 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:54:40.617423 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:54:40.617426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 06:54:40.617614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 06:54:40.618646 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf70c5621061fc94a32901eb6f15a0d15b2ceba333d27cf88624bf9aa4ebe82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.577658 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077e47b3-6224-4749-9710-d2b308b43208\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa06c3d835def34e52c4a9b4b87d9dc8998cdefbb5eaf7c8046bf263857ef8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e698ff120a5858fa787a65c1bdaa3966dcb8974df9cbca40470f6ec58bca5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://553f6c2b8ff41184065bcf707d657326891027d0c5b8390ce50f53cdfa654d2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30521319002f52220ec6c1e4c92862f5a81e1dcace01f4a4474e3a2441b955c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.579698 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 14:39:20.82805182 +0000 UTC Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.579896 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.579906 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:54:42 crc kubenswrapper[4893]: E0121 06:54:42.579984 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.579997 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:54:42 crc kubenswrapper[4893]: E0121 06:54:42.580101 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:54:42 crc kubenswrapper[4893]: E0121 06:54:42.580174 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.591976 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f8eaf9a35d64680bb488050b8821c821635ec7bc1f53bdcd5bb3f5f4bfead3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.597894 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ecb64775-90e7-43a2-a5a8-4d73e348dcc4-host-var-lib-cni-multus\") pod \"multus-m8k4g\" (UID: \"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\") " pod="openshift-multus/multus-m8k4g" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.597945 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ecb64775-90e7-43a2-a5a8-4d73e348dcc4-multus-daemon-config\") pod \"multus-m8k4g\" (UID: \"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\") " pod="openshift-multus/multus-m8k4g" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.597965 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/708c6ae7-fdf7-44d1-ae88-f6abbb247f93-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-h28gn\" (UID: 
\"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\") " pod="openshift-multus/multus-additional-cni-plugins-h28gn" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.597982 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ecb64775-90e7-43a2-a5a8-4d73e348dcc4-multus-socket-dir-parent\") pod \"multus-m8k4g\" (UID: \"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\") " pod="openshift-multus/multus-m8k4g" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.597998 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grm4n\" (UniqueName: \"kubernetes.io/projected/5cc8e905-b368-49e8-adfa-31890665e5ae-kube-api-access-grm4n\") pod \"node-resolver-42mq5\" (UID: \"5cc8e905-b368-49e8-adfa-31890665e5ae\") " pod="openshift-dns/node-resolver-42mq5" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.598016 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ecb64775-90e7-43a2-a5a8-4d73e348dcc4-host-run-netns\") pod \"multus-m8k4g\" (UID: \"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\") " pod="openshift-multus/multus-m8k4g" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.598034 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/5cc8e905-b368-49e8-adfa-31890665e5ae-hosts-file\") pod \"node-resolver-42mq5\" (UID: \"5cc8e905-b368-49e8-adfa-31890665e5ae\") " pod="openshift-dns/node-resolver-42mq5" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.598063 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ecb64775-90e7-43a2-a5a8-4d73e348dcc4-cnibin\") pod \"multus-m8k4g\" (UID: \"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\") " pod="openshift-multus/multus-m8k4g" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.598081 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ecb64775-90e7-43a2-a5a8-4d73e348dcc4-os-release\") pod \"multus-m8k4g\" (UID: \"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\") " pod="openshift-multus/multus-m8k4g" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.598101 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ecb64775-90e7-43a2-a5a8-4d73e348dcc4-cni-binary-copy\") pod \"multus-m8k4g\" (UID: \"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\") " pod="openshift-multus/multus-m8k4g" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.598121 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/708c6ae7-fdf7-44d1-ae88-f6abbb247f93-os-release\") pod \"multus-additional-cni-plugins-h28gn\" (UID: \"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\") " pod="openshift-multus/multus-additional-cni-plugins-h28gn" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.598150 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ecb64775-90e7-43a2-a5a8-4d73e348dcc4-etc-kubernetes\") pod \"multus-m8k4g\" (UID: \"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\") " pod="openshift-multus/multus-m8k4g" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.598170 4893 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ecb64775-90e7-43a2-a5a8-4d73e348dcc4-host-var-lib-kubelet\") pod \"multus-m8k4g\" (UID: \"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\") " pod="openshift-multus/multus-m8k4g" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.598188 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/708c6ae7-fdf7-44d1-ae88-f6abbb247f93-cni-binary-copy\") pod \"multus-additional-cni-plugins-h28gn\" (UID: \"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\") " pod="openshift-multus/multus-additional-cni-plugins-h28gn" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.598206 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ecb64775-90e7-43a2-a5a8-4d73e348dcc4-host-run-k8s-cni-cncf-io\") pod \"multus-m8k4g\" (UID: \"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\") " pod="openshift-multus/multus-m8k4g" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.598228 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/708c6ae7-fdf7-44d1-ae88-f6abbb247f93-system-cni-dir\") pod \"multus-additional-cni-plugins-h28gn\" (UID: \"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\") " pod="openshift-multus/multus-additional-cni-plugins-h28gn" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.598245 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ecb64775-90e7-43a2-a5a8-4d73e348dcc4-host-var-lib-cni-bin\") pod \"multus-m8k4g\" (UID: \"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\") " pod="openshift-multus/multus-m8k4g" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.598272 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ecb64775-90e7-43a2-a5a8-4d73e348dcc4-hostroot\") pod \"multus-m8k4g\" (UID: \"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\") " pod="openshift-multus/multus-m8k4g" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.598292 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a-rootfs\") pod \"machine-config-daemon-hg78p\" (UID: \"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a\") " pod="openshift-machine-config-operator/machine-config-daemon-hg78p" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.598315 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwcm7\" (UniqueName: \"kubernetes.io/projected/ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a-kube-api-access-jwcm7\") pod \"machine-config-daemon-hg78p\" (UID: \"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a\") " pod="openshift-machine-config-operator/machine-config-daemon-hg78p" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.598334 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ecb64775-90e7-43a2-a5a8-4d73e348dcc4-system-cni-dir\") pod \"multus-m8k4g\" (UID: \"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\") " pod="openshift-multus/multus-m8k4g" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.598353 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-n2qn9\" (UniqueName: \"kubernetes.io/projected/ecb64775-90e7-43a2-a5a8-4d73e348dcc4-kube-api-access-n2qn9\") pod \"multus-m8k4g\" (UID: \"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\") " pod="openshift-multus/multus-m8k4g" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.598373 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a-proxy-tls\") pod \"machine-config-daemon-hg78p\" (UID: \"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a\") " pod="openshift-machine-config-operator/machine-config-daemon-hg78p" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.598396 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ecb64775-90e7-43a2-a5a8-4d73e348dcc4-host-run-multus-certs\") pod \"multus-m8k4g\" (UID: \"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\") " pod="openshift-multus/multus-m8k4g" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.598416 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a-mcd-auth-proxy-config\") pod \"machine-config-daemon-hg78p\" (UID: \"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a\") " pod="openshift-machine-config-operator/machine-config-daemon-hg78p" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.598436 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cshbf\" (UniqueName: \"kubernetes.io/projected/708c6ae7-fdf7-44d1-ae88-f6abbb247f93-kube-api-access-cshbf\") pod \"multus-additional-cni-plugins-h28gn\" (UID: \"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\") " pod="openshift-multus/multus-additional-cni-plugins-h28gn" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.598467 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ecb64775-90e7-43a2-a5a8-4d73e348dcc4-multus-cni-dir\") pod \"multus-m8k4g\" (UID: \"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\") " pod="openshift-multus/multus-m8k4g" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.598490 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/708c6ae7-fdf7-44d1-ae88-f6abbb247f93-cnibin\") pod \"multus-additional-cni-plugins-h28gn\" (UID: \"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\") " pod="openshift-multus/multus-additional-cni-plugins-h28gn" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.598511 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/708c6ae7-fdf7-44d1-ae88-f6abbb247f93-tuning-conf-dir\") pod \"multus-additional-cni-plugins-h28gn\" (UID: \"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\") " pod="openshift-multus/multus-additional-cni-plugins-h28gn" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.598533 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ecb64775-90e7-43a2-a5a8-4d73e348dcc4-multus-conf-dir\") pod \"multus-m8k4g\" (UID: \"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\") " pod="openshift-multus/multus-m8k4g" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.598622 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" 
(UniqueName: \"kubernetes.io/host-path/ecb64775-90e7-43a2-a5a8-4d73e348dcc4-multus-conf-dir\") pod \"multus-m8k4g\" (UID: \"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\") " pod="openshift-multus/multus-m8k4g" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.598665 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ecb64775-90e7-43a2-a5a8-4d73e348dcc4-host-var-lib-cni-multus\") pod \"multus-m8k4g\" (UID: \"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\") " pod="openshift-multus/multus-m8k4g" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.599408 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ecb64775-90e7-43a2-a5a8-4d73e348dcc4-multus-daemon-config\") pod \"multus-m8k4g\" (UID: \"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\") " pod="openshift-multus/multus-m8k4g" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.600035 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/708c6ae7-fdf7-44d1-ae88-f6abbb247f93-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-h28gn\" (UID: \"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\") " pod="openshift-multus/multus-additional-cni-plugins-h28gn" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.600217 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ecb64775-90e7-43a2-a5a8-4d73e348dcc4-multus-socket-dir-parent\") pod \"multus-m8k4g\" (UID: \"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\") " pod="openshift-multus/multus-m8k4g" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.600525 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ecb64775-90e7-43a2-a5a8-4d73e348dcc4-host-run-netns\") pod \"multus-m8k4g\" (UID: \"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\") " pod="openshift-multus/multus-m8k4g" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.600584 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/5cc8e905-b368-49e8-adfa-31890665e5ae-hosts-file\") pod \"node-resolver-42mq5\" (UID: \"5cc8e905-b368-49e8-adfa-31890665e5ae\") " pod="openshift-dns/node-resolver-42mq5" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.600630 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ecb64775-90e7-43a2-a5a8-4d73e348dcc4-cnibin\") pod \"multus-m8k4g\" (UID: \"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\") " pod="openshift-multus/multus-m8k4g" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.600735 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a-rootfs\") pod \"machine-config-daemon-hg78p\" (UID: \"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a\") " pod="openshift-machine-config-operator/machine-config-daemon-hg78p" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.600816 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ecb64775-90e7-43a2-a5a8-4d73e348dcc4-host-run-multus-certs\") pod \"multus-m8k4g\" (UID: \"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\") " pod="openshift-multus/multus-m8k4g" Jan 21 
06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.600858 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ecb64775-90e7-43a2-a5a8-4d73e348dcc4-system-cni-dir\") pod \"multus-m8k4g\" (UID: \"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\") " pod="openshift-multus/multus-m8k4g" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.601056 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ecb64775-90e7-43a2-a5a8-4d73e348dcc4-multus-cni-dir\") pod \"multus-m8k4g\" (UID: \"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\") " pod="openshift-multus/multus-m8k4g" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.601302 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ecb64775-90e7-43a2-a5a8-4d73e348dcc4-cni-binary-copy\") pod \"multus-m8k4g\" (UID: \"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\") " pod="openshift-multus/multus-m8k4g" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.601380 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/708c6ae7-fdf7-44d1-ae88-f6abbb247f93-os-release\") pod \"multus-additional-cni-plugins-h28gn\" (UID: \"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\") " pod="openshift-multus/multus-additional-cni-plugins-h28gn" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.601418 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ecb64775-90e7-43a2-a5a8-4d73e348dcc4-etc-kubernetes\") pod \"multus-m8k4g\" (UID: \"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\") " pod="openshift-multus/multus-m8k4g" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.601452 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ecb64775-90e7-43a2-a5a8-4d73e348dcc4-host-var-lib-kubelet\") pod \"multus-m8k4g\" (UID: \"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\") " pod="openshift-multus/multus-m8k4g" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.601556 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/708c6ae7-fdf7-44d1-ae88-f6abbb247f93-system-cni-dir\") pod \"multus-additional-cni-plugins-h28gn\" (UID: \"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\") " pod="openshift-multus/multus-additional-cni-plugins-h28gn" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.601660 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ecb64775-90e7-43a2-a5a8-4d73e348dcc4-host-run-k8s-cni-cncf-io\") pod \"multus-m8k4g\" (UID: \"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\") " pod="openshift-multus/multus-m8k4g" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.601733 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ecb64775-90e7-43a2-a5a8-4d73e348dcc4-hostroot\") pod \"multus-m8k4g\" (UID: \"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\") " pod="openshift-multus/multus-m8k4g" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.601771 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/ecb64775-90e7-43a2-a5a8-4d73e348dcc4-host-var-lib-cni-bin\") pod \"multus-m8k4g\" (UID: \"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\") " pod="openshift-multus/multus-m8k4g" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.601832 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/708c6ae7-fdf7-44d1-ae88-f6abbb247f93-cnibin\") pod \"multus-additional-cni-plugins-h28gn\" (UID: \"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\") " pod="openshift-multus/multus-additional-cni-plugins-h28gn" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.601971 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a-mcd-auth-proxy-config\") pod \"machine-config-daemon-hg78p\" (UID: \"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a\") " pod="openshift-machine-config-operator/machine-config-daemon-hg78p" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.602041 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/708c6ae7-fdf7-44d1-ae88-f6abbb247f93-cni-binary-copy\") pod \"multus-additional-cni-plugins-h28gn\" (UID: \"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\") " pod="openshift-multus/multus-additional-cni-plugins-h28gn" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.602085 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/708c6ae7-fdf7-44d1-ae88-f6abbb247f93-tuning-conf-dir\") pod \"multus-additional-cni-plugins-h28gn\" (UID: \"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\") " pod="openshift-multus/multus-additional-cni-plugins-h28gn" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.602116 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ecb64775-90e7-43a2-a5a8-4d73e348dcc4-os-release\") pod \"multus-m8k4g\" (UID: \"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\") " pod="openshift-multus/multus-m8k4g" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.608193 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a-proxy-tls\") pod \"machine-config-daemon-hg78p\" (UID: \"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a\") " pod="openshift-machine-config-operator/machine-config-daemon-hg78p" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.614366 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.616455 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cshbf\" (UniqueName: \"kubernetes.io/projected/708c6ae7-fdf7-44d1-ae88-f6abbb247f93-kube-api-access-cshbf\") pod \"multus-additional-cni-plugins-h28gn\" (UID: \"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\") " pod="openshift-multus/multus-additional-cni-plugins-h28gn" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.616645 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwcm7\" (UniqueName: \"kubernetes.io/projected/ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a-kube-api-access-jwcm7\") pod \"machine-config-daemon-hg78p\" (UID: \"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a\") " pod="openshift-machine-config-operator/machine-config-daemon-hg78p" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.618549 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grm4n\" (UniqueName: \"kubernetes.io/projected/5cc8e905-b368-49e8-adfa-31890665e5ae-kube-api-access-grm4n\") pod \"node-resolver-42mq5\" (UID: \"5cc8e905-b368-49e8-adfa-31890665e5ae\") " pod="openshift-dns/node-resolver-42mq5" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.625544 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.625581 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.625590 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.625607 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.625622 4893 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:42Z","lastTransitionTime":"2026-01-21T06:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.629726 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2qn9\" (UniqueName: \"kubernetes.io/projected/ecb64775-90e7-43a2-a5a8-4d73e348dcc4-kube-api-access-n2qn9\") pod \"multus-m8k4g\" (UID: \"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\") " pod="openshift-multus/multus-m8k4g" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.635504 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ee491ea29d016cb1b74fc66b386aa8056d1d8b3c7ad207cf329749db2b4d638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e705e9b341a3c711cf78ffd1fde692a9517b06fcdcfc2b96543d826c72c5484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\
"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.659732 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.672175 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f8eaf9a35d64680bb488050b8821c821635ec7bc1f53bdcd5bb3f5f4bfead3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.683276 4893 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.696642 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m8k4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2qn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m8k4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.712646 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ee491ea29d016cb1b74fc66b386aa8056d1d8b3c7ad207cf329749db2b4d638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e705e9b341a3c711cf78ffd1fde692a9517b06fcdcfc2b96543d826c72c5484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.724194 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.728395 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.728434 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.728445 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.728462 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.728475 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:42Z","lastTransitionTime":"2026-01-21T06:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.737535 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-42mq5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc8e905-b368-49e8-adfa-31890665e5ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-42mq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.751133 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hg78p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.769060 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2101f59b-4610-4451-83eb-86fe80385cf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a82b561fe0d124a785d8417b0f810757464a5ccc70c032a46eb0a4ad932939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2a508699e746bc42337b9e10d1cb94b36eb53292a5ca91de2e8f03eb8f671c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf06f9b5e844685f04ee12cbf239e285f1597f6a3c6444a4160596392905c4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:54:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 06:54:40.367563 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 06:54:40.368234 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:54:40.369436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4080492758/tls.crt::/tmp/serving-cert-4080492758/tls.key\\\\\\\"\\\\nI0121 06:54:40.606405 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:54:40.609631 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:54:40.609649 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:54:40.609684 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:54:40.609691 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:54:40.617391 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 06:54:40.617410 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617413 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617418 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:54:40.617421 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:54:40.617423 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:54:40.617426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 06:54:40.617614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 06:54:40.618646 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf70c5621061fc94a32901eb6f15a0d15b2ceba333d27cf88624bf9aa4ebe82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.776631 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-h28gn" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.784354 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"077e47b3-6224-4749-9710-d2b308b43208\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa06c3d835def34e52c4a9b4b87d9dc8998cdefbb5eaf7c8046bf263857ef8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e698ff120a5858fa787a65c1bdaa3966dcb8974df9cbca40470f6ec58bca5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://553f6c2b8ff41184065bcf707d657326891027d0c5b8390ce50f53cdfa654d2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\
\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30521319002f52220ec6c1e4c92862f5a81e1dcace01f4a4474e3a2441b955c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.789941 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-42mq5" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.796716 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-m8k4g" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.802787 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.803882 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.825077 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h28gn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h28gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.831381 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.831421 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.831431 4893 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.831448 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.831458 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:42Z","lastTransitionTime":"2026-01-21T06:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.839298 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.840387 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qzsg6"] Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.841347 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.843991 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.844053 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.844599 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.845423 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.845541 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.846167 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.846400 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.861206 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ee491ea29d016cb1b74fc66b386aa8056d1d8b3c7ad207cf329749db2b4d638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e705e9b341a3c711cf78ffd1fde692a9517b06fcdcfc2b96543d826c72c5484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:42Z is after 2025-08-24T17:21:41Z"
Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.877097 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:42Z is after 2025-08-24T17:21:41Z"
Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.892481 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h28gn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h28gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:42Z is after 2025-08-24T17:21:41Z"
Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.900293 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-host-run-netns\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.900330 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-etc-openvswitch\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.900352 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-host-run-ovn-kubernetes\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.900375 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-host-kubelet\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.900394 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-host-cni-bin\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.900415 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6719fb30-da06-4964-b730-09e444618d94-ovnkube-config\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.900433 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-var-lib-openvswitch\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.900462 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-systemd-units\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.900479 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-host-slash\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.900500 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-log-socket\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.900517 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6719fb30-da06-4964-b730-09e444618d94-ovnkube-script-lib\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.900535 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-run-systemd\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.900553 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-run-ovn\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.900572 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-node-log\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.900592 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.900612 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-run-openvswitch\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.900640 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxcrt\" (UniqueName: \"kubernetes.io/projected/6719fb30-da06-4964-b730-09e444618d94-kube-api-access-lxcrt\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.900662 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-host-cni-netd\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.900701 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6719fb30-da06-4964-b730-09e444618d94-ovn-node-metrics-cert\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.900733 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6719fb30-da06-4964-b730-09e444618d94-env-overrides\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.908573 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-42mq5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc8e905-b368-49e8-adfa-31890665e5ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-42mq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:42Z is after 2025-08-24T17:21:41Z"
Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.943172 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.943223 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.943237 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.943255 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.943274 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:42Z","lastTransitionTime":"2026-01-21T06:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.948746 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-42mq5" event={"ID":"5cc8e905-b368-49e8-adfa-31890665e5ae","Type":"ContainerStarted","Data":"a2d1a3088537cfc47a4180f2072669ddbaabe772c5f207111169f38f449d7583"}
Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.949807 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hg78p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:42Z is after 2025-08-24T17:21:41Z"
Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.952557 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-h28gn" event={"ID":"708c6ae7-fdf7-44d1-ae88-f6abbb247f93","Type":"ContainerStarted","Data":"7f57e855e5091c054dfffc243e24a1edf8ce4f76bd3f79d583788eb7e2e75e8c"}
Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.957590 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-m8k4g" event={"ID":"ecb64775-90e7-43a2-a5a8-4d73e348dcc4","Type":"ContainerStarted","Data":"a7fecacf789805324377f6c62800019a6810cd515e0fa30e641c4913d4f11b4e"}
Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.969965 4893 scope.go:117] "RemoveContainer" containerID="e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711"
Jan 21 06:54:42 crc kubenswrapper[4893]: E0121 06:54:42.971076 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Jan 21 06:54:42 crc kubenswrapper[4893]: I0121 06:54:42.996058 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6719fb30-da06-4964-b730-09e444618d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzsg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:42Z is after 2025-08-24T17:21:41Z"
Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.004082 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-log-socket\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.004125 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6719fb30-da06-4964-b730-09e444618d94-ovnkube-script-lib\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.004142 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-run-systemd\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.004157 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-run-ovn\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.004172 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-node-log\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.004187 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.004204 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-run-openvswitch\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.004230 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxcrt\" (UniqueName: \"kubernetes.io/projected/6719fb30-da06-4964-b730-09e444618d94-kube-api-access-lxcrt\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.004265 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-host-cni-netd\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.004291 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6719fb30-da06-4964-b730-09e444618d94-ovn-node-metrics-cert\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.004306 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6719fb30-da06-4964-b730-09e444618d94-env-overrides\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.004326 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-host-run-netns\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.004340 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-etc-openvswitch\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.004353 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-host-run-ovn-kubernetes\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.004379 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-host-kubelet\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.004396 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-host-cni-bin\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.004410 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6719fb30-da06-4964-b730-09e444618d94-ovnkube-config\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.004430 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-systemd-units\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.004444 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-var-lib-openvswitch\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.004459 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-host-slash\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.004512 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-host-slash\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.004866 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-log-socket\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.005443 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6719fb30-da06-4964-b730-09e444618d94-ovnkube-script-lib\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.005817 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6719fb30-da06-4964-b730-09e444618d94-env-overrides\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.005869 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-host-run-netns\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.005898 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-etc-openvswitch\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.005925 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-host-run-ovn-kubernetes\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.006139 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-host-kubelet\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.006179 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-host-cni-bin\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.006575 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6719fb30-da06-4964-b730-09e444618d94-ovnkube-config\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.006608 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-systemd-units\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.006632 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-var-lib-openvswitch\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.006702 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-run-openvswitch\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.006724 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-run-systemd\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.006762 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-run-ovn\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.006782 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-node-log\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.006809 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.007417 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-host-cni-netd\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.021202 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6719fb30-da06-4964-b730-09e444618d94-ovn-node-metrics-cert\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.029064 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2101f59b-4610-4451-83eb-86fe80385cf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a82b561fe0d124a785d8417b0f810757464a5ccc70c032a46eb0a4ad932939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2a508699e746bc42337b9e10d1cb94b36eb53292a5ca91de2e8f03eb8f671c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf06f9b5e844685f04ee12cbf239e285f1597f6a3c6444a4160596392905c4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:54:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 06:54:40.367563 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 06:54:40.368234 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:54:40.369436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4080492758/tls.crt::/tmp/serving-cert-4080492758/tls.key\\\\\\\"\\\\nI0121 06:54:40.606405 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:54:40.609631 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:54:40.609649 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:54:40.609684 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:54:40.609691 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:54:40.617391 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 06:54:40.617410 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617413 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617418 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:54:40.617421 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:54:40.617423 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:54:40.617426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 06:54:40.617614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 06:54:40.618646 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf70c5621061fc94a32901eb6f15a0d15b2ceba333d27cf88624bf9aa4ebe82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:43Z is after 2025-08-24T17:21:41Z"
Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.036132 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxcrt\" (UniqueName: \"kubernetes.io/projected/6719fb30-da06-4964-b730-09e444618d94-kube-api-access-lxcrt\") pod \"ovnkube-node-qzsg6\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6"
Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.047283 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.047334 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.047344 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.047357 4893 kubelet_node_status.go:724] "Recording
event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.047365 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:43Z","lastTransitionTime":"2026-01-21T06:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.151783 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.151819 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.151830 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.151846 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.151856 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:43Z","lastTransitionTime":"2026-01-21T06:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.152156 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077e47b3-6224-4749-9710-d2b308b43208\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa06c3d835def34e52c4a9b4b87d9dc8998cdefbb5eaf7c8046bf263857ef8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e698ff120a5858fa787a65c1bdaa3966dcb8974df9cbca40470f6ec58bca5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://553f6c2b8ff41184065bcf707d657326891027d0c5b8390ce50f53cdfa654d2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30521319002f52220ec6c1e4c92862f5a81e1dcace01f4a4474e3a2441b955c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:43Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.177745 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
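
The status patches in these "Failed to update status for pod" entries are JSON escaped twice over: once when the kubelet embeds the patch in the error string, and once more by klog quoting the whole message. Undoing both layers makes the diffs legible. A minimal triage sketch follows; Python is an arbitrary choice here, the regex is shaped to the quoting seen in the lines above, and it assumes one journal entry per line (which is how journalctl emits them, even though this capture wraps):

    import json, re, sys

    # Matches the raw journal bytes: failed to patch status \"{...}\" for pod
    PATCH = re.compile(r'failed to patch status \\"(\{.*\})\\" for pod')

    for line in sys.stdin:
        m = PATCH.search(line)
        if not m:
            continue
        raw = m.group(1)
        # Two unescape passes: klog's quoting, then the embedded JSON string.
        for _ in range(2):
            raw = raw.encode().decode('unicode_escape')
        print(json.dumps(json.loads(raw), indent=2))

Piped over a capture like this one, each opaque err=... blob comes out as an indented status diff, which is far easier to compare across pods.
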
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:43Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.197810 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:43Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.213399 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f8eaf9a35d64680bb488050b8821c821635ec7bc1f53bdcd5bb3f5f4bfead3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:43Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.220302 4893 util.go:30] "No sandbox for pod can be found. 
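
Every one of these patch failures is the same fault repeated once per pod: the kubelet's status updates pass through the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743, and that webhook is serving a certificate that expired at 2025-08-24T17:21:41Z while the node clock reads 2026-01-21 (typical of a CRC virtual machine resumed long after its bundled certificates were minted). A quick way to confirm from the node, sketched under the assumption that Python and the third-party cryptography package are available; ssl.get_server_certificate skips verification when no CA bundle is given, so it still retrieves the expired certificate:

    import ssl
    from cryptography import x509

    # Host and port come from the webhook URL quoted in the errors above.
    pem = ssl.get_server_certificate(("127.0.0.1", 9743))
    cert = x509.load_pem_x509_certificate(pem.encode())
    print("notBefore:", cert.not_valid_before)
    print("notAfter: ", cert.not_valid_after)  # expect 2025-08-24 17:21:41 here

Until that certificate is rotated, none of the kubelet's retries can succeed; the identical failures that follow are the status manager re-queueing each pod.
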
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.242265 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:43Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.259281 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.259473 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.259569 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.259702 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.259799 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:43Z","lastTransitionTime":"2026-01-21T06:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.263882 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m8k4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2qn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\
\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m8k4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:43Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.363546 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.363579 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.363588 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.363601 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.363610 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:43Z","lastTransitionTime":"2026-01-21T06:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.466212 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.466245 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.466255 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.466268 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.466277 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:43Z","lastTransitionTime":"2026-01-21T06:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.570010 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.570057 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.570068 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.570087 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.570099 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:43Z","lastTransitionTime":"2026-01-21T06:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.579792 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 02:44:47.807517902 +0000 UTC Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.673080 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.673115 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.673125 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.673140 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.673150 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:43Z","lastTransitionTime":"2026-01-21T06:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
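
The block above repeats roughly every hundred milliseconds: while Ready=False the kubelet re-records the same four node conditions on each status sync, so the volume of these lines measures elapsed time, not distinct failures. The interleaved certificate_manager entry also shows the kubelet's own serving certificate is valid until 2026-02-24 (though on this clock its rotation deadline of 2025-11-14 has already passed), so the only expired certificate in play is the webhook's. To collapse the repetition when scanning a capture like this, a small counter is enough (same one-entry-per-line assumption as the sketch above):

    import collections, re, sys

    EVENT = re.compile(r'event="(\w+)"')
    counts = collections.Counter(
        m.group(1) for line in sys.stdin for m in [EVENT.search(line)] if m)
    for event, n in counts.most_common():
        print(f"{n:5d}  {event}")
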
Has your network provider started?"} Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.775791 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.775825 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.775834 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.775850 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.775862 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:43Z","lastTransitionTime":"2026-01-21T06:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.885694 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.885731 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.885742 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.885762 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.885776 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:43Z","lastTransitionTime":"2026-01-21T06:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.970439 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-m8k4g" event={"ID":"ecb64775-90e7-43a2-a5a8-4d73e348dcc4","Type":"ContainerStarted","Data":"1f4a3074a4406cdbdf07c7289f9304d66e2b84b46bf0ac9c6aadf31817539dda"} Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.972469 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-42mq5" event={"ID":"5cc8e905-b368-49e8-adfa-31890665e5ae","Type":"ContainerStarted","Data":"49cefc1948611ccad178b25e80e75e81bdf1b4b578d3fb58fa7c342d22debadd"} Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.973653 4893 generic.go:334] "Generic (PLEG): container finished" podID="6719fb30-da06-4964-b730-09e444618d94" containerID="9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630" exitCode=0 Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.973701 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" event={"ID":"6719fb30-da06-4964-b730-09e444618d94","Type":"ContainerDied","Data":"9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630"} Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.973757 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" event={"ID":"6719fb30-da06-4964-b730-09e444618d94","Type":"ContainerStarted","Data":"fdfc30e4324f373ef418e02201a091fc892f0100545a2099c061bd374aba586a"} Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.984649 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"b1095483a1c6cc4597500607b4423c12c3fc03500c2f3b8f3fc5fc6eae8c34d9"} Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.986060 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-h28gn" event={"ID":"708c6ae7-fdf7-44d1-ae88-f6abbb247f93","Type":"ContainerStarted","Data":"485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb"} Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.991379 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.991423 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.991434 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.991450 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.991462 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:43Z","lastTransitionTime":"2026-01-21T06:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
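
Unlike the status-patch noise, the SyncLoop (PLEG) entries above record real forward progress: sandboxes and first containers have started for multus, node-resolver, iptables-alerter, the additional CNI plugins, and the machine-config daemon, and one ovnkube-node container has already run to completion with exit code 0 (by the initContainerStatuses in the nearby patches, presumably its kubecfg-setup init container). Once ovnkube-node's controller writes a CNI config into /etc/kubernetes/cni/net.d/, the NodeNotReady loop should clear. A sketch for pulling just these lifecycle transitions out of the stream, with the regex shaped to the SyncLoop lines above:

    import re, sys

    PLEG = re.compile(r'SyncLoop \(PLEG\): event for pod" pod="([^"]+)" '
                      r'event=\{"ID":"[^"]+","Type":"(\w+)","Data":"(\w+)"\}')
    for line in sys.stdin:
        m = PLEG.search(line)
        if m:
            pod, event_type, container_id = m.groups()
            print(f"{event_type:18s} {pod}  {container_id[:13]}")
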
Has your network provider started?"} Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.992256 4893 scope.go:117] "RemoveContainer" containerID="e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711" Jan 21 06:54:43 crc kubenswrapper[4893]: E0121 06:54:43.992394 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.992393 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" event={"ID":"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a","Type":"ContainerStarted","Data":"eb5dc99ccba68df748aa327298285fec6936c75a3327906d9c789bf75c04815e"} Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.992579 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" event={"ID":"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a","Type":"ContainerStarted","Data":"59520d6be8547ef44262866e4c11b1ae43ae8ef545545a93c291f5e238718a75"} Jan 21 06:54:43 crc kubenswrapper[4893]: I0121 06:54:43.992651 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" event={"ID":"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a","Type":"ContainerStarted","Data":"8e0b88077a73d5d2aa6987aadb10fff39a7b11d6be8f67bcbeb8a145072efcc0"} Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.010828 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hg78p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.094203 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.094260 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.094271 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.094291 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.094303 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:44Z","lastTransitionTime":"2026-01-21T06:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
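
The RemoveContainer / CrashLoopBackOff pair for kube-apiserver-check-endpoints above is a symptom of the same degraded control plane rather than a separate fault: its previous run died with pods "kube-apiserver-crc" not found, and the kubelet is now holding it in back-off. The quoted "back-off 10s" is the start of the kubelet's usual restart schedule; as far as the defaults go it doubles per failed restart up to a five-minute cap. Illustratively (the constants here are the kubelet defaults, not values stated in this log):

    def crashloop_backoff(restarts: int, initial: int = 10, cap: int = 300) -> int:
        # Exponential back-off with a ceiling, matching kubelet's default policy.
        return min(initial * (2 ** restarts), cap)

    print([crashloop_backoff(n) for n in range(7)])
    # -> [10, 20, 40, 80, 160, 300, 300]

So if the webhook certificate stays expired, expect this container's restart lines to thin out to one every five minutes while the status-patch failures keep repeating at full rate.
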
Has your network provider started?"} Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.119608 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6719fb30-da06-4964-b730-09e444618d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzsg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.141231 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2101f59b-4610-4451-83eb-86fe80385cf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a82b561fe0d124a785d8417b0f810757464a5ccc70c032a46eb0a4ad932939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2a508699e746bc42337b9e10d1cb94b36eb53292a5ca91de2e8f03eb8f671c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf06f9b5e844685f04ee12cbf239e285f1597f6a3c6444a4160596392905c4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:54:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 06:54:40.367563 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 06:54:40.368234 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:54:40.369436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4080492758/tls.crt::/tmp/serving-cert-4080492758/tls.key\\\\\\\"\\\\nI0121 06:54:40.606405 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:54:40.609631 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:54:40.609649 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:54:40.609684 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:54:40.609691 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:54:40.617391 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 06:54:40.617410 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617413 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617418 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:54:40.617421 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:54:40.617423 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:54:40.617426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 06:54:40.617614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 06:54:40.618646 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf70c5621061fc94a32901eb6f15a0d15b2ceba333d27cf88624bf9aa4ebe82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.160434 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077e47b3-6224-4749-9710-d2b308b43208\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa06c3d835def34e52c4a9b4b87d9dc8998cdefbb5eaf7c8046bf263857ef8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e698ff120a5858fa787a65c1bdaa3966dcb8974df9cbca40470f6ec58bca5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://553f6c2b8ff41184065bcf707d657326891027d0c5b8390ce50f53cdfa654d2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30521319002f52220ec6c1e4c92862f5a81e1dcace01f4a4474e3a2441b955c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.179801 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.207229 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.207270 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.207282 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.207299 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.207311 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:44Z","lastTransitionTime":"2026-01-21T06:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.212687 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h28gn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\
\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h28gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.222517 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.222618 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.222649 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.222693 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.222727 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:54:44 crc kubenswrapper[4893]: E0121 06:54:44.222862 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 06:54:44 crc kubenswrapper[4893]: E0121 06:54:44.222883 4893 projected.go:288] Couldn't get 
configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 06:54:44 crc kubenswrapper[4893]: E0121 06:54:44.222895 4893 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:54:44 crc kubenswrapper[4893]: E0121 06:54:44.222947 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 06:54:48.222930886 +0000 UTC m=+29.453276798 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:54:44 crc kubenswrapper[4893]: E0121 06:54:44.223215 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 06:54:44 crc kubenswrapper[4893]: E0121 06:54:44.223318 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 06:54:44 crc kubenswrapper[4893]: E0121 06:54:44.223425 4893 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:54:44 crc kubenswrapper[4893]: E0121 06:54:44.223430 4893 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 06:54:44 crc kubenswrapper[4893]: E0121 06:54:44.223393 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:54:48.223380868 +0000 UTC m=+29.453726780 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:54:44 crc kubenswrapper[4893]: E0121 06:54:44.223779 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-01-21 06:54:48.223764569 +0000 UTC m=+29.454110471 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:54:44 crc kubenswrapper[4893]: E0121 06:54:44.223895 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 06:54:48.223881893 +0000 UTC m=+29.454227795 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 06:54:44 crc kubenswrapper[4893]: E0121 06:54:44.223331 4893 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 06:54:44 crc kubenswrapper[4893]: E0121 06:54:44.224112 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 06:54:48.224103069 +0000 UTC m=+29.454448971 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.234433 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-42mq5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc8e905-b368-49e8-adfa-31890665e5ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-42mq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.257119 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.278369 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f8eaf9a35d64680bb488050b8821c821635ec7bc1f53bdcd5bb3f5f4bfead3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.290591 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.304038 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m8k4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f4a3074a4406cdbdf07c7289f9304d66e2b84b46bf0ac9c6aadf31817539dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-l
ib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2qn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m8k4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.316818 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ee491ea29d016cb1b74fc66b386aa8056d1d8b3c7ad207cf329749db2b4d638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e705e9b341a3c711cf78ffd1fde692a9517b06fcdcfc2b96543d826c72c5484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.328215 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.337821 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.337865 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.337876 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.337891 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.337902 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:44Z","lastTransitionTime":"2026-01-21T06:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.340491 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-42mq5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc8e905-b368-49e8-adfa-31890665e5ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49cefc1948611ccad178b25e80e75e81bdf1b4b578d3fb58fa7c342d22debadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-42mq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.359488 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5dc99ccba68df748aa327298285fec6936c75a3327906d9c789bf75c04815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59520d6be8547ef44262866e4c11b1ae43ae8ef545545a93c291f5e238718a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hg78p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.407395 4893 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6719fb30-da06-4964-b730-09e444618d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzsg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.423945 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2101f59b-4610-4451-83eb-86fe80385cf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a82b561fe0d124a785d8417b0f810757464a5ccc70c032a46eb0a4ad932939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2a508699e746bc42337b9e10d1cb94b36eb53292a5ca91de2e8f03eb8f671c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf06f9b5e844685f04ee12cbf239e285f1597f6a3c6444a4160596392905c4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:54:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 06:54:40.367563 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 06:54:40.368234 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:54:40.369436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4080492758/tls.crt::/tmp/serving-cert-4080492758/tls.key\\\\\\\"\\\\nI0121 06:54:40.606405 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:54:40.609631 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:54:40.609649 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:54:40.609684 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:54:40.609691 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:54:40.617391 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 06:54:40.617410 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617413 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617418 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:54:40.617421 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:54:40.617423 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:54:40.617426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 06:54:40.617614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 06:54:40.618646 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf70c5621061fc94a32901eb6f15a0d15b2ceba333d27cf88624bf9aa4ebe82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.452019 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.452075 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.452092 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.452121 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.452132 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:44Z","lastTransitionTime":"2026-01-21T06:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.457087 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"077e47b3-6224-4749-9710-d2b308b43208\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa06c3d835def34e52c4a9b4b87d9dc8998cdefbb5eaf7c8046bf263857ef8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e698ff120a5858fa787a65c1bdaa3966dcb8974df9cbca40470f6ec58bca5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://553f6c2b8ff41184065bcf707d657326891027d0c5b8390ce50f53cdfa654d2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30521319002f52220ec6c1e4c92862f5a81e1dcace01f4a4474e3a2441b955c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.479124 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.572870 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.572910 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.572921 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.572944 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.572959 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:44Z","lastTransitionTime":"2026-01-21T06:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.580033 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 00:50:41.076503689 +0000 UTC Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.580137 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.580156 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.580181 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:54:44 crc kubenswrapper[4893]: E0121 06:54:44.581425 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:54:44 crc kubenswrapper[4893]: E0121 06:54:44.581446 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:54:44 crc kubenswrapper[4893]: E0121 06:54:44.581379 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.595176 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h28gn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h28gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.610440 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.627516 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f8eaf9a35d64680bb488050b8821c821635ec7bc1f53bdcd5bb3f5f4bfead3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.641961 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.659805 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m8k4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f4a3074a4406cdbdf07c7289f9304d66e2b84b46bf0ac9c6aadf31817539dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-l
ib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2qn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m8k4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.675231 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.675264 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.675187 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ee491ea29d016cb1b74fc66b386aa8056d1d8b3c7ad207cf329749db2b4d638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e705e9b341a3c711cf78ffd1fde692a9517b06fcdcfc2b96543d826c72c5484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.675276 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.675392 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.675407 4893 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:44Z","lastTransitionTime":"2026-01-21T06:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.694297 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1095483a1c6cc4597500607b4423c12c3fc03500c2f3b8f3fc5fc6eae8c34d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.777967 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.778209 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.778221 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.778240 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.778252 4893 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:44Z","lastTransitionTime":"2026-01-21T06:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.880871 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.880912 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.880922 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.880938 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:44 crc kubenswrapper[4893]: I0121 06:54:44.880949 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:44Z","lastTransitionTime":"2026-01-21T06:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:44.989786 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:44.989837 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:44.989850 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:44.989871 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:44.989886 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:44Z","lastTransitionTime":"2026-01-21T06:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:44.997014 4893 generic.go:334] "Generic (PLEG): container finished" podID="708c6ae7-fdf7-44d1-ae88-f6abbb247f93" containerID="485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb" exitCode=0 Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:44.997130 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-h28gn" event={"ID":"708c6ae7-fdf7-44d1-ae88-f6abbb247f93","Type":"ContainerDied","Data":"485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb"} Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.002707 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" event={"ID":"6719fb30-da06-4964-b730-09e444618d94","Type":"ContainerStarted","Data":"967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598"} Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.002792 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" event={"ID":"6719fb30-da06-4964-b730-09e444618d94","Type":"ContainerStarted","Data":"bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b"} Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.002811 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" event={"ID":"6719fb30-da06-4964-b730-09e444618d94","Type":"ContainerStarted","Data":"22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a"} Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.019029 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ee491ea29d016cb1b74fc66b386aa8056d1d8b3c7ad207cf329749db2b4d638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e705e9b341a3c711cf78ffd1fde692a9517b06fcdcfc2b96543d826c72c5484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:45Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.033333 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1095483a1c6cc4597500607b4423c12c3fc03500c2f3b8f3fc5fc6eae8c34d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:45Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.059890 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:45Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.076947 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h28gn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h28gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:45Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:45 crc 
kubenswrapper[4893]: I0121 06:54:45.097264 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-42mq5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc8e905-b368-49e8-adfa-31890665e5ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49cefc1948611ccad178b25e80e75e81bdf1b4b578d3fb58fa7c342d22debadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-42mq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:45Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.097420 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.097464 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.097475 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.097494 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.097506 4893 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:45Z","lastTransitionTime":"2026-01-21T06:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.126795 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5dc99ccba68df748aa327298285fec6936c75a3327906d9c789bf75c04815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59520d6be8547ef44262866e4c11b1ae43ae8ef545545a93c291f5e238718a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hg78p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:45Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.153439 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6719fb30-da06-4964-b730-09e444618d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzsg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:45Z 
is after 2025-08-24T17:21:41Z" Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.179968 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2101f59b-4610-4451-83eb-86fe80385cf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a82b561fe0d124a785d8417b0f810757464a5ccc70c032a46eb0a4ad932939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2a508699e746bc42337b9e10d1cb94b36eb53292a5ca91de2e8f03eb8f671c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf06f9b5e844685f04ee12cbf239e285f1597f6a3c6444a4160596392905c4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:54:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 06:54:40.367563 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 06:54:40.368234 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:54:40.369436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4080492758/tls.crt::/tmp/serving-cert-4080492758/tls.key\\\\\\\"\\\\nI0121 06:54:40.606405 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:54:40.609631 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:54:40.609649 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:54:40.609684 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:54:40.609691 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:54:40.617391 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 06:54:40.617410 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617413 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617418 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:54:40.617421 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:54:40.617423 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:54:40.617426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 06:54:40.617614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 06:54:40.618646 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf70c5621061fc94a32901eb6f15a0d15b2ceba333d27cf88624bf9aa4ebe82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:45Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.191876 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077e47b3-6224-4749-9710-d2b308b43208\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa06c3d835def34e52c4a9b4b87d9dc8998cdefbb5eaf7c8046bf263857ef8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e698ff120a5858fa787a65c1bdaa3966dcb8974df9cbca40470f6ec58bca5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://553f6c2b8ff41184065bcf707d657326891027d0c5b8390ce50f53cdfa654d2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30521319002f52220ec6c1e4c92862f5a81e1dcace01f4a4474e3a2441b955c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:45Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.201575 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.201665 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.201783 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.201801 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.201811 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:45Z","lastTransitionTime":"2026-01-21T06:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.205413 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:45Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.253878 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:45Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.274202 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m8k4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f4a3074a4406cdbdf07c7289f9304d66e2b84b46bf0ac9c6aadf31817539dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2qn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m8k4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:45Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.387655 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.387717 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.387729 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.387753 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.387765 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:45Z","lastTransitionTime":"2026-01-21T06:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.493502 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.493536 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.493544 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.493557 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.493567 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:45Z","lastTransitionTime":"2026-01-21T06:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.497192 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f8eaf9a35d64680bb488050b8821c821635ec7bc1f53bdcd5bb3f5f4bfead3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:45Z is after 2025-08-24T17:21:41Z" 
Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.581518 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 14:11:21.146643165 +0000 UTC Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.596123 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.596162 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.596176 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.596192 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.596203 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:45Z","lastTransitionTime":"2026-01-21T06:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.745866 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.745888 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.745895 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.745907 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.745915 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:45Z","lastTransitionTime":"2026-01-21T06:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.848491 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.848826 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.848838 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.848857 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.848869 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:45Z","lastTransitionTime":"2026-01-21T06:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.965234 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.965306 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.965317 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.965334 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.965344 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:45Z","lastTransitionTime":"2026-01-21T06:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.968128 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-wlrc6"] Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.968524 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-wlrc6" Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.970464 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.970661 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.970844 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.974216 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 21 06:54:45 crc kubenswrapper[4893]: I0121 06:54:45.983856 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:45Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.006139 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-h28gn" event={"ID":"708c6ae7-fdf7-44d1-ae88-f6abbb247f93","Type":"ContainerStarted","Data":"06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214"} Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.008379 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" event={"ID":"6719fb30-da06-4964-b730-09e444618d94","Type":"ContainerStarted","Data":"ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba"} Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.008408 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" event={"ID":"6719fb30-da06-4964-b730-09e444618d94","Type":"ContainerStarted","Data":"26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437"} Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.024040 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m8k4g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f4a3074a4406cdbdf07c7289f9304d66e2b84b46bf0ac9c6aadf31817539dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2qn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m8k4g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:46Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.037714 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f8eaf9a35d64680bb488050b8821c821635ec7bc1f53bdcd5bb3f5f4bfead3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:46Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.047715 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlrc6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e26ce1b-e6f7-4612-aa11-69ad21c97870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:45Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:45Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j65k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlrc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:46Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.073306 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.073349 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.073362 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.073381 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.073394 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:46Z","lastTransitionTime":"2026-01-21T06:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.082981 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ee491ea29d016cb1b74fc66b386aa8056d1d8b3c7ad207cf329749db2b4d638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e705e9b341a3c711cf78ffd1fde692a9517b06fcdcfc2b96543d826c72c5484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:46Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.096803 4893 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/8e26ce1b-e6f7-4612-aa11-69ad21c97870-serviceca\") pod \"node-ca-wlrc6\" (UID: \"8e26ce1b-e6f7-4612-aa11-69ad21c97870\") " pod="openshift-image-registry/node-ca-wlrc6" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.096888 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j65k5\" (UniqueName: \"kubernetes.io/projected/8e26ce1b-e6f7-4612-aa11-69ad21c97870-kube-api-access-j65k5\") pod \"node-ca-wlrc6\" (UID: \"8e26ce1b-e6f7-4612-aa11-69ad21c97870\") " pod="openshift-image-registry/node-ca-wlrc6" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.096928 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8e26ce1b-e6f7-4612-aa11-69ad21c97870-host\") pod \"node-ca-wlrc6\" (UID: \"8e26ce1b-e6f7-4612-aa11-69ad21c97870\") " pod="openshift-image-registry/node-ca-wlrc6" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.107439 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1095483a1c6cc4597500607b4423c12c3fc03500c2f3b8f3fc5fc6eae8c34d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:46Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.122321 4893 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:46Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.138584 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h28gn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\
\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h28gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:46Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.154093 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-42mq5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc8e905-b368-49e8-adfa-31890665e5ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49cefc1948611ccad178b25e80e75e81bdf1b4b578d3fb58fa7c342d22debadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-42mq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:46Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.166376 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5dc99ccba68df748aa327298285fec6936c75a3327906d9c789bf75c04815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59520d6be8547ef44262866e4c11b1ae43ae8ef545545a93c291f5e238718a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hg78p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:46Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.175485 4893 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.175521 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.175529 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.175544 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.175553 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:46Z","lastTransitionTime":"2026-01-21T06:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.194385 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6719fb30-da06-4964-b730-09e444618d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzsg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:46Z 
is after 2025-08-24T17:21:41Z" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.197966 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j65k5\" (UniqueName: \"kubernetes.io/projected/8e26ce1b-e6f7-4612-aa11-69ad21c97870-kube-api-access-j65k5\") pod \"node-ca-wlrc6\" (UID: \"8e26ce1b-e6f7-4612-aa11-69ad21c97870\") " pod="openshift-image-registry/node-ca-wlrc6" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.198205 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8e26ce1b-e6f7-4612-aa11-69ad21c97870-host\") pod \"node-ca-wlrc6\" (UID: \"8e26ce1b-e6f7-4612-aa11-69ad21c97870\") " pod="openshift-image-registry/node-ca-wlrc6" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.198318 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/8e26ce1b-e6f7-4612-aa11-69ad21c97870-serviceca\") pod \"node-ca-wlrc6\" (UID: \"8e26ce1b-e6f7-4612-aa11-69ad21c97870\") " pod="openshift-image-registry/node-ca-wlrc6" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.198376 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8e26ce1b-e6f7-4612-aa11-69ad21c97870-host\") pod \"node-ca-wlrc6\" (UID: \"8e26ce1b-e6f7-4612-aa11-69ad21c97870\") " pod="openshift-image-registry/node-ca-wlrc6" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.199234 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/8e26ce1b-e6f7-4612-aa11-69ad21c97870-serviceca\") pod \"node-ca-wlrc6\" (UID: \"8e26ce1b-e6f7-4612-aa11-69ad21c97870\") " pod="openshift-image-registry/node-ca-wlrc6" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.210384 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2101f59b-4610-4451-83eb-86fe80385cf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a82b561fe0d124a785d8417b0f810757464a5ccc70c032a46eb0a4ad932939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2a508699e746bc42337b9e10d1cb94b36eb53292a5ca91de2e8f03eb8f671c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf06f9b5e844685f04ee12cbf239e285f1597f6a3c6444a4160596392905c4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:54:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 06:54:40.367563 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 06:54:40.368234 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:54:40.369436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4080492758/tls.crt::/tmp/serving-cert-4080492758/tls.key\\\\\\\"\\\\nI0121 06:54:40.606405 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:54:40.609631 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:54:40.609649 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:54:40.609684 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:54:40.609691 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:54:40.617391 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 06:54:40.617410 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617413 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617418 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:54:40.617421 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:54:40.617423 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:54:40.617426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 06:54:40.617614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 06:54:40.618646 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf70c5621061fc94a32901eb6f15a0d15b2ceba333d27cf88624bf9aa4ebe82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:46Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.224499 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j65k5\" (UniqueName: \"kubernetes.io/projected/8e26ce1b-e6f7-4612-aa11-69ad21c97870-kube-api-access-j65k5\") pod \"node-ca-wlrc6\" (UID: \"8e26ce1b-e6f7-4612-aa11-69ad21c97870\") " pod="openshift-image-registry/node-ca-wlrc6" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.231053 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077e47b3-6224-4749-9710-d2b308b43208\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa06c3d835def34e52c4a9b4b87d9dc8998cdefbb5eaf7c8046bf263857ef8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e698ff120a5858fa787a65c1bdaa3966dcb8974df9cbca40470f6ec58bca5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://553f6c2b8ff41184065bcf707d657326891027d0c5b8390ce50f53cdfa654d2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30521319002f52220ec6c1e4c92862f5a81e1dcace01f4a4474e3a2441b955c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:46Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.249590 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:46Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.267733 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:46Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.277622 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.277666 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.277690 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.277707 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.277718 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:46Z","lastTransitionTime":"2026-01-21T06:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.283330 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f8eaf9a35d64680bb488050b8821c821635ec7bc1f53bdcd5bb3f5f4bfead3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:46Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.295831 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:46Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.313806 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m8k4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f4a3074a4406cdbdf07c7289f9304d66e2b84b46bf0ac9c6aadf31817539dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2qn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m8k4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:46Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.325810 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ee491ea29d016cb1b74fc66b386aa8056d1d8b3c7ad207cf329749db2b4d638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e705e9b341a3c711cf78ffd1fde692a9517b06fcdcfc2b96543d826c72c5484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:46Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.335840 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1095483a1c6cc4597500607b4423c12c3fc03500c2f3b8f3fc5fc6eae8c34d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:46Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.355013 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlrc6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e26ce1b-e6f7-4612-aa11-69ad21c97870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:45Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j65k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlrc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:46Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.367031 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-42mq5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc8e905-b368-49e8-adfa-31890665e5ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49cefc1948611ccad178b25e80e75e81bdf1b4b578d3fb58fa7c342d22debadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-42mq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:46Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.378639 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5dc99ccba68df748aa327298285fec6936c75a3327906d9c789bf75c04815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59520d6be8547ef44262866e4c11b1ae43ae8ef545545a93c291f5e238718a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hg78p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:46Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.385113 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-wlrc6" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.394206 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.394426 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.394543 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.394665 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.394818 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:46Z","lastTransitionTime":"2026-01-21T06:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.408999 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6719fb30-da06-4964-b730-09e444618d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2
023f64f8a2f630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzsg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:46Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:46 crc kubenswrapper[4893]: W0121 06:54:46.415940 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e26ce1b_e6f7_4612_aa11_69ad21c97870.slice/crio-5c2a98a47c61ea677d6e37537d345deeaa3a62937c9f3d798c6f6a83a0fc64f7 WatchSource:0}: Error finding container 5c2a98a47c61ea677d6e37537d345deeaa3a62937c9f3d798c6f6a83a0fc64f7: Status 404 returned error can't find the container with id 5c2a98a47c61ea677d6e37537d345deeaa3a62937c9f3d798c6f6a83a0fc64f7 Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.427123 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2101f59b-4610-4451-83eb-86fe80385cf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a82b561fe0d124a785d8417b0f810757464a5ccc70c032a46eb0a4ad932939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2a508699e746bc42337b9e10d1cb94b36eb53292a5ca91de2e8f03eb8f671c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf06f9b5e844685f04ee12cbf239e285f1597f6a3c6444a4160596392905c4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:54:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 06:54:40.367563 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 06:54:40.368234 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:54:40.369436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4080492758/tls.crt::/tmp/serving-cert-4080492758/tls.key\\\\\\\"\\\\nI0121 06:54:40.606405 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:54:40.609631 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:54:40.609649 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:54:40.609684 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:54:40.609691 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:54:40.617391 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 06:54:40.617410 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617413 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617418 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:54:40.617421 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:54:40.617423 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:54:40.617426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 06:54:40.617614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 06:54:40.618646 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf70c5621061fc94a32901eb6f15a0d15b2ceba333d27cf88624bf9aa4ebe82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:46Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.443742 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077e47b3-6224-4749-9710-d2b308b43208\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa06c3d835def34e52c4a9b4b87d9dc8998cdefbb5eaf7c8046bf263857ef8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e698ff120a5858fa787a65c1bdaa3966dcb8974df9cbca40470f6ec58bca5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://553f6c2b8ff41184065bcf707d657326891027d0c5b8390ce50f53cdfa654d2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30521319002f52220ec6c1e4c92862f5a81e1dcace01f4a4474e3a2441b955c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:46Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.498105 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.498370 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.498431 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.498560 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.498621 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:46Z","lastTransitionTime":"2026-01-21T06:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.561566 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:46Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.584645 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 21:05:33.544255496 +0000 UTC Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.584837 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h28gn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h28gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:46Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.585880 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.585923 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:54:46 crc kubenswrapper[4893]: E0121 06:54:46.585976 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.585881 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:54:46 crc kubenswrapper[4893]: E0121 06:54:46.586154 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:54:46 crc kubenswrapper[4893]: E0121 06:54:46.586346 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.600901 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.600926 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.600934 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.600967 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.600976 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:46Z","lastTransitionTime":"2026-01-21T06:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.703047 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.703084 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.703093 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.703108 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.703118 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:46Z","lastTransitionTime":"2026-01-21T06:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.734750 4893 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.735450 4893 scope.go:117] "RemoveContainer" containerID="e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711" Jan 21 06:54:46 crc kubenswrapper[4893]: E0121 06:54:46.735643 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.804722 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.804765 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.804775 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.804790 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.804801 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:46Z","lastTransitionTime":"2026-01-21T06:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.906757 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.906790 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.906799 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.906816 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:46 crc kubenswrapper[4893]: I0121 06:54:46.906826 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:46Z","lastTransitionTime":"2026-01-21T06:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.009574 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.009609 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.009619 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.009635 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.009647 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:47Z","lastTransitionTime":"2026-01-21T06:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.045809 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" event={"ID":"6719fb30-da06-4964-b730-09e444618d94","Type":"ContainerStarted","Data":"e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b"} Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.048050 4893 generic.go:334] "Generic (PLEG): container finished" podID="708c6ae7-fdf7-44d1-ae88-f6abbb247f93" containerID="06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214" exitCode=0 Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.048124 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-h28gn" event={"ID":"708c6ae7-fdf7-44d1-ae88-f6abbb247f93","Type":"ContainerDied","Data":"06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214"} Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.050477 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-wlrc6" event={"ID":"8e26ce1b-e6f7-4612-aa11-69ad21c97870","Type":"ContainerStarted","Data":"64b144aa65cc6fbbe03fe4268155648a64e7360a0415e11a86fbc0373af5a4d5"} Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.050528 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-wlrc6" event={"ID":"8e26ce1b-e6f7-4612-aa11-69ad21c97870","Type":"ContainerStarted","Data":"5c2a98a47c61ea677d6e37537d345deeaa3a62937c9f3d798c6f6a83a0fc64f7"} Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.068008 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f8eaf9a35d64680bb488050b8821c821635ec7bc1f53bdcd5bb3f5f4bfead3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.082434 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.094390 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m8k4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f4a3074a4406cdbdf07c7289f9304d66e2b84b46bf0ac9c6aadf31817539dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-l
ib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2qn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m8k4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.108458 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ee491ea29d016cb1b74fc66b386aa8056d1d8b3c7ad207cf329749db2b4d638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e705e9b341a3c711cf78ffd1fde692a9517b06fcdcfc2b96543d826c72c5484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.112201 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.112230 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.112239 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.112255 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.112265 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:47Z","lastTransitionTime":"2026-01-21T06:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.126434 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1095483a1c6cc4597500607b4423c12c3fc03500c2f3b8f3fc5fc6eae8c34d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.135439 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlrc6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e26ce1b-e6f7-4612-aa11-69ad21c97870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:45Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:45Z\\\",\\\"message\\\":\\\"containers with 
unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j65k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlrc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.153764 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6719fb30-da06-4964-b730-09e444618d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzsg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:47Z 
is after 2025-08-24T17:21:41Z" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.167792 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2101f59b-4610-4451-83eb-86fe80385cf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a82b561fe0d124a785d8417b0f810757464a5ccc70c032a46eb0a4ad932939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2a508699e746bc42337b9e10d1cb94b36eb53292a5ca91de2e8f03eb8f671c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf06f9b5e844685f04ee12cbf239e285f1597f6a3c6444a4160596392905c4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:54:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 06:54:40.367563 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 06:54:40.368234 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:54:40.369436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4080492758/tls.crt::/tmp/serving-cert-4080492758/tls.key\\\\\\\"\\\\nI0121 06:54:40.606405 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:54:40.609631 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:54:40.609649 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:54:40.609684 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:54:40.609691 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:54:40.617391 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 06:54:40.617410 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617413 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617418 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:54:40.617421 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:54:40.617423 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:54:40.617426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 06:54:40.617614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 06:54:40.618646 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf70c5621061fc94a32901eb6f15a0d15b2ceba333d27cf88624bf9aa4ebe82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.180056 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077e47b3-6224-4749-9710-d2b308b43208\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa06c3d835def34e52c4a9b4b87d9dc8998cdefbb5eaf7c8046bf263857ef8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e698ff120a5858fa787a65c1bdaa3966dcb8974df9cbca40470f6ec58bca5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://553f6c2b8ff41184065bcf707d657326891027d0c5b8390ce50f53cdfa654d2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30521319002f52220ec6c1e4c92862f5a81e1dcace01f4a4474e3a2441b955c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.192502 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.206496 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h28gn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h28gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.215198 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.215234 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.215244 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.215259 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.215270 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:47Z","lastTransitionTime":"2026-01-21T06:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.215934 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-42mq5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc8e905-b368-49e8-adfa-31890665e5ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49cefc1948611ccad178b25e80e75e81bdf1b4b578d3fb58fa7c342d22debadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-42mq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.225505 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5dc99ccba68df748aa327298285fec6936c75a3327906d9c789bf75c04815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59520d6be8547ef44262866e4c11b1ae43ae8ef545545a93c291f5e238718a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hg78p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.236278 4893 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.250263 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5dc99ccba68df748aa327298285fec6936c75a3327906d9c789bf75c04815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59520d6be8547ef44262866e4c11b1ae43ae8ef545545a93c291f5e238718a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hg78p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.269907 4893 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6719fb30-da06-4964-b730-09e444618d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzsg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.286011 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2101f59b-4610-4451-83eb-86fe80385cf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a82b561fe0d124a785d8417b0f810757464a5ccc70c032a46eb0a4ad932939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2a508699e746bc42337b9e10d1cb94b36eb53292a5ca91de2e8f03eb8f671c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf06f9b5e844685f04ee12cbf239e285f1597f6a3c6444a4160596392905c4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:54:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 06:54:40.367563 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 06:54:40.368234 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:54:40.369436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4080492758/tls.crt::/tmp/serving-cert-4080492758/tls.key\\\\\\\"\\\\nI0121 06:54:40.606405 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:54:40.609631 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:54:40.609649 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:54:40.609684 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:54:40.609691 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:54:40.617391 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 06:54:40.617410 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617413 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617418 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:54:40.617421 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:54:40.617423 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:54:40.617426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 06:54:40.617614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 06:54:40.618646 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf70c5621061fc94a32901eb6f15a0d15b2ceba333d27cf88624bf9aa4ebe82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.298935 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077e47b3-6224-4749-9710-d2b308b43208\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa06c3d835def34e52c4a9b4b87d9dc8998cdefbb5eaf7c8046bf263857ef8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e698ff120a5858fa787a65c1bdaa3966dcb8974df9cbca40470f6ec58bca5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://553f6c2b8ff41184065bcf707d657326891027d0c5b8390ce50f53cdfa654d2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30521319002f52220ec6c1e4c92862f5a81e1dcace01f4a4474e3a2441b955c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.313121 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.318207 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.318236 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.318245 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.318258 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.318267 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:47Z","lastTransitionTime":"2026-01-21T06:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.327050 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h28gn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\
\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h28gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.337100 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-42mq5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc8e905-b368-49e8-adfa-31890665e5ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49cefc1948611ccad178b25e80e75e81bdf1b4b578d3fb58fa7c342d22debadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-42mq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.349196 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.362623 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f8eaf9a35d64680bb488050b8821c821635ec7bc1f53bdcd5bb3f5f4bfead3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.375289 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.389693 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m8k4g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f4a3074a4406cdbdf07c7289f9304d66e2b84b46bf0ac9c6aadf31817539dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2qn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m8k4g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.403614 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ee491ea29d016cb1b74fc66b386aa8056d1d8b3c7ad207cf329749db2b4d638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e705e9b341a3c711cf78ffd1fde692a9517b06fcdcfc2b96543d826c72c5484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:47Z is after 
2025-08-24T17:21:41Z" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.416322 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1095483a1c6cc4597500607b4423c12c3fc03500c2f3b8f3fc5fc6eae8c34d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.419926 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.419960 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.419969 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.419985 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.419994 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:47Z","lastTransitionTime":"2026-01-21T06:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.429241 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlrc6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e26ce1b-e6f7-4612-aa11-69ad21c97870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64b144aa65cc6fbbe03fe4268155648a64e7360a0415e11a86fbc0373af5a4d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j65k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlrc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.522641 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.522695 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.522704 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.522719 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.522731 4893 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:47Z","lastTransitionTime":"2026-01-21T06:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.584770 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 12:07:02.946458105 +0000 UTC Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.625682 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.625717 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.625728 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.625746 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.625757 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:47Z","lastTransitionTime":"2026-01-21T06:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.727989 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.728028 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.728037 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.728051 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.728060 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:47Z","lastTransitionTime":"2026-01-21T06:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.830084 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.830408 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.830428 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.830449 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.830462 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:47Z","lastTransitionTime":"2026-01-21T06:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.933216 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.933250 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.933258 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.933272 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:47 crc kubenswrapper[4893]: I0121 06:54:47.933281 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:47Z","lastTransitionTime":"2026-01-21T06:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.035088 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.035137 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.035148 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.035165 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.035178 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:48Z","lastTransitionTime":"2026-01-21T06:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.056622 4893 generic.go:334] "Generic (PLEG): container finished" podID="708c6ae7-fdf7-44d1-ae88-f6abbb247f93" containerID="8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e" exitCode=0 Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.056694 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-h28gn" event={"ID":"708c6ae7-fdf7-44d1-ae88-f6abbb247f93","Type":"ContainerDied","Data":"8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e"} Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.068888 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:48Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.088770 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h28gn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h28gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:48Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.100362 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-42mq5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc8e905-b368-49e8-adfa-31890665e5ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49cefc1948611ccad178b25e80e75e81bdf1b4b578d3fb58fa7c342d22debadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-42mq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:48Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 
06:54:48.114973 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5dc99ccba68df748aa327298285fec6936c75a3327906d9c789bf75c04815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59520d6be8547ef44262866e4c11b1ae43ae8ef545545a93c291f5e238718a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hg78p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-01-21T06:54:48Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.136895 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6719fb30-da06-4964-b730-09e444618d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"re
startCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-
api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e335
2903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzsg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:48Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.138094 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.138124 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.138139 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.138156 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.138166 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:48Z","lastTransitionTime":"2026-01-21T06:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.151224 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2101f59b-4610-4451-83eb-86fe80385cf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a82b561fe0d124a785d8417b0f810757464a5ccc70c032a46eb0a4ad932939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2a508699e746bc42337b9e10d1cb94b36eb53292a5ca91de2e8f03eb8f671c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf06f9b5e844685f04ee12cbf239e285f1597f6a3c6444a4160596392905c4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:54:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 06:54:40.367563 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 06:54:40.368234 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:54:40.369436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4080492758/tls.crt::/tmp/serving-cert-4080492758/tls.key\\\\\\\"\\\\nI0121 06:54:40.606405 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:54:40.609631 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:54:40.609649 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:54:40.609684 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:54:40.609691 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:54:40.617391 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 06:54:40.617410 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617413 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617418 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:54:40.617421 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:54:40.617423 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:54:40.617426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 06:54:40.617614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 06:54:40.618646 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf70c5621061fc94a32901eb6f15a0d15b2ceba333d27cf88624bf9aa4ebe82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:48Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.163756 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077e47b3-6224-4749-9710-d2b308b43208\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa06c3d835def34e52c4a9b4b87d9dc8998cdefbb5eaf7c8046bf263857ef8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e698ff120a5858fa787a65c1bdaa3966dcb8974df9cbca40470f6ec58bca5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://553f6c2b8ff41184065bcf707d657326891027d0c5b8390ce50f53cdfa654d2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30521319002f52220ec6c1e4c92862f5a81e1dcace01f4a4474e3a2441b955c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:48Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.180055 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:48Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.196312 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:48Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.211095 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m8k4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f4a3074a4406cdbdf07c7289f9304d66e2b84b46bf0ac9c6aadf31817539dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2qn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m8k4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:48Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.224357 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f8eaf9a35d64680bb488050b8821c821635ec7bc1f53bdcd5bb3f5f4bfead3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:48Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.234584 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlrc6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e26ce1b-e6f7-4612-aa11-69ad21c97870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64b144aa65cc6fbbe03fe4268155648a64e7360a0415e11a86fbc0373af5a4d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j65k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlrc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:48Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.240057 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.240295 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.240383 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.240600 4893 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.240728 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:48Z","lastTransitionTime":"2026-01-21T06:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.248801 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:54:48 crc kubenswrapper[4893]: E0121 06:54:48.248993 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:54:56.248963954 +0000 UTC m=+37.479309886 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.249045 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.249087 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.249065 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ee491ea29d016cb1b74fc66b386aa8056d1d8b3c7ad207cf329749db2b4d638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e705e9b341a3c711cf78ffd1fde692a9517b06fcdcfc2b96543d826c72c5484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:48Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:48 crc kubenswrapper[4893]: E0121 06:54:48.249210 4893 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.249114 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:54:48 crc kubenswrapper[4893]: E0121 06:54:48.249214 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 06:54:48 crc kubenswrapper[4893]: E0121 06:54:48.249252 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 06:54:48 crc kubenswrapper[4893]: E0121 06:54:48.249276 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 06:54:56.249267233 +0000 UTC m=+37.479613135 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 06:54:48 crc kubenswrapper[4893]: E0121 06:54:48.249278 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 06:54:48 crc kubenswrapper[4893]: E0121 06:54:48.249287 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.249261 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:54:48 crc kubenswrapper[4893]: E0121 06:54:48.249301 4893 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:54:48 crc kubenswrapper[4893]: E0121 06:54:48.249350 4893 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 06:54:48 crc kubenswrapper[4893]: E0121 06:54:48.249307 4893 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:54:48 crc kubenswrapper[4893]: E0121 06:54:48.249410 4893 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 06:54:56.249389206 +0000 UTC m=+37.479735159 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 06:54:48 crc kubenswrapper[4893]: E0121 06:54:48.249428 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 06:54:56.249419057 +0000 UTC m=+37.479764959 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:54:48 crc kubenswrapper[4893]: E0121 06:54:48.249443 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 06:54:56.249437078 +0000 UTC m=+37.479783050 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.264655 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1095483a1c6cc4597500607b4423c12c3fc03500c2f3b8f3fc5fc6eae8c34d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:48Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.343252 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.343283 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.343290 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.343303 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.343312 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:48Z","lastTransitionTime":"2026-01-21T06:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.445810 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.445851 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.445863 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.445881 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.445891 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:48Z","lastTransitionTime":"2026-01-21T06:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.548448 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.548483 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.548493 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.548507 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.548516 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:48Z","lastTransitionTime":"2026-01-21T06:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.580067 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.580133 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.580161 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:54:48 crc kubenswrapper[4893]: E0121 06:54:48.580194 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:54:48 crc kubenswrapper[4893]: E0121 06:54:48.580320 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:54:48 crc kubenswrapper[4893]: E0121 06:54:48.580412 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.585684 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 10:01:31.185861301 +0000 UTC Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.651030 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.651081 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.651093 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.651112 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.651123 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:48Z","lastTransitionTime":"2026-01-21T06:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.753429 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.753464 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.753473 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.753488 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.753497 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:48Z","lastTransitionTime":"2026-01-21T06:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.856422 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.856481 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.856498 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.856519 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.856533 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:48Z","lastTransitionTime":"2026-01-21T06:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.959537 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.959578 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.959591 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.959611 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:48 crc kubenswrapper[4893]: I0121 06:54:48.959623 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:48Z","lastTransitionTime":"2026-01-21T06:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.061830 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.061857 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.061867 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.061880 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.061890 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:49Z","lastTransitionTime":"2026-01-21T06:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.066209 4893 generic.go:334] "Generic (PLEG): container finished" podID="708c6ae7-fdf7-44d1-ae88-f6abbb247f93" containerID="2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d" exitCode=0 Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.066305 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-h28gn" event={"ID":"708c6ae7-fdf7-44d1-ae88-f6abbb247f93","Type":"ContainerDied","Data":"2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d"} Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.072758 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" event={"ID":"6719fb30-da06-4964-b730-09e444618d94","Type":"ContainerStarted","Data":"fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318"} Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.081523 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ee491ea29d016cb1b74fc66b386aa8056d1d8b3c7ad207cf329749db2b4d638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e705e9b341a3c711cf78ffd1fde692a9517b06fcdcfc2b96543d826c72c5484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:49Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.098050 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1095483a1c6cc4597500607b4423c12c3fc03500c2f3b8f3fc5fc6eae8c34d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:49Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.109641 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlrc6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e26ce1b-e6f7-4612-aa11-69ad21c97870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64b144aa65cc6fbbe03fe4268155648a64e7360a0415e11a86fbc0373af5a4d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j65k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlrc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:49Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.123821 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h28gn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"w
aiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h28gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:49Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.135938 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-42mq5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc8e905-b368-49e8-adfa-31890665e5ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49cefc1948611ccad178b25e80e75e81bdf1b4b578d3fb58fa7c342d22debadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-42mq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:49Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.148840 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5dc99ccba68df748aa327298285fec6936c75a3327906d9c789bf75c04815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59520d6be8547ef44262866e4c11b1ae43ae8ef545545a93c291f5e238718a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hg78p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:49Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.165002 4893 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.165045 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.165066 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.165083 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.165095 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:49Z","lastTransitionTime":"2026-01-21T06:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.166537 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6719fb30-da06-4964-b730-09e444618d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzsg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:49Z 
is after 2025-08-24T17:21:41Z" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.181098 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2101f59b-4610-4451-83eb-86fe80385cf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a82b561fe0d124a785d8417b0f810757464a5ccc70c032a46eb0a4ad932939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2a508699e746bc42337b9e10d1cb94b36eb53292a5ca91de2e8f03eb8f671c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf06f9b5e844685f04ee12cbf239e285f1597f6a3c6444a4160596392905c4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:54:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 06:54:40.367563 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 06:54:40.368234 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:54:40.369436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4080492758/tls.crt::/tmp/serving-cert-4080492758/tls.key\\\\\\\"\\\\nI0121 06:54:40.606405 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:54:40.609631 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:54:40.609649 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:54:40.609684 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:54:40.609691 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:54:40.617391 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 06:54:40.617410 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617413 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617418 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:54:40.617421 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:54:40.617423 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:54:40.617426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 06:54:40.617614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 06:54:40.618646 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf70c5621061fc94a32901eb6f15a0d15b2ceba333d27cf88624bf9aa4ebe82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:49Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.195748 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077e47b3-6224-4749-9710-d2b308b43208\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa06c3d835def34e52c4a9b4b87d9dc8998cdefbb5eaf7c8046bf263857ef8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e698ff120a5858fa787a65c1bdaa3966dcb8974df9cbca40470f6ec58bca5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://553f6c2b8ff41184065bcf707d657326891027d0c5b8390ce50f53cdfa654d2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30521319002f52220ec6c1e4c92862f5a81e1dcace01f4a4474e3a2441b955c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:49Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.209721 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:49Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.222219 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:49Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.238903 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f8eaf9a35d64680bb488050b8821c821635ec7bc1f53bdcd5bb3f5f4bfead3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:49Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.250342 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:49Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.262404 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m8k4g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f4a3074a4406cdbdf07c7289f9304d66e2b84b46bf0ac9c6aadf31817539dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2qn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m8k4g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:49Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.267524 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.267560 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.267570 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.267584 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.267594 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:49Z","lastTransitionTime":"2026-01-21T06:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.369778 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.369814 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.369824 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.369837 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.369847 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:49Z","lastTransitionTime":"2026-01-21T06:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.423366 4893 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.533336 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.533366 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.533374 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.533386 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.533394 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:49Z","lastTransitionTime":"2026-01-21T06:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.585916 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 23:44:09.749887991 +0000 UTC Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.591942 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:49Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.608872 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f8eaf9a35d64680bb488050b8821c821635ec7bc1f53bdcd5bb3f5f4bfead3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:49Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.622626 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:49Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.634416 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m8k4g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f4a3074a4406cdbdf07c7289f9304d66e2b84b46bf0ac9c6aadf31817539dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2qn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m8k4g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:49Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.635736 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.635780 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.635792 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.635813 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.635826 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:49Z","lastTransitionTime":"2026-01-21T06:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.647426 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ee491ea29d016cb1b74fc66b386aa8056d1d8b3c7ad207cf329749db2b4d638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e705e9b341a3c711cf78ffd1fde692a9517b06fcdcfc2b96543d826c72c5484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:49Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.662373 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1095483a1c6cc4597500607b4423c12c3fc03500c2f3b8f3fc5fc6eae8c34d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:49Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.671787 4893 
status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlrc6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e26ce1b-e6f7-4612-aa11-69ad21c97870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64b144aa65cc6fbbe03fe4268155648a64e7360a0415e11a86fbc0373af5a4d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j65k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlrc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:49Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.692285 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5dc99ccba68df748aa327298285fec6936c75a3327906d9c789bf75c04815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59520d6be8547ef44262866e4c11b1ae43ae8ef545545a93c291f5e238718a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hg78p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:49Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.708871 4893 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6719fb30-da06-4964-b730-09e444618d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzsg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:49Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.722407 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2101f59b-4610-4451-83eb-86fe80385cf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a82b561fe0d124a785d8417b0f810757464a5ccc70c032a46eb0a4ad932939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2a508699e746bc42337b9e10d1cb94b36eb53292a5ca91de2e8f03eb8f671c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf06f9b5e844685f04ee12cbf239e285f1597f6a3c6444a4160596392905c4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:54:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 06:54:40.367563 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 06:54:40.368234 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:54:40.369436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4080492758/tls.crt::/tmp/serving-cert-4080492758/tls.key\\\\\\\"\\\\nI0121 06:54:40.606405 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:54:40.609631 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:54:40.609649 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:54:40.609684 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:54:40.609691 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:54:40.617391 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 06:54:40.617410 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617413 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617418 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:54:40.617421 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:54:40.617423 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:54:40.617426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 06:54:40.617614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 06:54:40.618646 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf70c5621061fc94a32901eb6f15a0d15b2ceba333d27cf88624bf9aa4ebe82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:49Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.738260 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077e47b3-6224-4749-9710-d2b308b43208\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa06c3d835def34e52c4a9b4b87d9dc8998cdefbb5eaf7c8046bf263857ef8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e698ff120a5858fa787a65c1bdaa3966dcb8974df9cbca40470f6ec58bca5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://553f6c2b8ff41184065bcf707d657326891027d0c5b8390ce50f53cdfa654d2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30521319002f52220ec6c1e4c92862f5a81e1dcace01f4a4474e3a2441b955c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:49Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.738816 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.738876 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.738891 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.738918 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.738932 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:49Z","lastTransitionTime":"2026-01-21T06:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.750901 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:49Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.769557 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h28gn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"w
aiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h28gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:49Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.780488 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-42mq5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc8e905-b368-49e8-adfa-31890665e5ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49cefc1948611ccad178b25e80e75e81bdf1b4b578d3fb58fa7c342d22debadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-42mq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:49Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.841076 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.841107 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.841117 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.841132 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.841143 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:49Z","lastTransitionTime":"2026-01-21T06:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.943354 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.943419 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.943431 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.943447 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:49 crc kubenswrapper[4893]: I0121 06:54:49.943459 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:49Z","lastTransitionTime":"2026-01-21T06:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.045885 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.045916 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.045926 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.045943 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.045954 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:50Z","lastTransitionTime":"2026-01-21T06:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.078799 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-h28gn" event={"ID":"708c6ae7-fdf7-44d1-ae88-f6abbb247f93","Type":"ContainerStarted","Data":"08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736"} Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.090652 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ee491ea29d016cb1b74fc66b386aa8056d1d8b3c7ad207cf329749db2b4d638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e705e9b341a3c711cf78ffd1fde692a9517b06fcdcfc2b96543d826c72c5484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-21T06:54:50Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.100611 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1095483a1c6cc4597500607b4423c12c3fc03500c2f3b8f3fc5fc6eae8c34d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:50Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.112081 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlrc6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e26ce1b-e6f7-4612-aa11-69ad21c97870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64b144aa65cc6fbbe03fe4268155648a64e7360a0415e11a86fbc0373af5a4d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j65k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlrc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:50Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.128005 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h28gn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9
8100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h28gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:50Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.138526 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-42mq5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc8e905-b368-49e8-adfa-31890665e5ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49cefc1948611ccad178b25e80e75e81bdf1b4b578d3fb58fa7c342d22debadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-42mq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:50Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.150684 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.150754 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.150767 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.150783 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.150819 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:50Z","lastTransitionTime":"2026-01-21T06:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.152026 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5dc99ccba68df748aa327298285fec6936c75a3327906d9c789bf75c04815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59520d6be8547ef44262866e4c11b1ae43ae8ef545545a93c291f5e238718a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hg78p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:50Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.170563 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6719fb30-da06-4964-b730-09e444618d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\
"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzsg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:50Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.187291 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2101f59b-4610-4451-83eb-86fe80385cf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a82b561fe0d124a785d8417b0f810757464a5ccc70c032a46eb0a4ad932939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2a508699e746bc42337b9e10d1cb94b36eb53292a5ca91de2e8f03eb8f671c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf06f9b5e844685f04ee12cbf239e285f1597f6a3c6444a4160596392905c4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:54:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 06:54:40.367563 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 06:54:40.368234 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:54:40.369436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4080492758/tls.crt::/tmp/serving-cert-4080492758/tls.key\\\\\\\"\\\\nI0121 06:54:40.606405 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:54:40.609631 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:54:40.609649 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:54:40.609684 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:54:40.609691 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:54:40.617391 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 06:54:40.617410 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617413 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617418 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:54:40.617421 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:54:40.617423 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:54:40.617426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 06:54:40.617614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 06:54:40.618646 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf70c5621061fc94a32901eb6f15a0d15b2ceba333d27cf88624bf9aa4ebe82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:50Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.199447 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077e47b3-6224-4749-9710-d2b308b43208\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa06c3d835def34e52c4a9b4b87d9dc8998cdefbb5eaf7c8046bf263857ef8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e698ff120a5858fa787a65c1bdaa3966dcb8974df9cbca40470f6ec58bca5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://553f6c2b8ff41184065bcf707d657326891027d0c5b8390ce50f53cdfa654d2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30521319002f52220ec6c1e4c92862f5a81e1dcace01f4a4474e3a2441b955c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:50Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.211033 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:50Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.222945 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:50Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.235717 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f8eaf9a35d64680bb488050b8821c821635ec7bc1f53bdcd5bb3f5f4bfead3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:50Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.250534 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:50Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.253797 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.253834 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.253845 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.253861 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.253872 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:50Z","lastTransitionTime":"2026-01-21T06:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.263806 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m8k4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f4a3074a4406cdbdf07c7289f9304d66e2b84b46bf0ac9c6aadf31817539dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2qn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m8k4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:50Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.356573 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.356615 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.356628 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.356644 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.356655 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:50Z","lastTransitionTime":"2026-01-21T06:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.459517 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.459854 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.459866 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.459893 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.459904 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:50Z","lastTransitionTime":"2026-01-21T06:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.630364 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.630367 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.630328 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 04:49:46.085131175 +0000 UTC Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.630506 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:54:50 crc kubenswrapper[4893]: E0121 06:54:50.630932 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:54:50 crc kubenswrapper[4893]: E0121 06:54:50.631034 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:54:50 crc kubenswrapper[4893]: E0121 06:54:50.631175 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.632321 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.632353 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.632363 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.632391 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.632496 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:50Z","lastTransitionTime":"2026-01-21T06:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.852838 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.852867 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.852888 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.852908 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.852918 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:50Z","lastTransitionTime":"2026-01-21T06:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.982972 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.983042 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.983054 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.983082 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:50 crc kubenswrapper[4893]: I0121 06:54:50.983093 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:50Z","lastTransitionTime":"2026-01-21T06:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.098542 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.098582 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.098593 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.098609 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.098621 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:51Z","lastTransitionTime":"2026-01-21T06:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.106077 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" event={"ID":"6719fb30-da06-4964-b730-09e444618d94","Type":"ContainerStarted","Data":"54a34267222e86c869cef2b23e0a1d5c65a3c2e03e0e76367a0e700a62fcb10f"} Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.106498 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.120057 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:51Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.134936 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m8k4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f4a3074a4406cdbdf07c7289f9304d66e2b84b46bf0ac9c6aadf31817539dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2qn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m8k4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:51Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.149981 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f8eaf9a35d64680bb488050b8821c821635ec7bc1f53bdcd5bb3f5f4bfead3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:51Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.160886 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlrc6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e26ce1b-e6f7-4612-aa11-69ad21c97870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64b144aa65cc6fbbe03fe4268155648a64e7360a0415e11a86fbc0373af5a4d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j65k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlrc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:51Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.175255 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ee491ea29d016cb1b74fc66b386aa8056d1d8b3c7ad207cf329749db2b4d638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e705e9b341a3c711cf78ffd1fde692a9517b06fcdcfc2b96543d826c72c5484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:51Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.190631 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1095483a1c6cc4597500607b4423c12c3fc03500c2f3b8f3fc5fc6eae8c34d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:51Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.198582 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.200320 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.200349 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.200360 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.200377 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.200388 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:51Z","lastTransitionTime":"2026-01-21T06:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.203341 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:51Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.217725 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h28gn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9
8100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h28gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:51Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.228508 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-42mq5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc8e905-b368-49e8-adfa-31890665e5ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49cefc1948611ccad178b25e80e75e81bdf1b4b578d3fb58fa7c342d22debadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-42mq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:51Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.238943 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5dc99ccba68df748aa327298285fec6936c75a3327906d9c789bf75c04815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59520d6be8547ef44262866e4c11b1ae43ae8ef545545a93c291f5e238718a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hg78p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:51Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.260312 4893 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6719fb30-da06-4964-b730-09e444618d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54a34267222e86c869cef2b23e0a1d5c65a3c2e03e0e76367a0e700a62fcb10f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuberne
tes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzsg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:51Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.275081 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2101f59b-4610-4451-83eb-86fe80385cf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a82b561fe0d124a785d8417b0f810757464a5ccc70c032a46eb0a4ad932939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2a508699e746bc42337b9e10d1cb94b36eb53292a5ca91de2e8f03eb8f671c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf06f9b5e844685f04ee12cbf239e285f1597f6a3c6444a4160596392905c4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:54:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 06:54:40.367563 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 06:54:40.368234 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:54:40.369436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4080492758/tls.crt::/tmp/serving-cert-4080492758/tls.key\\\\\\\"\\\\nI0121 06:54:40.606405 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:54:40.609631 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:54:40.609649 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:54:40.609684 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:54:40.609691 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:54:40.617391 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 06:54:40.617410 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617413 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617418 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:54:40.617421 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:54:40.617423 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:54:40.617426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 06:54:40.617614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 06:54:40.618646 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf70c5621061fc94a32901eb6f15a0d15b2ceba333d27cf88624bf9aa4ebe82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:51Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.287734 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077e47b3-6224-4749-9710-d2b308b43208\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa06c3d835def34e52c4a9b4b87d9dc8998cdefbb5eaf7c8046bf263857ef8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e698ff120a5858fa787a65c1bdaa3966dcb8974df9cbca40470f6ec58bca5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://553f6c2b8ff41184065bcf707d657326891027d0c5b8390ce50f53cdfa654d2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30521319002f52220ec6c1e4c92862f5a81e1dcace01f4a4474e3a2441b955c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:51Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.299616 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:51Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.302317 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.302362 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.302372 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.302388 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.302399 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:51Z","lastTransitionTime":"2026-01-21T06:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.310540 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5dc99ccba68df748aa327298285fec6936c75a3327906d9c789bf75c04815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59520d6be8547ef44262866e4c11b1ae43ae8ef545545a93c291f5e238718a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hg78p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:51Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.333777 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6719fb30-da06-4964-b730-09e444618d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/sec
rets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54a34267222e86c869cef2b23e0a1d5c65a3c2e03e0e76367a0e700a62fcb10f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\
\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzsg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:51Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.349984 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2101f59b-4610-4451-83eb-86fe80385cf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a82b561fe0d124a785d8417b0f810757464a5ccc70c032a46eb0a4ad932939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2a508699e746bc42337b9e10d1cb94b36eb53292a5ca91de2e8f03eb8f671c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf06f9b5e844685f04ee12cbf239e285f1597f6a3c6444a4160596392905c4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:54:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 06:54:40.367563 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 06:54:40.368234 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:54:40.369436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4080492758/tls.crt::/tmp/serving-cert-4080492758/tls.key\\\\\\\"\\\\nI0121 06:54:40.606405 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:54:40.609631 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:54:40.609649 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:54:40.609684 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:54:40.609691 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:54:40.617391 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 06:54:40.617410 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617413 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617418 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:54:40.617421 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:54:40.617423 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:54:40.617426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 06:54:40.617614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 06:54:40.618646 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf70c5621061fc94a32901eb6f15a0d15b2ceba333d27cf88624bf9aa4ebe82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:51Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.363576 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077e47b3-6224-4749-9710-d2b308b43208\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa06c3d835def34e52c4a9b4b87d9dc8998cdefbb5eaf7c8046bf263857ef8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e698ff120a5858fa787a65c1bdaa3966dcb8974df9cbca40470f6ec58bca5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://553f6c2b8ff41184065bcf707d657326891027d0c5b8390ce50f53cdfa654d2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30521319002f52220ec6c1e4c92862f5a81e1dcace01f4a4474e3a2441b955c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:51Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.377777 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:51Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.388428 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.388483 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.388501 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.388527 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.388541 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:51Z","lastTransitionTime":"2026-01-21T06:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.393202 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h28gn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:
54:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h28gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:51Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:51 crc kubenswrapper[4893]: E0121 06:54:51.401041 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"15608b71-024b-43f0-a54d-3ca7890a281b\\\",\\\"systemUUID\\\":\\\"d58a57b5-ddc5-4868-b863-d910bc33033d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:51Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.405258 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.405300 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.405311 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.405329 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.405341 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:51Z","lastTransitionTime":"2026-01-21T06:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.407388 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-42mq5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc8e905-b368-49e8-adfa-31890665e5ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49cefc1948611ccad178b25e80e75e81bdf1b4b578d3fb58fa7c342d22debadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-42mq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:51Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:51 crc kubenswrapper[4893]: E0121 06:54:51.420886 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"15608b71-024b-43f0-a54d-3ca7890a281b\\\",\\\"systemUUID\\\":\\\"d58a57b5-ddc5-4868-b863-d910bc33033d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:51Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.423106 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:51Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.424077 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.424109 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.424119 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.424135 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.424147 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:51Z","lastTransitionTime":"2026-01-21T06:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.436653 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f8eaf9a35d64680bb488050b8821c821635ec7bc1f53bdcd5bb3f5f4bfead3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:51Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:51 crc kubenswrapper[4893]: E0121 06:54:51.437067 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"15608b71-024b-43f0-a54d-3ca7890a281b\\\",\\\"systemUUID\\\":\\\"d
58a57b5-ddc5-4868-b863-d910bc33033d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:51Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.439903 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.439931 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.439941 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.439957 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.439967 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:51Z","lastTransitionTime":"2026-01-21T06:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.447865 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:51Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:51 crc kubenswrapper[4893]: E0121 06:54:51.455550 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[ ... image list byte-identical to the one in the 06:54:51.437067 patch above; elided ... ],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"15608b71-024b-43f0-a54d-3ca7890a281b\\\",\\\"systemUUID\\\":\\\"d58a57b5-ddc5-4868-b863-d910bc33033d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:51Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.459485 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.459534 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.459544 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.459629 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.459642 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:51Z","lastTransitionTime":"2026-01-21T06:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.462738 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m8k4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f4a3074a4406cdbdf07c7289f9304d66e2b84b46bf0ac9c6aadf31817539dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubel
et\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2qn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m8k4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:51Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:51 crc kubenswrapper[4893]: E0121 06:54:51.471410 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[ ... image list byte-identical to the one in the 06:54:51.437067 patch above; elided ... ],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"15608b71-024b-43f0-a54d-3ca7890a281b\\\",\\\"systemUUID\\\":\\\"d58a57b5-ddc5-4868-b863-d910bc33033d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:51Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:51 crc kubenswrapper[4893]: E0121 06:54:51.471580 4893 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.472931 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.472951 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.472960 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.472973 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.472982 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:51Z","lastTransitionTime":"2026-01-21T06:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.475503 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ee491ea29d016cb1b74fc66b386aa8056d1d8b3c7ad207cf329749db2b4d638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e705e9b341a3c711cf78ffd1fde692a9517b06fcdcfc2b96543d826c72c5484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"nam
e\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:51Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.486867 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1095483a1c6cc4597500607b4423c12c3fc03500c2f3b8f3fc5fc6eae8c34d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:51Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.496944 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlrc6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e26ce1b-e6f7-4612-aa11-69ad21c97870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64b144aa65cc6fbbe03fe4268155648a64e7360a0415e11a86fbc0373af5a4d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j65k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlrc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:51Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.575868 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.575939 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.575964 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.575998 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.576023 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:51Z","lastTransitionTime":"2026-01-21T06:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.632731 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 02:58:36.370416298 +0000 UTC Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.677994 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.678030 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.678041 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.678058 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.678067 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:51Z","lastTransitionTime":"2026-01-21T06:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.754562 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.821724 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.821776 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.821788 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.821806 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.821817 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:51Z","lastTransitionTime":"2026-01-21T06:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.928495 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.928529 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.928537 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.928551 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:51 crc kubenswrapper[4893]: I0121 06:54:51.928560 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:51Z","lastTransitionTime":"2026-01-21T06:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.030878 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.030910 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.030918 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.030932 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.030941 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:52Z","lastTransitionTime":"2026-01-21T06:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.121503 4893 generic.go:334] "Generic (PLEG): container finished" podID="708c6ae7-fdf7-44d1-ae88-f6abbb247f93" containerID="08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736" exitCode=0 Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.122627 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-h28gn" event={"ID":"708c6ae7-fdf7-44d1-ae88-f6abbb247f93","Type":"ContainerDied","Data":"08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736"} Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.123212 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.134439 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.134490 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.134503 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.134523 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.134540 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:52Z","lastTransitionTime":"2026-01-21T06:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.184307 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:52Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.198113 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f8eaf9a35d64680bb488050b8821c821635ec7bc1f53bdcd5bb3f5f4bfead3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:52Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.201856 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.209663 4893 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:52Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.222507 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m8k4g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f4a3074a4406cdbdf07c7289f9304d66e2b84b46bf0ac9c6aadf31817539dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2qn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m8k4g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:52Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.234617 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1095483a1c6cc4597500607b4423c12c3fc03500c2f3b8f3fc5fc6eae8c34d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:52Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.237281 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.237312 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.237323 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.237338 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.237349 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:52Z","lastTransitionTime":"2026-01-21T06:54:52Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.244852 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlrc6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e26ce1b-e6f7-4612-aa11-69ad21c97870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64b144aa65cc6fbbe03fe4268155648a64e7360a0415e11a86fbc0373af5a4d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j65k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlrc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:52Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.258910 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ee491ea29d016cb1b74fc66b386aa8056d1d8b3c7ad207cf329749db2b4d638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e705e9b341a3c711cf78ffd1fde692a9517b06fcdcfc2b96543d826c72c5484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:52Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.274854 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077e47b3-6224-4749-9710-d2b308b43208\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa06c3d835def34e52c4a9b4b87d9dc8998cdefbb5eaf7c8046bf263857ef8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e698ff120a5858fa787a65c1bdaa3966dcb8974df9cbca40470f6ec58bca5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://553f6c2b8ff41184065bcf707d657326891027d0c5b8390ce50f53cdfa654d2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30521319002f52220ec6c1e4c92862f5a81e1dcace01f4a4474e3a2441b955c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:52Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.288792 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:52Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.304119 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h28gn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h28gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:52Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.316251 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-42mq5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc8e905-b368-49e8-adfa-31890665e5ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49cefc1948611ccad178b25e80e75e81bdf1b4b578d3fb58fa7c342d22debadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-42mq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:52Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.327288 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5dc99ccba68df748aa327298285fec6936c75a3327906d9c789bf75c04815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59520d6be8547ef44262866e4c11b1ae43ae8ef545545a93c291f5e238718a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hg78p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:52Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.339769 4893 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.339801 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.339812 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.339828 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.339839 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:52Z","lastTransitionTime":"2026-01-21T06:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.412322 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6719fb30-da06-4964-b730-09e444618d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54a34267222e86c869cef2b23e0a1d5c65a3c2e0
3e0e76367a0e700a62fcb10f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzsg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:52Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.432312 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2101f59b-4610-4451-83eb-86fe80385cf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a82b561fe0d124a785d8417b0f810757464a5ccc70c032a46eb0a4ad932939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2a508699e746bc42337b9e10d1cb94b36eb53292a5ca91de2e8f03eb8f671c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf06f9b5e844685f04ee12cbf239e285f1597f6a3c6444a4160596392905c4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:54:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 06:54:40.367563 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 06:54:40.368234 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:54:40.369436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4080492758/tls.crt::/tmp/serving-cert-4080492758/tls.key\\\\\\\"\\\\nI0121 06:54:40.606405 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:54:40.609631 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:54:40.609649 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:54:40.609684 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:54:40.609691 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:54:40.617391 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 06:54:40.617410 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617413 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617418 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:54:40.617421 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:54:40.617423 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:54:40.617426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 06:54:40.617614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 06:54:40.618646 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf70c5621061fc94a32901eb6f15a0d15b2ceba333d27cf88624bf9aa4ebe82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:52Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.441772 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.441798 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.441806 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.441820 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.441829 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:52Z","lastTransitionTime":"2026-01-21T06:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.450815 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f8eaf9a35d64680bb488050b8821c821635ec7bc1f53bdcd5bb3f5f4bfead3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:52Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.466707 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:52Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.486337 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m8k4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f4a3074a4406cdbdf07c7289f9304d66e2b84b46bf0ac9c6aadf31817539dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2qn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m8k4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:52Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.498624 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ee491ea29d016cb1b74fc66b386aa8056d1d8b3c7ad207cf329749db2b4d638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e705e9b341a3c711cf78ffd1fde692a9517b06fcdcfc2b96543d826c72c5484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:52Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.512527 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1095483a1c6cc4597500607b4423c12c3fc03500c2f3b8f3fc5fc6eae8c34d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:52Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.524580 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlrc6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e26ce1b-e6f7-4612-aa11-69ad21c97870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64b144aa65cc6fbbe03fe4268155648a64e7360a0415e11a86fbc0373af5a4d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j65k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlrc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:52Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.541106 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2101f59b-4610-4451-83eb-86fe80385cf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a82b561fe0d124a785d8417b0f810757464a5ccc70c032a46eb0a4ad932939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2a508699e746bc42337b9e10d1cb94b36eb53292a5ca91de2e8f03eb8f671c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf06f9b5e844685f04ee12cbf239e285f1597f6a3c6444a4160596392905c4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:54:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 06:54:40.367563 1 builder.go:272] unable to get owner 
reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 06:54:40.368234 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:54:40.369436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4080492758/tls.crt::/tmp/serving-cert-4080492758/tls.key\\\\\\\"\\\\nI0121 06:54:40.606405 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:54:40.609631 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:54:40.609649 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:54:40.609684 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:54:40.609691 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:54:40.617391 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 06:54:40.617410 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617413 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617418 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:54:40.617421 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:54:40.617423 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:54:40.617426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 06:54:40.617614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 06:54:40.618646 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf70c5621061fc94a32901eb6f15a0d15b2ceba333d27cf88624bf9aa4ebe82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:52Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.543646 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.543708 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.543818 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.543834 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.543843 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:52Z","lastTransitionTime":"2026-01-21T06:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.561346 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"077e47b3-6224-4749-9710-d2b308b43208\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa06c3d835def34e52c4a9b4b87d9dc8998cdefbb5eaf7c8046bf263857ef8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e698ff120a5858fa787a65c1bdaa3966dcb8974df9cbca40470f6ec58bca5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://553f6c2b8ff41184065bcf707d657326891027d0c5b8390ce50f53cdfa654d2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30521319002f52220ec6c1e4c92862f5a81e1dcace01f4a4474e3a2441b955c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:52Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.576217 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:52Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.579949 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.579991 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.579952 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:54:52 crc kubenswrapper[4893]: E0121 06:54:52.580108 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:54:52 crc kubenswrapper[4893]: E0121 06:54:52.580225 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:54:52 crc kubenswrapper[4893]: E0121 06:54:52.580324 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.592306 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h28gn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secre
ts/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"exit
Code\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h28gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:52Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.604617 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-42mq5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc8e905-b368-49e8-adfa-31890665e5ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49cefc1948611ccad178b25e80e75e81bdf1b4b578d3fb58fa7c342d22debadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-42mq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:52Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.615546 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5dc99ccba68df748aa327298285fec6936c75a3327906d9c789bf75c04815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59520d6be8547ef44262866e4c11b1ae43ae8ef545545a93c291f5e238718a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hg78p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:52Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.633742 4893 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 13:52:43.437643994 +0000 UTC Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.634662 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6719fb30-da06-4964-b730-09e444618d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54a34267222e86c869cef2b23e0a1d5c65a3c2e03e0e76367a0e700a62fcb10f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\"
:\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzsg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:52Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.647573 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.647616 4893 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.647626 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.647642 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.647651 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:52Z","lastTransitionTime":"2026-01-21T06:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.654488 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:52Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.750709 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.750782 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.750800 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.750826 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.750844 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:52Z","lastTransitionTime":"2026-01-21T06:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.853539 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.853592 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.853603 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.853620 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.853632 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:52Z","lastTransitionTime":"2026-01-21T06:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.956303 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.956582 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.956602 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.956632 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:52 crc kubenswrapper[4893]: I0121 06:54:52.956651 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:52Z","lastTransitionTime":"2026-01-21T06:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:53 crc kubenswrapper[4893]: I0121 06:54:53.059963 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:53 crc kubenswrapper[4893]: I0121 06:54:53.060032 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:53 crc kubenswrapper[4893]: I0121 06:54:53.060051 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:53 crc kubenswrapper[4893]: I0121 06:54:53.060078 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:53 crc kubenswrapper[4893]: I0121 06:54:53.060097 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:53Z","lastTransitionTime":"2026-01-21T06:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:53 crc kubenswrapper[4893]: I0121 06:54:53.128175 4893 generic.go:334] "Generic (PLEG): container finished" podID="708c6ae7-fdf7-44d1-ae88-f6abbb247f93" containerID="4cc1e630d2e854e97d3e156ca2c28de365e095aaf1fe7b6779d2a6b938c51024" exitCode=0 Jan 21 06:54:53 crc kubenswrapper[4893]: I0121 06:54:53.128259 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-h28gn" event={"ID":"708c6ae7-fdf7-44d1-ae88-f6abbb247f93","Type":"ContainerDied","Data":"4cc1e630d2e854e97d3e156ca2c28de365e095aaf1fe7b6779d2a6b938c51024"} Jan 21 06:54:53 crc kubenswrapper[4893]: I0121 06:54:53.143476 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f8eaf9a35d64680bb488050b8821c821635ec7bc1f53bdcd5bb3f5f4bfead3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:53Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:53 crc kubenswrapper[4893]: I0121 06:54:53.285510 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:53 crc kubenswrapper[4893]: I0121 06:54:53.285552 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:53 crc kubenswrapper[4893]: I0121 06:54:53.285560 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:53 crc kubenswrapper[4893]: I0121 06:54:53.285574 4893 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:53 crc kubenswrapper[4893]: I0121 06:54:53.285584 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:53Z","lastTransitionTime":"2026-01-21T06:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:53 crc kubenswrapper[4893]: I0121 06:54:53.298811 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:53Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:53 crc kubenswrapper[4893]: I0121 06:54:53.364906 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m8k4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f4a3074a4406cdbdf07c7289f9304d66e2b84b46bf0ac9c6aadf31817539dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2qn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m8k4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:53Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:53 crc kubenswrapper[4893]: I0121 06:54:53.376011 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1095483a1c6cc4597500607b4423c12c3fc03500c2f3b8f3fc5fc6eae8c34d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:53Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:53 crc kubenswrapper[4893]: I0121 06:54:53.384058 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlrc6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e26ce1b-e6f7-4612-aa11-69ad21c97870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64b144aa65cc6fbbe03fe4268155648a64e7360a0415e11a86fbc0373af5a4d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j65k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlrc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:53Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:53 crc kubenswrapper[4893]: I0121 06:54:53.387812 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:53 crc kubenswrapper[4893]: I0121 06:54:53.387855 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:53 crc kubenswrapper[4893]: I0121 06:54:53.387867 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:53 crc kubenswrapper[4893]: I0121 06:54:53.387881 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 
06:54:53 crc kubenswrapper[4893]: I0121 06:54:53.387891 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:53Z","lastTransitionTime":"2026-01-21T06:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:53 crc kubenswrapper[4893]: I0121 06:54:53.394352 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ee491ea29d016cb1b74fc66b386aa8056d1d8b3c7ad207cf329749db2b4d638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e705e9b341a3c711cf78ffd1fde692a9517b06fcdcfc2b96543d826c72c5484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:53Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:53 crc kubenswrapper[4893]: I0121 06:54:53.405096 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"077e47b3-6224-4749-9710-d2b308b43208\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa06c3d835def34e52c4a9b4b87d9dc8998cdefbb5eaf7c8046bf263857ef8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e698ff120a5858fa787a65c1bdaa3966dcb8974df9cbca40470f6ec58bca5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://553f6c2b8ff41184065bcf707d657326891027d0c5b8390ce50f53cdfa654d2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30521319002f52220ec6c1e4c92862f5a81e1dcace01f4a4474e3a2441b955c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:53Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:53 crc kubenswrapper[4893]: I0121 06:54:53.415070 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:53Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:53 crc kubenswrapper[4893]: I0121 06:54:53.430631 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h28gn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc1e630d2e854e97d3e156ca2c28de365e095aaf1fe7b6779d2a6b938c51024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cc1e630d2e854e97d3e156ca2c28de365e095aaf1fe7b6779d2a6b938c51024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h28gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:53Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:53 crc kubenswrapper[4893]: I0121 06:54:53.439615 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-42mq5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc8e905-b368-49e8-adfa-31890665e5ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49cefc1948611ccad178b25e80e75e81bdf1b4b578d3fb58fa7c342d22debadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-42mq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:53Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:53 crc kubenswrapper[4893]: I0121 06:54:53.450124 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5dc99ccba68df748aa327298285fec6936c75a3327906d9c789bf75c04815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59520d6be8547ef44262866e4c11b1ae43ae8ef545545a93c291f5e238718a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hg78p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:53Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:53 crc kubenswrapper[4893]: I0121 06:54:53.467278 4893 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6719fb30-da06-4964-b730-09e444618d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54a34267222e86c869cef2b23e0a1d5c65a3c2e03e0e76367a0e700a62fcb10f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzsg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:53Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:53 crc kubenswrapper[4893]: I0121 06:54:53.479340 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2101f59b-4610-4451-83eb-86fe80385cf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a82b561fe0d124a785d8417b0f810757464a5ccc70c032a46eb0a4ad932939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2a508699e746bc42337b9e10d1cb94b36eb53292a5ca91de2e8f03eb8f671c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf06f9b5e844685f04ee12cbf239e285f1597f6a3c6444a4160596392905c4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:54:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 06:54:40.367563 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 06:54:40.368234 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:54:40.369436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4080492758/tls.crt::/tmp/serving-cert-4080492758/tls.key\\\\\\\"\\\\nI0121 06:54:40.606405 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:54:40.609631 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:54:40.609649 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:54:40.609684 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:54:40.609691 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:54:40.617391 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 06:54:40.617410 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617413 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617418 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:54:40.617421 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:54:40.617423 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:54:40.617426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 06:54:40.617614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 06:54:40.618646 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf70c5621061fc94a32901eb6f15a0d15b2ceba333d27cf88624bf9aa4ebe82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:53Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:53 crc kubenswrapper[4893]: I0121 06:54:53.490455 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:53 crc kubenswrapper[4893]: I0121 06:54:53.490484 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:53 crc kubenswrapper[4893]: I0121 06:54:53.490493 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:53 crc kubenswrapper[4893]: I0121 06:54:53.490507 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:53 crc kubenswrapper[4893]: I0121 06:54:53.490516 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:53Z","lastTransitionTime":"2026-01-21T06:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:53 crc kubenswrapper[4893]: I0121 06:54:53.491029 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:53Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:53 crc kubenswrapper[4893]: I0121 06:54:53.601210 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:53 crc kubenswrapper[4893]: I0121 06:54:53.601281 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:53 crc kubenswrapper[4893]: I0121 06:54:53.601295 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:53 crc kubenswrapper[4893]: I0121 06:54:53.601315 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:53 crc kubenswrapper[4893]: I0121 06:54:53.601374 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:53Z","lastTransitionTime":"2026-01-21T06:54:53Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:53 crc kubenswrapper[4893]: I0121 06:54:53.633985 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 19:45:51.540626098 +0000 UTC Jan 21 06:54:53 crc kubenswrapper[4893]: I0121 06:54:53.704605 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:53 crc kubenswrapper[4893]: I0121 06:54:53.704646 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:53 crc kubenswrapper[4893]: I0121 06:54:53.704658 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:53 crc kubenswrapper[4893]: I0121 06:54:53.704697 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:53 crc kubenswrapper[4893]: I0121 06:54:53.704710 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:53Z","lastTransitionTime":"2026-01-21T06:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:53 crc kubenswrapper[4893]: I0121 06:54:53.807798 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:53 crc kubenswrapper[4893]: I0121 06:54:53.807885 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:53 crc kubenswrapper[4893]: I0121 06:54:53.807917 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:53 crc kubenswrapper[4893]: I0121 06:54:53.807949 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:53 crc kubenswrapper[4893]: I0121 06:54:53.807972 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:53Z","lastTransitionTime":"2026-01-21T06:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:53 crc kubenswrapper[4893]: I0121 06:54:53.911364 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:53 crc kubenswrapper[4893]: I0121 06:54:53.911393 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:53 crc kubenswrapper[4893]: I0121 06:54:53.911404 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:53 crc kubenswrapper[4893]: I0121 06:54:53.911420 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:53 crc kubenswrapper[4893]: I0121 06:54:53.911431 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:53Z","lastTransitionTime":"2026-01-21T06:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.013849 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.013902 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.013915 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.013931 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.013942 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:54Z","lastTransitionTime":"2026-01-21T06:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.116899 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.116945 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.116955 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.116970 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.116979 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:54Z","lastTransitionTime":"2026-01-21T06:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.137489 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-h28gn" event={"ID":"708c6ae7-fdf7-44d1-ae88-f6abbb247f93","Type":"ContainerStarted","Data":"ef1a4b3d1dc6d23382f8cbbc07674981a9fb90c5068318d8f78e87b0af85b5ee"} Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.154261 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:54Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.173200 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f8eaf9a35d64680bb488050b8821c821635ec7bc1f53bdcd5bb3f5f4bfead3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:54Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.185340 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:54Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.202826 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m8k4g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f4a3074a4406cdbdf07c7289f9304d66e2b84b46bf0ac9c6aadf31817539dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2qn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m8k4g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:54Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.217092 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ee491ea29d016cb1b74fc66b386aa8056d1d8b3c7ad207cf329749db2b4d638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e705e9b341a3c711cf78ffd1fde692a9517b06fcdcfc2b96543d826c72c5484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:54Z is after 
2025-08-24T17:21:41Z" Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.219185 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.219227 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.219243 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.219263 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.219277 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:54Z","lastTransitionTime":"2026-01-21T06:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.229812 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1095483a1c6cc4597500607b4423c12c3fc03500c2f3b8f3fc5fc6eae8c34d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:54Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 
06:54:54.242780 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlrc6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e26ce1b-e6f7-4612-aa11-69ad21c97870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64b144aa65cc6fbbe03fe4268155648a64e7360a0415e11a86fbc0373af5a4d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j65k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlrc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:54Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.254334 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-42mq5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc8e905-b368-49e8-adfa-31890665e5ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49cefc1948611ccad178b25e80e75e81bdf1b4b578d3fb58fa7c342d22debadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-42mq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:54Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.264390 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5dc99ccba68df748aa327298285fec6936c75a3327906d9c789bf75c04815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59520d6be8547ef44262866e4c11b1ae43ae8ef545545a93c291f5e238718a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hg78p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:54Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.282150 4893 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6719fb30-da06-4964-b730-09e444618d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54a34267222e86c869cef2b23e0a1d5c65a3c2e03e0e76367a0e700a62fcb10f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzsg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:54Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.294367 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2101f59b-4610-4451-83eb-86fe80385cf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a82b561fe0d124a785d8417b0f810757464a5ccc70c032a46eb0a4ad932939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2a508699e746bc42337b9e10d1cb94b36eb53292a5ca91de2e8f03eb8f671c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf06f9b5e844685f04ee12cbf239e285f1597f6a3c6444a4160596392905c4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:54:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 06:54:40.367563 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 06:54:40.368234 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:54:40.369436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4080492758/tls.crt::/tmp/serving-cert-4080492758/tls.key\\\\\\\"\\\\nI0121 06:54:40.606405 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:54:40.609631 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:54:40.609649 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:54:40.609684 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:54:40.609691 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:54:40.617391 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 06:54:40.617410 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617413 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617418 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:54:40.617421 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:54:40.617423 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:54:40.617426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 06:54:40.617614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 06:54:40.618646 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf70c5621061fc94a32901eb6f15a0d15b2ceba333d27cf88624bf9aa4ebe82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:54Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.305040 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077e47b3-6224-4749-9710-d2b308b43208\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa06c3d835def34e52c4a9b4b87d9dc8998cdefbb5eaf7c8046bf263857ef8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e698ff120a5858fa787a65c1bdaa3966dcb8974df9cbca40470f6ec58bca5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://553f6c2b8ff41184065bcf707d657326891027d0c5b8390ce50f53cdfa654d2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30521319002f52220ec6c1e4c92862f5a81e1dcace01f4a4474e3a2441b955c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:54Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.314908 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:54Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.321170 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.321226 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.321238 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.321257 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.321610 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:54Z","lastTransitionTime":"2026-01-21T06:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.329259 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h28gn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef1a4b3d1dc6d23382f8cbbc07674981a9fb90c5068318d8f78e87b0af85b5ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc1e630d2e854e97d3e156ca2c28de365e095aaf1fe7b6779d2a6b938c51024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cc1e630d2e854e97d3e156ca2c28de365e095aaf1fe7b6779d2a6b938c51024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h28gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:54Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.423630 4893 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.423657 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.423664 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.423706 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.423724 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:54Z","lastTransitionTime":"2026-01-21T06:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.526119 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.526169 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.526183 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.526202 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.526217 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:54Z","lastTransitionTime":"2026-01-21T06:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.580171 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.580171 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:54:54 crc kubenswrapper[4893]: E0121 06:54:54.580328 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.580448 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:54:54 crc kubenswrapper[4893]: E0121 06:54:54.580549 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:54:54 crc kubenswrapper[4893]: E0121 06:54:54.580730 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.628940 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.628985 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.628995 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.629009 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.629047 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:54Z","lastTransitionTime":"2026-01-21T06:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.634322 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 13:01:24.134533887 +0000 UTC Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.731565 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.731602 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.731611 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.731628 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.731639 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:54Z","lastTransitionTime":"2026-01-21T06:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.844577 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.844617 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.844628 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.844646 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.844657 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:54Z","lastTransitionTime":"2026-01-21T06:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.954066 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.954203 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.954212 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.954228 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:54 crc kubenswrapper[4893]: I0121 06:54:54.954240 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:54Z","lastTransitionTime":"2026-01-21T06:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.079776 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.079812 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.079824 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.079860 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.079873 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:55Z","lastTransitionTime":"2026-01-21T06:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.102576 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p7vw6"] Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.103242 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p7vw6" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.106133 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.107051 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.119880 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:55Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.131628 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m8k4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f4a3074a4406cdbdf07c7289f9304d66e2b84b46bf0ac9c6aadf31817539dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2qn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m8k4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:55Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.145325 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f8eaf9a35d64680bb488050b8821c821635ec7bc1f53bdcd5bb3f5f4bfead3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:55Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.160043 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:55Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.172207 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ee491ea29d016cb1b74fc66b386aa8056d1d8b3c7ad207cf329749db2b4d638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e705e9b341a3c711cf78ffd1fde692a9517b06fcdcfc2b96543d826c72c5484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:55Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.183151 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.183231 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.183246 4893 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.183270 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.183305 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:55Z","lastTransitionTime":"2026-01-21T06:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.185445 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1095483a1c6cc4597500607b4423c12c3fc03500c2f3b8f3fc5fc6eae8c34d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:55Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.197100 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlrc6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e26ce1b-e6f7-4612-aa11-69ad21c97870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64b144aa65cc6fbbe03fe4268155648a64e7360a0415e11a86fbc0373af5a4d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j65k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlrc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:55Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.207363 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:55Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.219066 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h28gn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef1a4b3d1dc6d23382f8cbbc07674981a9fb90c5068318d8f78e87b0af85b5ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc1e630d2e854e97d3e156ca2c28de365e095aaf1fe7b6779d2a6b938c51024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cc1e630d2e854e97d3e156ca2c28de365e095aaf1fe7b6779d2a6b938c51024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h28gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:55Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.228315 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-42mq5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc8e905-b368-49e8-adfa-31890665e5ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49cefc1948611ccad178b25e80e75e81bdf1b4b578d3fb58fa7c342d22debadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-42mq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:55Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.243458 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5dc99ccba68df748aa327298285fec6936c75a3327906d9c789bf75c04815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59520d6be8547ef44262866e4c11b1ae43ae8ef545545a93c291f5e238718a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hg78p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:55Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.253809 4893 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v88cx\" (UniqueName: \"kubernetes.io/projected/2bace7a0-7349-45d1-a407-d64a31a0d41c-kube-api-access-v88cx\") pod \"ovnkube-control-plane-749d76644c-p7vw6\" (UID: \"2bace7a0-7349-45d1-a407-d64a31a0d41c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p7vw6" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.253966 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2bace7a0-7349-45d1-a407-d64a31a0d41c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-p7vw6\" (UID: \"2bace7a0-7349-45d1-a407-d64a31a0d41c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p7vw6" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.254123 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2bace7a0-7349-45d1-a407-d64a31a0d41c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-p7vw6\" (UID: \"2bace7a0-7349-45d1-a407-d64a31a0d41c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p7vw6" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.254283 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2bace7a0-7349-45d1-a407-d64a31a0d41c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-p7vw6\" (UID: \"2bace7a0-7349-45d1-a407-d64a31a0d41c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p7vw6" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.262008 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6719fb30-da06-4964-b730-09e444618d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54a34267222e86c869cef2b23e0a1d5c65a3c2e0
3e0e76367a0e700a62fcb10f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzsg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:55Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.279948 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2101f59b-4610-4451-83eb-86fe80385cf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a82b561fe0d124a785d8417b0f810757464a5ccc70c032a46eb0a4ad932939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2a508699e746bc42337b9e10d1cb94b36eb53292a5ca91de2e8f03eb8f671c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf06f9b5e844685f04ee12cbf239e285f1597f6a3c6444a4160596392905c4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:54:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 06:54:40.367563 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 06:54:40.368234 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:54:40.369436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4080492758/tls.crt::/tmp/serving-cert-4080492758/tls.key\\\\\\\"\\\\nI0121 06:54:40.606405 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:54:40.609631 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:54:40.609649 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:54:40.609684 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:54:40.609691 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:54:40.617391 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 06:54:40.617410 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617413 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617418 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:54:40.617421 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:54:40.617423 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:54:40.617426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 06:54:40.617614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 06:54:40.618646 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf70c5621061fc94a32901eb6f15a0d15b2ceba333d27cf88624bf9aa4ebe82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:55Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.286424 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.286478 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.286492 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.286528 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.286545 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:55Z","lastTransitionTime":"2026-01-21T06:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.295492 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"077e47b3-6224-4749-9710-d2b308b43208\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa06c3d835def34e52c4a9b4b87d9dc8998cdefbb5eaf7c8046bf263857ef8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e698ff120a5858fa787a65c1bdaa3966dcb8974df9cbca40470f6ec58bca5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://553f6c2b8ff41184065bcf707d657326891027d0c5b8390ce50f53cdfa654d2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30521319002f52220ec6c1e4c92862f5a81e1dcace01f4a4474e3a2441b955c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:55Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.308023 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p7vw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bace7a0-7349-45d1-a407-d64a31a0d41c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v88cx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v88cx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p7vw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:55Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.355117 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2bace7a0-7349-45d1-a407-d64a31a0d41c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-p7vw6\" (UID: \"2bace7a0-7349-45d1-a407-d64a31a0d41c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p7vw6" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.355201 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v88cx\" (UniqueName: \"kubernetes.io/projected/2bace7a0-7349-45d1-a407-d64a31a0d41c-kube-api-access-v88cx\") pod \"ovnkube-control-plane-749d76644c-p7vw6\" (UID: \"2bace7a0-7349-45d1-a407-d64a31a0d41c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p7vw6" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.355245 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2bace7a0-7349-45d1-a407-d64a31a0d41c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-p7vw6\" (UID: \"2bace7a0-7349-45d1-a407-d64a31a0d41c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p7vw6" 
Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.355275 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2bace7a0-7349-45d1-a407-d64a31a0d41c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-p7vw6\" (UID: \"2bace7a0-7349-45d1-a407-d64a31a0d41c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p7vw6" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.356288 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2bace7a0-7349-45d1-a407-d64a31a0d41c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-p7vw6\" (UID: \"2bace7a0-7349-45d1-a407-d64a31a0d41c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p7vw6" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.357431 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2bace7a0-7349-45d1-a407-d64a31a0d41c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-p7vw6\" (UID: \"2bace7a0-7349-45d1-a407-d64a31a0d41c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p7vw6" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.361272 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2bace7a0-7349-45d1-a407-d64a31a0d41c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-p7vw6\" (UID: \"2bace7a0-7349-45d1-a407-d64a31a0d41c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p7vw6" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.370470 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v88cx\" (UniqueName: \"kubernetes.io/projected/2bace7a0-7349-45d1-a407-d64a31a0d41c-kube-api-access-v88cx\") pod \"ovnkube-control-plane-749d76644c-p7vw6\" (UID: \"2bace7a0-7349-45d1-a407-d64a31a0d41c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p7vw6" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.388661 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.388717 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.388727 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.388746 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.388757 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:55Z","lastTransitionTime":"2026-01-21T06:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.491786 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.491823 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.491834 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.491850 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.491862 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:55Z","lastTransitionTime":"2026-01-21T06:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.557240 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p7vw6" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.594465 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.594502 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.594513 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.594528 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.594538 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:55Z","lastTransitionTime":"2026-01-21T06:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:55 crc kubenswrapper[4893]: W0121 06:54:55.599823 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2bace7a0_7349_45d1_a407_d64a31a0d41c.slice/crio-9b697c255405fc5b76bb52fe0ec78ea2069c1305fe25050487f11bffc9a85599 WatchSource:0}: Error finding container 9b697c255405fc5b76bb52fe0ec78ea2069c1305fe25050487f11bffc9a85599: Status 404 returned error can't find the container with id 9b697c255405fc5b76bb52fe0ec78ea2069c1305fe25050487f11bffc9a85599 Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.635301 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 10:07:17.196658292 +0000 UTC Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.696995 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.697030 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.697042 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.697056 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.697068 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:55Z","lastTransitionTime":"2026-01-21T06:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.800285 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.800349 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.800366 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.800413 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.800435 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:55Z","lastTransitionTime":"2026-01-21T06:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.902842 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.902888 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.902901 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.902919 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:55 crc kubenswrapper[4893]: I0121 06:54:55.902932 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:55Z","lastTransitionTime":"2026-01-21T06:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.005795 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.005840 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.005851 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.005867 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.005881 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:56Z","lastTransitionTime":"2026-01-21T06:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.109115 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.109165 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.109179 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.109197 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.109209 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:56Z","lastTransitionTime":"2026-01-21T06:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.146703 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p7vw6" event={"ID":"2bace7a0-7349-45d1-a407-d64a31a0d41c","Type":"ContainerStarted","Data":"9b697c255405fc5b76bb52fe0ec78ea2069c1305fe25050487f11bffc9a85599"} Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.212001 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.212063 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.212085 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.212112 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.212129 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:56Z","lastTransitionTime":"2026-01-21T06:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.263418 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.263563 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:54:56 crc kubenswrapper[4893]: E0121 06:54:56.263698 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:55:12.263630862 +0000 UTC m=+53.493976804 (durationBeforeRetry 16s). 
Jan 21 06:54:56 crc kubenswrapper[4893]: E0121 06:54:56.263757 4893 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.263771 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 06:54:56 crc kubenswrapper[4893]: E0121 06:54:56.263855 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 06:55:12.263831497 +0000 UTC m=+53.494177439 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 21 06:54:56 crc kubenswrapper[4893]: E0121 06:54:56.263894 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 21 06:54:56 crc kubenswrapper[4893]: E0121 06:54:56.263918 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 21 06:54:56 crc kubenswrapper[4893]: E0121 06:54:56.263940 4893 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.264022 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.264089 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
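The "object ... not registered" errors here typically mean the kubelet's watch-based secret/configmap manager has not yet registered those objects again after the restart, not that they are missing from the API. A sanity check that the referenced objects do exist, assuming a reachable API server, might look like:

    # Do the objects the volume plugins are asking for actually exist?
    oc get secret networking-console-plugin-cert -n openshift-network-console
    oc get configmap kube-root-ca.crt openshift-service-ca.crt -n openshift-network-diagnostics

If they exist, the errors should clear on their own once the kubelet re-syncs the pods; the 16s retry backoff above is the expected behavior in the meantime.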
Jan 21 06:54:56 crc kubenswrapper[4893]: E0121 06:54:56.264149 4893 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 21 06:54:56 crc kubenswrapper[4893]: E0121 06:54:56.264164 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 06:55:12.264143156 +0000 UTC m=+53.494489098 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 06:54:56 crc kubenswrapper[4893]: E0121 06:54:56.264271 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 06:55:12.264245579 +0000 UTC m=+53.494591521 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 21 06:54:56 crc kubenswrapper[4893]: E0121 06:54:56.264272 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 21 06:54:56 crc kubenswrapper[4893]: E0121 06:54:56.264300 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 21 06:54:56 crc kubenswrapper[4893]: E0121 06:54:56.264335 4893 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 06:54:56 crc kubenswrapper[4893]: E0121 06:54:56.264617 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 06:55:12.264379943 +0000 UTC m=+53.494725995 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.316849 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.316924 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.316936 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.316963 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.316999 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:56Z","lastTransitionTime":"2026-01-21T06:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.420210 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.420251 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.420263 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.420281 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.420293 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:56Z","lastTransitionTime":"2026-01-21T06:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.523415 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.523450 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.523460 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.523475 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.523486 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:56Z","lastTransitionTime":"2026-01-21T06:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.580829 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 06:54:56 crc kubenswrapper[4893]: E0121 06:54:56.580949 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.580982 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.581040 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 06:54:56 crc kubenswrapper[4893]: E0121 06:54:56.581146 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 06:54:56 crc kubenswrapper[4893]: E0121 06:54:56.581242 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
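"No sandbox for pod can be found" followed by "Error syncing pod, skipping" means the kubelet will not even attempt sandbox creation for these pods while the network is not ready; they stay pending rather than failing outright. To watch this from the runtime side, assuming crictl is configured against CRI-O on the node, one could run:

    # Which pod sandboxes does CRI-O know about? (the three pods above should be missing)
    crictl pods --name network-check-target-xd92c
    crictl pods --name networking-console-plugin
    # Follow the kubelet's retry loop live
    journalctl -u kubelet -f | grep -E 'No sandbox|Error syncing pod'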
Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.626215 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.626296 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.626316 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.626345 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.626365 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:56Z","lastTransitionTime":"2026-01-21T06:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.636435 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 03:20:08.117691628 +0000 UTC
Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.728879 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.728932 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.728943 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.728960 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.728970 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:56Z","lastTransitionTime":"2026-01-21T06:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
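The certificate_manager line is worth pausing on: the kubelet-serving certificate is valid until 2026-02-24, but its rotation deadline (2025-12-30) is already in the past relative to the node clock, which is consistent with a VM whose clock jumped forward. The same clock skew explains the webhook failures recorded further below, where a cert that expired 2025-08-24 is being presented at 2026-01-21. A direct check of the on-disk dates, assuming the default OpenShift kubelet PKI path, might look like:

    # Kubelet serving certificate validity window (default location; adjust if customized)
    openssl x509 -noout -dates -in /var/lib/kubelet/pki/kubelet-server-current.pem
    # Compare against the node clock
    date -u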
Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.831857 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.831925 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.831940 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.831969 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.831990 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:56Z","lastTransitionTime":"2026-01-21T06:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.935316 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.935395 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.935417 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.935451 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:54:56 crc kubenswrapper[4893]: I0121 06:54:56.935473 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:56Z","lastTransitionTime":"2026-01-21T06:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.038026 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.038070 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.038081 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.038100 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.038112 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:57Z","lastTransitionTime":"2026-01-21T06:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.141601 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.141689 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.141711 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.141736 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.141755 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:57Z","lastTransitionTime":"2026-01-21T06:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.156544 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p7vw6" event={"ID":"2bace7a0-7349-45d1-a407-d64a31a0d41c","Type":"ContainerStarted","Data":"ecad777c0a42352ca734f5f85952ab369e5cc132f06f748983d7c11949f0fe58"}
Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.157062 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p7vw6" event={"ID":"2bace7a0-7349-45d1-a407-d64a31a0d41c","Type":"ContainerStarted","Data":"f00d5d862b54660a5df58a9c9df0b42a453f6990789e83d5e6f67aab68471665"}
Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.174752 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ee491ea29d016cb1b74fc66b386aa8056d1d8b3c7ad207cf329749db2b4d638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e705e9b341a3c711cf78ffd1fde692a9517b06fcdcfc2b96543d826c72c5484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:57Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.187931 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1095483a1c6cc4597500607b4423c12c3fc03500c2f3b8f3fc5fc6eae8c34d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:57Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.198192 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlrc6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e26ce1b-e6f7-4612-aa11-69ad21c97870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64b144aa65cc6fbbe03fe4268155648a64e7360a0415e11a86fbc0373af5a4d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j65k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlrc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:57Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.216780 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6719fb30-da06-4964-b730-09e444618d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54a34267222e86c869cef2b23e0a1d5c65a3c2e03e0e76367a0e700a62fcb10f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzsg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:57Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.230935 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2101f59b-4610-4451-83eb-86fe80385cf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a82b561fe0d124a785d8417b0f810757464a5ccc70c032a46eb0a4ad932939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2a508699e746bc42337b9e10d1cb94b36eb53292a5ca91de2e8f03eb8f671c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf06f9b5e844685f04ee12cbf239e285f1597f6a3c6444a4160596392905c4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:54:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 06:54:40.367563 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 06:54:40.368234 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:54:40.369436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4080492758/tls.crt::/tmp/serving-cert-4080492758/tls.key\\\\\\\"\\\\nI0121 06:54:40.606405 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:54:40.609631 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:54:40.609649 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:54:40.609684 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:54:40.609691 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:54:40.617391 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 06:54:40.617410 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617413 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617418 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:54:40.617421 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:54:40.617423 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:54:40.617426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 06:54:40.617614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 06:54:40.618646 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf70c5621061fc94a32901eb6f15a0d15b2ceba333d27cf88624bf9aa4ebe82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:57Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.245117 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.245170 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.245182 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.245203 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.245215 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:57Z","lastTransitionTime":"2026-01-21T06:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.284577 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"077e47b3-6224-4749-9710-d2b308b43208\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa06c3d835def34e52c4a9b4b87d9dc8998cdefbb5eaf7c8046bf263857ef8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e698ff120a5858fa787a65c1bdaa3966dcb8974df9cbca40470f6ec58bca5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://553f6c2b8ff41184065bcf707d657326891027d0c5b8390ce50f53cdfa654d2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30521319002f52220ec6c1e4c92862f5a81e1dcace01f4a4474e3a2441b955c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:57Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.299973 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:57Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.317889 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h28gn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef1a4b3d1dc6d23382f8cbbc07674981a9fb90c5068318d8f78e87b0af85b5ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc1e630d2e854e97d3e156ca2c28de365e095aaf1fe7b6779d2a6b938c51024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cc1e630d2e854e97d3e156ca2c28de365e095aaf1fe7b6779d2a6b938c51024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h28gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:57Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.332981 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-42mq5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc8e905-b368-49e8-adfa-31890665e5ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49cefc1948611ccad178b25e80e75e81bdf1b4b578d3fb58fa7c342d22debadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-42mq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:57Z is after 2025-08-24T17:21:41Z" Jan 
21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.348384 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.348452 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.348467 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.348492 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.348507 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:57Z","lastTransitionTime":"2026-01-21T06:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.417587 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5dc99ccba68df748aa327298285fec6936c75a3327906d9c789bf75c04815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59520d6be8547ef44262866e4c11b1ae43ae8ef545545a93c291f5e238718a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225
c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hg78p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:57Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.433570 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p7vw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bace7a0-7349-45d1-a407-d64a31a0d41c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f00d5d862b54660a5df58a9c9df0b42a453f6990789e83d5e6f67aab68471665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v88cx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecad777c0a42352ca734f5f85952ab369e5cc132f06f748983d7c11949f0fe58\\\",\\\"
image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v88cx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p7vw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:57Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.451868 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.451922 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.451933 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.451954 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.451966 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:57Z","lastTransitionTime":"2026-01-21T06:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.452054 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:57Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.469327 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f8eaf9a35d64680bb488050b8821c821635ec7bc1f53bdcd5bb3f5f4bfead3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:57Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.492665 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:57Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.508616 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m8k4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f4a3074a4406cdbdf07c7289f9304d66e2b84b46bf0ac9c6aadf31817539dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-l
ib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2qn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m8k4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:57Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.554762 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.554806 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.554816 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.554830 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.554839 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:57Z","lastTransitionTime":"2026-01-21T06:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.637236 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 20:36:12.009517805 +0000 UTC Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.657629 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.657704 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.657714 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.657732 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.657742 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:57Z","lastTransitionTime":"2026-01-21T06:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.761036 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.761098 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.761113 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.761141 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.761156 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:57Z","lastTransitionTime":"2026-01-21T06:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.808028 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-rc5gb"] Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.866500 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:54:57 crc kubenswrapper[4893]: E0121 06:54:57.866762 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rc5gb" podUID="e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8" Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.868732 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.868812 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.868835 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.868867 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.868880 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:57Z","lastTransitionTime":"2026-01-21T06:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.883912 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:57Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.912788 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f8eaf9a35d64680bb488050b8821c821635ec7bc1f53bdcd5bb3f5f4bfead3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:57Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.935221 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:57Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.950376 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m8k4g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f4a3074a4406cdbdf07c7289f9304d66e2b84b46bf0ac9c6aadf31817539dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2qn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m8k4g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:57Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.960801 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rc5gb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jprb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jprb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rc5gb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:57Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:57 crc 
kubenswrapper[4893]: I0121 06:54:57.971204 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.971247 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.971261 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.971276 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.971288 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:57Z","lastTransitionTime":"2026-01-21T06:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.974101 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ee491ea29d016cb1b74fc66b386aa8056d1d8b3c7ad207cf329749db2b4d638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e705e9b341a3c711cf78ffd1fde692a9517b06fcdcfc2b96543d826c72c5484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\
\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:57Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.983251 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8-metrics-certs\") pod \"network-metrics-daemon-rc5gb\" (UID: \"e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8\") " pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.983299 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jprb6\" (UniqueName: \"kubernetes.io/projected/e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8-kube-api-access-jprb6\") pod \"network-metrics-daemon-rc5gb\" (UID: \"e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8\") " pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.985299 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1095483a1c6cc4597500607b4423c12c3fc03500c2f3b8f3fc5fc6eae8c34d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:57Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:57 crc kubenswrapper[4893]: I0121 06:54:57.995221 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlrc6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e26ce1b-e6f7-4612-aa11-69ad21c97870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64b144aa65cc6fbbe03fe4268155648a64e7360a0415e11a86fbc0373af5a4d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j65k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlrc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:57Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.022207 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6719fb30-da06-4964-b730-09e444618d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54a34267222e86c869cef2b23e0a1d5c65a3c2e03e0e76367a0e700a62fcb10f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzsg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:58Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.036459 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2101f59b-4610-4451-83eb-86fe80385cf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a82b561fe0d124a785d8417b0f810757464a5ccc70c032a46eb0a4ad932939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2a508699e746bc42337b9e10d1cb94b36eb53292a5ca91de2e8f03eb8f671c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf06f9b5e844685f04ee12cbf239e285f1597f6a3c6444a4160596392905c4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:54:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 06:54:40.367563 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 06:54:40.368234 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:54:40.369436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4080492758/tls.crt::/tmp/serving-cert-4080492758/tls.key\\\\\\\"\\\\nI0121 06:54:40.606405 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:54:40.609631 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:54:40.609649 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:54:40.609684 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:54:40.609691 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:54:40.617391 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 06:54:40.617410 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617413 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617418 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:54:40.617421 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:54:40.617423 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:54:40.617426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 06:54:40.617614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 06:54:40.618646 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf70c5621061fc94a32901eb6f15a0d15b2ceba333d27cf88624bf9aa4ebe82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:58Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.047444 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077e47b3-6224-4749-9710-d2b308b43208\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa06c3d835def34e52c4a9b4b87d9dc8998cdefbb5eaf7c8046bf263857ef8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e698ff120a5858fa787a65c1bdaa3966dcb8974df9cbca40470f6ec58bca5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://553f6c2b8ff41184065bcf707d657326891027d0c5b8390ce50f53cdfa654d2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30521319002f52220ec6c1e4c92862f5a81e1dcace01f4a4474e3a2441b955c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:58Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.063843 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:58Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.073358 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.073395 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.073421 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.073436 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.073444 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:58Z","lastTransitionTime":"2026-01-21T06:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.076503 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h28gn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef1a4b3d1dc6d23382f8cbbc07674981a9fb90c5068318d8f78e87b0af85b5ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc1e630d2e854e97d3e156ca2c28de365e095aaf1fe7b6779d2a6b938c51024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cc1e630d2e854e97d3e156ca2c28de365e095aaf1fe7b6779d2a6b938c51024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h28gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:58Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.084261 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8-metrics-certs\") pod \"network-metrics-daemon-rc5gb\" (UID: \"e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8\") " pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.084306 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jprb6\" (UniqueName: \"kubernetes.io/projected/e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8-kube-api-access-jprb6\") pod \"network-metrics-daemon-rc5gb\" (UID: \"e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8\") " pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:54:58 crc kubenswrapper[4893]: E0121 06:54:58.084441 4893 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 06:54:58 crc kubenswrapper[4893]: E0121 06:54:58.084514 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8-metrics-certs podName:e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8 nodeName:}" failed. No retries permitted until 2026-01-21 06:54:58.584495321 +0000 UTC m=+39.814841283 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8-metrics-certs") pod "network-metrics-daemon-rc5gb" (UID: "e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.087268 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-42mq5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc8e905-b368-49e8-adfa-31890665e5ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49cefc1948611ccad178b25e80e75e81bdf1b4b578d3fb58fa7c342d22debadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-42mq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:58Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.099323 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5dc99ccba68df748aa327298285fec6936c75a3327906d9c789bf75c04815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59520d6be8547ef44262866e4c11b1ae43ae8ef545545a93c291f5e238718a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hg78p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:58Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.104848 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jprb6\" (UniqueName: \"kubernetes.io/projected/e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8-kube-api-access-jprb6\") pod \"network-metrics-daemon-rc5gb\" (UID: \"e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8\") " pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.109112 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p7vw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bace7a0-7349-45d1-a407-d64a31a0d41c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f00d5d862b54660a5df58a9c9df0b42a453f6990789e83d5e6f67aab68471665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v88cx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecad777c0a42352ca734f5f85952ab369e5cc132f06f748983d7c11949f0fe58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v88cx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p7vw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:58Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.161163 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qzsg6_6719fb30-da06-4964-b730-09e444618d94/ovnkube-controller/0.log" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.164130 4893 generic.go:334] "Generic (PLEG): container finished" podID="6719fb30-da06-4964-b730-09e444618d94" containerID="54a34267222e86c869cef2b23e0a1d5c65a3c2e03e0e76367a0e700a62fcb10f" exitCode=1 Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.164181 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" event={"ID":"6719fb30-da06-4964-b730-09e444618d94","Type":"ContainerDied","Data":"54a34267222e86c869cef2b23e0a1d5c65a3c2e03e0e76367a0e700a62fcb10f"} Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.165010 4893 scope.go:117] "RemoveContainer" containerID="54a34267222e86c869cef2b23e0a1d5c65a3c2e03e0e76367a0e700a62fcb10f" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.178060 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.178097 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.178105 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.178143 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.178153 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:58Z","lastTransitionTime":"2026-01-21T06:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.182697 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f8eaf9a35d64680bb488050b8821c821635ec7bc1f53bdcd5bb3f5f4bfead3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:58Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.199923 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:58Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.216640 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m8k4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f4a3074a4406cdbdf07c7289f9304d66e2b84b46bf0ac9c6aadf31817539dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2qn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m8k4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:58Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.230632 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rc5gb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jprb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jprb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rc5gb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:58Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.245813 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ee491ea29d016cb1b74fc66b386aa8056d1d8b3c7ad207cf329749db2b4d638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e705e9b341a3c711cf78ffd1fde692a9517b06fcdcfc2b96543d826c72c5484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:58Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.258559 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1095483a1c6cc4597500607b4423c12c3fc03500c2f3b8f3fc5fc6eae8c34d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:58Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.268494 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlrc6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e26ce1b-e6f7-4612-aa11-69ad21c97870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64b144aa65cc6fbbe03fe4268155648a64e7360a0415e11a86fbc0373af5a4d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j65k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlrc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:58Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.280539 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.280575 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.280586 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.280601 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.280611 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:58Z","lastTransitionTime":"2026-01-21T06:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.280904 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-42mq5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc8e905-b368-49e8-adfa-31890665e5ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49cefc1948611ccad178b25e80e75e81bdf1b4b578d3fb58fa7c342d22debadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-42mq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:58Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.291311 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5dc99ccba68df748aa327298285fec6936c75a3327906d9c789bf75c04815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59520d6be8547ef44262866e4c11b1ae43ae8ef545545a93c291f5e238718a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hg78p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:58Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.316352 4893 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6719fb30-da06-4964-b730-09e444618d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54a34267222e86c869cef2b23e0a1d5c65a3c2e03e0e76367a0e700a62fcb10f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54a34267222e86c869cef2b23e0a1d5c65a3c2e03e0e76367a0e700a62fcb10f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"message\\\":\\\"39 6118 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 06:54:57.521560 6118 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 06:54:57.521575 6118 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 06:54:57.521539 6118 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 06:54:57.522076 6118 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 06:54:57.522106 6118 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 06:54:57.522122 6118 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 06:54:57.522126 6118 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 06:54:57.522133 6118 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 06:54:57.522183 6118 factory.go:656] Stopping watch factory\\\\nI0121 06:54:57.522197 6118 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 06:54:57.522237 6118 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 06:54:57.522246 6118 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 06:54:57.522252 6118 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 06:54:57.522258 6118 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzsg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:58Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.331490 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2101f59b-4610-4451-83eb-86fe80385cf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a82b561fe0d124a785d8417b0f810757464a5ccc70c032a46eb0a4ad932939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2a508699e746bc42337b9e10d1cb94b36eb53292a5ca91de2e8f03eb8f671c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf06f9b5e844685f04ee12cbf239e285f1597f6a3c6444a4160596392905c4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:54:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 06:54:40.367563 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 06:54:40.368234 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:54:40.369436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4080492758/tls.crt::/tmp/serving-cert-4080492758/tls.key\\\\\\\"\\\\nI0121 06:54:40.606405 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:54:40.609631 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:54:40.609649 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:54:40.609684 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:54:40.609691 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:54:40.617391 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 06:54:40.617410 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617413 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617418 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:54:40.617421 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:54:40.617423 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:54:40.617426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 06:54:40.617614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 06:54:40.618646 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf70c5621061fc94a32901eb6f15a0d15b2ceba333d27cf88624bf9aa4ebe82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:58Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.344593 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077e47b3-6224-4749-9710-d2b308b43208\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa06c3d835def34e52c4a9b4b87d9dc8998cdefbb5eaf7c8046bf263857ef8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e698ff120a5858fa787a65c1bdaa3966dcb8974df9cbca40470f6ec58bca5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://553f6c2b8ff41184065bcf707d657326891027d0c5b8390ce50f53cdfa654d2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30521319002f52220ec6c1e4c92862f5a81e1dcace01f4a4474e3a2441b955c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:58Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.356842 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:58Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.380816 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h28gn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef1a4b3d1dc6d23382f8cbbc07674981a9fb90c5068318d8f78e87b0af85b5ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc1e630d2e854e97d3e156ca2c28de365e095aaf1fe7b6779d2a6b938c51024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cc1e630d2e854e97d3e156ca2c28de365e095aaf1fe7b6779d2a6b938c51024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h28gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:58Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.383014 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.383046 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.383055 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.383069 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.383078 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:58Z","lastTransitionTime":"2026-01-21T06:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.393156 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p7vw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bace7a0-7349-45d1-a407-d64a31a0d41c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f00d5d862b54660a5df58a9c9df0b42a453f6990789e83d5e6f67aab68471665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v88cx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecad777c0a42352ca734f5f85952ab369e5cc132f06f748983d7c11949f0fe58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v88cx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p7vw6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:58Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.407527 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:58Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.486029 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.486060 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.486068 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.486083 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.486093 4893 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:58Z","lastTransitionTime":"2026-01-21T06:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.581777 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:54:58 crc kubenswrapper[4893]: E0121 06:54:58.581899 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.582218 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:54:58 crc kubenswrapper[4893]: E0121 06:54:58.582266 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.582300 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:54:58 crc kubenswrapper[4893]: E0121 06:54:58.582337 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.589924 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.589976 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.589990 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.590009 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.590023 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:58Z","lastTransitionTime":"2026-01-21T06:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.591222 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8-metrics-certs\") pod \"network-metrics-daemon-rc5gb\" (UID: \"e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8\") " pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:54:58 crc kubenswrapper[4893]: E0121 06:54:58.591395 4893 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 06:54:58 crc kubenswrapper[4893]: E0121 06:54:58.591470 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8-metrics-certs podName:e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8 nodeName:}" failed. No retries permitted until 2026-01-21 06:54:59.591452993 +0000 UTC m=+40.821798895 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8-metrics-certs") pod "network-metrics-daemon-rc5gb" (UID: "e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.637655 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 08:52:51.958848722 +0000 UTC Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.693385 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.693441 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.693453 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.693475 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.693488 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:58Z","lastTransitionTime":"2026-01-21T06:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.797413 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.797480 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.797498 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.797513 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.797534 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:58Z","lastTransitionTime":"2026-01-21T06:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.901472 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.901511 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.901519 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.901535 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:58 crc kubenswrapper[4893]: I0121 06:54:58.901550 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:58Z","lastTransitionTime":"2026-01-21T06:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.003967 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.004024 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.004041 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.004064 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.004083 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:59Z","lastTransitionTime":"2026-01-21T06:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.106497 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.106541 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.106550 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.106571 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.106584 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:59Z","lastTransitionTime":"2026-01-21T06:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.426726 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.426762 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.426771 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.426785 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.426794 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:59Z","lastTransitionTime":"2026-01-21T06:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.430942 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qzsg6_6719fb30-da06-4964-b730-09e444618d94/ovnkube-controller/0.log" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.434248 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" event={"ID":"6719fb30-da06-4964-b730-09e444618d94","Type":"ContainerStarted","Data":"915396f894c4438f6ba5e4550e4eac3083558abc334125753f1b9e7d18080e81"} Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.434996 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.457227 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f8eaf9a35d64680bb488050b8821c821635ec7bc1f53bdcd5bb3f5f4bfead3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:59Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.472733 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:59Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.488277 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m8k4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f4a3074a4406cdbdf07c7289f9304d66e2b84b46bf0ac9c6aadf31817539dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-l
ib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2qn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m8k4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:59Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.502746 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rc5gb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jprb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jprb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rc5gb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:59Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.516996 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ee491ea29d016cb1b74fc66b386aa8056d1d8b3c7ad207cf329749db2b4d638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e705e9b341a3c711cf78ffd1fde692a9517b06fcdcfc2b96543d826c72c5484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:59Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.529291 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.529337 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.529349 4893 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.529368 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.529810 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:59Z","lastTransitionTime":"2026-01-21T06:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.531586 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1095483a1c6cc4597500607b4423c12c3fc03500c2f3b8f3fc5fc6eae8c34d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:59Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.542364 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlrc6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e26ce1b-e6f7-4612-aa11-69ad21c97870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64b144aa65cc6fbbe03fe4268155648a64e7360a0415e11a86fbc0373af5a4d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j65k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlrc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:59Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.564732 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6719fb30-da06-4964-b730-09e444618d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://915396f894c4438f6ba5e4550e4eac3083558abc334125753f1b9e7d18080e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54a34267222e86c869cef2b23e0a1d5c65a3c2e03e0e76367a0e700a62fcb10f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"message\\\":\\\"39 6118 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 06:54:57.521560 6118 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 06:54:57.521575 6118 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 06:54:57.521539 6118 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 06:54:57.522076 6118 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 06:54:57.522106 6118 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 06:54:57.522122 6118 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 06:54:57.522126 6118 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 06:54:57.522133 6118 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 06:54:57.522183 6118 factory.go:656] Stopping watch factory\\\\nI0121 06:54:57.522197 6118 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 06:54:57.522237 6118 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 06:54:57.522246 6118 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 06:54:57.522252 6118 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 06:54:57.522258 6118 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzsg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:59Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.580508 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:54:59 crc kubenswrapper[4893]: E0121 06:54:59.580712 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rc5gb" podUID="e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.582474 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2101f59b-4610-4451-83eb-86fe80385cf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a82b561fe0d124a785d8417b0f810757464a5ccc70c032a46eb0a4ad932939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2a508699e746bc42337b9e10d1cb94b36eb53292a5ca91de2e8f03eb8f671c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf06f9b5e844685f04ee12cbf239e285f1597f6a3c6444a4160596392905c4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:54:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 06:54:40.367563 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 06:54:40.368234 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:54:40.369436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4080492758/tls.crt::/tmp/serving-cert-4080492758/tls.key\\\\\\\"\\\\nI0121 06:54:40.606405 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:54:40.609631 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:54:40.609649 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:54:40.609684 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:54:40.609691 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:54:40.617391 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 06:54:40.617410 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617413 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617418 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:54:40.617421 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:54:40.617423 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:54:40.617426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 06:54:40.617614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 06:54:40.618646 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf70c5621061fc94a32901eb6f15a0d15b2ceba333d27cf88624bf9aa4ebe82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:59Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.595155 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077e47b3-6224-4749-9710-d2b308b43208\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa06c3d835def34e52c4a9b4b87d9dc8998cdefbb5eaf7c8046bf263857ef8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e698ff120a5858fa787a65c1bdaa3966dcb8974df9cbca40470f6ec58bca5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://553f6c2b8ff41184065bcf707d657326891027d0c5b8390ce50f53cdfa654d2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30521319002f52220ec6c1e4c92862f5a81e1dcace01f4a4474e3a2441b955c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:59Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.606856 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:59Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.620139 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h28gn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef1a4b3d1dc6d23382f8cbbc07674981a9fb90c5068318d8f78e87b0af85b5ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc1e630d2e854e97d3e156ca2c28de365e095aaf1fe7b6779d2a6b938c51024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cc1e630d2e854e97d3e156ca2c28de365e095aaf1fe7b6779d2a6b938c51024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h28gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:59Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.627824 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8-metrics-certs\") pod \"network-metrics-daemon-rc5gb\" (UID: \"e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8\") " pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:54:59 crc kubenswrapper[4893]: E0121 06:54:59.627983 4893 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 06:54:59 crc kubenswrapper[4893]: E0121 06:54:59.628056 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8-metrics-certs podName:e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8 nodeName:}" failed. No retries permitted until 2026-01-21 06:55:01.628029537 +0000 UTC m=+42.858375439 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8-metrics-certs") pod "network-metrics-daemon-rc5gb" (UID: "e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.630928 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-42mq5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc8e905-b368-49e8-adfa-31890665e5ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49cefc1948611ccad178b25e80e75e81bdf1b4b578d3fb58fa7c342d22debadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-42mq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:59Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.632208 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.632234 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.632244 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.632257 4893 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.632267 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:59Z","lastTransitionTime":"2026-01-21T06:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.638777 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 11:52:38.795826896 +0000 UTC Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.642252 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5dc99ccba68df748aa327298285fec6936c75a3327906d9c789bf75c04815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59520d6be8547ef44262866e4c11b1ae43ae8ef545545a93c291f5e238718a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hg78p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:59Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.655651 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p7vw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bace7a0-7349-45d1-a407-d64a31a0d41c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f00d5d862b54660a5df58a9c9df0b42a453f6990789e83d5e6f67aab68471665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v88cx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecad777c0a42352ca734f5f85952ab369e5cc132f06f748983d7c11949f0fe58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-ma
nager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v88cx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p7vw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:59Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.668862 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:59Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.681607 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:59Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.693634 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f8eaf9a35d64680bb488050b8821c821635ec7bc1f53bdcd5bb3f5f4bfead3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:59Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.705188 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:59Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.717399 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m8k4g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f4a3074a4406cdbdf07c7289f9304d66e2b84b46bf0ac9c6aadf31817539dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2qn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m8k4g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:59Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.729527 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rc5gb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jprb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jprb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rc5gb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:59Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:59 crc 
kubenswrapper[4893]: I0121 06:54:59.734287 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.734324 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.734334 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.734351 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.734362 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:59Z","lastTransitionTime":"2026-01-21T06:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.744832 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1095483a1c6cc4597500607b4423c12c3fc03500c2f3b8f3fc5fc6eae8c34d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:59Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.757139 4893 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-image-registry/node-ca-wlrc6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e26ce1b-e6f7-4612-aa11-69ad21c97870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64b144aa65cc6fbbe03fe4268155648a64e7360a0415e11a86fbc0373af5a4d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j65k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlrc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:59Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.772413 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ee491ea29d016cb1b74fc66b386aa8056d1d8b3c7ad207cf329749db2b4d638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e705e9b341a3c711cf78ffd1fde692a9517b06fcdcfc2b96543d826c72c5484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:59Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.785456 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077e47b3-6224-4749-9710-d2b308b43208\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa06c3d835def34e52c4a9b4b87d9dc8998cdefbb5eaf7c8046bf263857ef8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e698ff120a5858fa787a65c1bdaa3966dcb8974df9cbca40470f6ec58bca5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://553f6c2b8ff41184065bcf707d657326891027d0c5b8390ce50f53cdfa654d2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30521319002f52220ec6c1e4c92862f5a81e1dcace01f4a4474e3a2441b955c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:59Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.799456 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:59Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.814511 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h28gn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef1a4b3d1dc6d23382f8cbbc07674981a9fb90c5068318d8f78e87b0af85b5ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc1e630d2e854e97d3e156ca2c28de365e095aaf1fe7b6779d2a6b938c51024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cc1e630d2e854e97d3e156ca2c28de365e095aaf1fe7b6779d2a6b938c51024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h28gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:59Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.825805 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-42mq5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc8e905-b368-49e8-adfa-31890665e5ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49cefc1948611ccad178b25e80e75e81bdf1b4b578d3fb58fa7c342d22debadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-42mq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:59Z is after 2025-08-24T17:21:41Z" Jan 
21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.835657 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5dc99ccba68df748aa327298285fec6936c75a3327906d9c789bf75c04815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59520d6be8547ef44262866e4c11b1ae43ae8ef545545a93c291f5e238718a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hg78p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:59Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.838196 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.838220 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.838228 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.838241 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.838251 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:59Z","lastTransitionTime":"2026-01-21T06:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.854344 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6719fb30-da06-4964-b730-09e444618d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://915396f894c4438f6ba5e4550e4eac3083558abc
334125753f1b9e7d18080e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54a34267222e86c869cef2b23e0a1d5c65a3c2e03e0e76367a0e700a62fcb10f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"message\\\":\\\"39 6118 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 06:54:57.521560 6118 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 06:54:57.521575 6118 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 06:54:57.521539 6118 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 06:54:57.522076 6118 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 06:54:57.522106 6118 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 06:54:57.522122 6118 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 06:54:57.522126 6118 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 06:54:57.522133 6118 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 06:54:57.522183 6118 factory.go:656] Stopping watch factory\\\\nI0121 06:54:57.522197 6118 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 06:54:57.522237 6118 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 06:54:57.522246 6118 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 06:54:57.522252 6118 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 06:54:57.522258 6118 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzsg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:59Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.868834 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2101f59b-4610-4451-83eb-86fe80385cf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a82b561fe0d124a785d8417b0f810757464a5ccc70c032a46eb0a4ad932939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2a508699e746bc42337b9e10d1cb94b36eb53292a5ca91de2e8f03eb8f671c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf06f9b5e844685f04ee12cbf239e285f1597f6a3c6444a4160596392905c4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:54:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 06:54:40.367563 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 06:54:40.368234 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:54:40.369436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4080492758/tls.crt::/tmp/serving-cert-4080492758/tls.key\\\\\\\"\\\\nI0121 06:54:40.606405 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:54:40.609631 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:54:40.609649 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:54:40.609684 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:54:40.609691 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:54:40.617391 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 06:54:40.617410 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617413 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617418 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:54:40.617421 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:54:40.617423 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:54:40.617426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 06:54:40.617614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 06:54:40.618646 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf70c5621061fc94a32901eb6f15a0d15b2ceba333d27cf88624bf9aa4ebe82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:59Z is after 2025-08-24T17:21:41Z" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.881707 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p7vw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bace7a0-7349-45d1-a407-d64a31a0d41c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f00d5d862b54660a5df58a9c9df0b42a453f6990789e83d5e6f67aab68471665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v88cx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecad777c0a42352ca734f5f85952ab369e5cc132f06f748983d7c11949f0fe58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v88cx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p7vw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:54:59Z is after 2025-08-24T17:21:41Z" Jan 21 
06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.941189 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.941255 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.941273 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.941294 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:54:59 crc kubenswrapper[4893]: I0121 06:54:59.941309 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:54:59Z","lastTransitionTime":"2026-01-21T06:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.044379 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.044420 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.044432 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.044448 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.044459 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:00Z","lastTransitionTime":"2026-01-21T06:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.146853 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.146900 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.146909 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.146925 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.146937 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:00Z","lastTransitionTime":"2026-01-21T06:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.249920 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.249959 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.249969 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.249984 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.249994 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:00Z","lastTransitionTime":"2026-01-21T06:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.352526 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.352590 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.352608 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.352632 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.352649 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:00Z","lastTransitionTime":"2026-01-21T06:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.441163 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qzsg6_6719fb30-da06-4964-b730-09e444618d94/ovnkube-controller/1.log" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.442131 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qzsg6_6719fb30-da06-4964-b730-09e444618d94/ovnkube-controller/0.log" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.445960 4893 generic.go:334] "Generic (PLEG): container finished" podID="6719fb30-da06-4964-b730-09e444618d94" containerID="915396f894c4438f6ba5e4550e4eac3083558abc334125753f1b9e7d18080e81" exitCode=1 Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.446049 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" event={"ID":"6719fb30-da06-4964-b730-09e444618d94","Type":"ContainerDied","Data":"915396f894c4438f6ba5e4550e4eac3083558abc334125753f1b9e7d18080e81"} Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.446207 4893 scope.go:117] "RemoveContainer" containerID="54a34267222e86c869cef2b23e0a1d5c65a3c2e03e0e76367a0e700a62fcb10f" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.447516 4893 scope.go:117] "RemoveContainer" containerID="915396f894c4438f6ba5e4550e4eac3083558abc334125753f1b9e7d18080e81" Jan 21 06:55:00 crc kubenswrapper[4893]: E0121 06:55:00.447907 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-qzsg6_openshift-ovn-kubernetes(6719fb30-da06-4964-b730-09e444618d94)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" podUID="6719fb30-da06-4964-b730-09e444618d94" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.455055 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.455099 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.455120 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.455147 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.455169 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:00Z","lastTransitionTime":"2026-01-21T06:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.468811 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlrc6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e26ce1b-e6f7-4612-aa11-69ad21c97870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64b144aa65cc6fbbe03fe4268155648a64e7360a0415e11a86fbc0373af5a4d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j65k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlrc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:00Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.483793 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ee491ea29d016cb1b74fc66b386aa8056d1d8b3c7ad207cf329749db2b4d638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e705e9b341a3c711cf78ffd1fde692a9517b06fcdcfc2b96543d826c72c5484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:00Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.495753 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1095483a1c6cc4597500607b4423c12c3fc03500c2f3b8f3fc5fc6eae8c34d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:00Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.508919 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:00Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.523046 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h28gn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef1a4b3d1dc6d23382f8cbbc07674981a9fb90c5068318d8f78e87b0af85b5ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc1e630d2e854e97d3e156ca2c28de365e095aaf1fe7b6779d2a6b938c51024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cc1e630d2e854e97d3e156ca2c28de365e095aaf1fe7b6779d2a6b938c51024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h28gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:00Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.533885 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-42mq5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc8e905-b368-49e8-adfa-31890665e5ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49cefc1948611ccad178b25e80e75e81bdf1b4b578d3fb58fa7c342d22debadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-42mq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:00Z is after 2025-08-24T17:21:41Z" Jan 
21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.547975 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5dc99ccba68df748aa327298285fec6936c75a3327906d9c789bf75c04815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59520d6be8547ef44262866e4c11b1ae43ae8ef545545a93c291f5e238718a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hg78p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:00Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.558304 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.558522 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.558537 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.558555 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.558568 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:00Z","lastTransitionTime":"2026-01-21T06:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.567909 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6719fb30-da06-4964-b730-09e444618d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://915396f894c4438f6ba5e4550e4eac3083558abc
334125753f1b9e7d18080e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54a34267222e86c869cef2b23e0a1d5c65a3c2e03e0e76367a0e700a62fcb10f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"message\\\":\\\"39 6118 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 06:54:57.521560 6118 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 06:54:57.521575 6118 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 06:54:57.521539 6118 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 06:54:57.522076 6118 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 06:54:57.522106 6118 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 06:54:57.522122 6118 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 06:54:57.522126 6118 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 06:54:57.522133 6118 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 06:54:57.522183 6118 factory.go:656] Stopping watch factory\\\\nI0121 06:54:57.522197 6118 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 06:54:57.522237 6118 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 06:54:57.522246 6118 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 06:54:57.522252 6118 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 06:54:57.522258 6118 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://915396f894c4438f6ba5e4550e4eac3083558abc334125753f1b9e7d18080e81\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:54:59Z\\\",\\\"message\\\":\\\"or *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 06:54:59.557008 6348 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 06:54:59.557303 6348 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0121 06:54:59.557696 6348 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 06:54:59.557709 6348 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 06:54:59.557721 6348 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 06:54:59.557726 6348 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 06:54:59.557744 6348 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 06:54:59.557788 6348 factory.go:656] Stopping watch factory\\\\nI0121 06:54:59.557809 6348 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 06:54:59.557816 6348 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 06:54:59.557823 6348 handler.go:208] Removed *v1.Namespace event 
handler 1\\\\nI0121 06:54:59.557828 6348 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 06:54:59.557834 6348 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9098cc
da4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzsg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:00Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.580305 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.580402 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.580413 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:55:00 crc kubenswrapper[4893]: E0121 06:55:00.580501 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:55:00 crc kubenswrapper[4893]: E0121 06:55:00.580641 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:55:00 crc kubenswrapper[4893]: E0121 06:55:00.580728 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.583605 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2101f59b-4610-4451-83eb-86fe80385cf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a82b561fe0d124a785d8417b0f810757464a5ccc70c032a46eb0a4ad932939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2a508699e746bc42337b9e10d1cb94b36eb53292a5ca91de2e8f03eb8f671c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf06f9b5e844685f04ee12cbf239e285f1597f6a3c6444a4160596392905c4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:54:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 06:54:40.367563 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 06:54:40.368234 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:54:40.369436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4080492758/tls.crt::/tmp/serving-cert-4080492758/tls.key\\\\\\\"\\\\nI0121 06:54:40.606405 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:54:40.609631 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:54:40.609649 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:54:40.609684 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:54:40.609691 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:54:40.617391 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 06:54:40.617410 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617413 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617418 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:54:40.617421 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:54:40.617423 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:54:40.617426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 06:54:40.617614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 06:54:40.618646 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf70c5621061fc94a32901eb6f15a0d15b2ceba333d27cf88624bf9aa4ebe82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:00Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.597092 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077e47b3-6224-4749-9710-d2b308b43208\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa06c3d835def34e52c4a9b4b87d9dc8998cdefbb5eaf7c8046bf263857ef8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e698ff120a5858fa787a65c1bdaa3966dcb8974df9cbca40470f6ec58bca5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://553f6c2b8ff41184065bcf707d657326891027d0c5b8390ce50f53cdfa654d2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30521319002f52220ec6c1e4c92862f5a81e1dcace01f4a4474e3a2441b955c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:00Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.608077 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p7vw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bace7a0-7349-45d1-a407-d64a31a0d41c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f00d5d862b54660a5df58a9c9df0b42a453f6990789e83d5e6f67aab68471665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v88cx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecad777c0a423
52ca734f5f85952ab369e5cc132f06f748983d7c11949f0fe58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v88cx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p7vw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:00Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.624417 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:00Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.636402 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:00Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.639024 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 18:15:49.01134564 +0000 UTC Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.650720 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m8k4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f4a3074a4406cdbdf07c7289f9304d66e2b84b46bf0ac9c6aadf31817539dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\"
:\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2qn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m8k4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:00Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.662185 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.662228 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.662241 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.662260 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.662272 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:00Z","lastTransitionTime":"2026-01-21T06:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.662854 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rc5gb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jprb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jprb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rc5gb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:00Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.675166 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f8eaf9a35d64680bb488050b8821c821635ec7bc1f53bdcd5bb3f5f4bfead3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:00Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.764395 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.764430 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.764438 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.764453 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.764462 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:00Z","lastTransitionTime":"2026-01-21T06:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.870014 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.870080 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.870096 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.870116 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.870126 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:00Z","lastTransitionTime":"2026-01-21T06:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.974720 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.974751 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.974759 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.974773 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:00 crc kubenswrapper[4893]: I0121 06:55:00.974782 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:00Z","lastTransitionTime":"2026-01-21T06:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.077432 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.077479 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.077496 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.077518 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.077536 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:01Z","lastTransitionTime":"2026-01-21T06:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.180763 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.180813 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.180852 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.180880 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.180898 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:01Z","lastTransitionTime":"2026-01-21T06:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.284363 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.284401 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.284411 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.284425 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.284436 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:01Z","lastTransitionTime":"2026-01-21T06:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.388011 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.388049 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.388059 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.388076 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.388087 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:01Z","lastTransitionTime":"2026-01-21T06:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.451570 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qzsg6_6719fb30-da06-4964-b730-09e444618d94/ovnkube-controller/1.log"
Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.455049 4893 scope.go:117] "RemoveContainer" containerID="915396f894c4438f6ba5e4550e4eac3083558abc334125753f1b9e7d18080e81"
Jan 21 06:55:01 crc kubenswrapper[4893]: E0121 06:55:01.455232 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-qzsg6_openshift-ovn-kubernetes(6719fb30-da06-4964-b730-09e444618d94)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" podUID="6719fb30-da06-4964-b730-09e444618d94"
Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.467537 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:01Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.486229 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h28gn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef1a4b3d1dc6d23382f8cbbc07674981a9fb90c5068318d8f78e87b0af85b5ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc1e630d2e854e97d3e156ca2c28de365e095aaf1fe7b6779d2a6b938c51024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cc1e630d2e854e97d3e156ca2c28de365e095aaf1fe7b6779d2a6b938c51024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h28gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:01Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.490566 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.490643 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.490660 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.490709 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.490727 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:01Z","lastTransitionTime":"2026-01-21T06:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.496937 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-42mq5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc8e905-b368-49e8-adfa-31890665e5ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49cefc1948611ccad178b25e80e75e81bdf1b4b578d3fb58fa7c342d22debadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-42mq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:01Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.509624 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5dc99ccba68df748aa327298285fec6936c75a3327906d9c789bf75c04815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59520d6be8547ef44262866e4c11b1ae43ae8ef545545a93c291f5e238718a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hg78p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:01Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.519430 4893 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.519470 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.519481 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.519496 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.519505 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:01Z","lastTransitionTime":"2026-01-21T06:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.528104 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6719fb30-da06-4964-b730-09e444618d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://915396f894c4438f6ba5e4550e4eac3083558abc
334125753f1b9e7d18080e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://915396f894c4438f6ba5e4550e4eac3083558abc334125753f1b9e7d18080e81\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:54:59Z\\\",\\\"message\\\":\\\"or *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 06:54:59.557008 6348 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 06:54:59.557303 6348 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0121 06:54:59.557696 6348 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 06:54:59.557709 6348 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 06:54:59.557721 6348 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 06:54:59.557726 6348 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 06:54:59.557744 6348 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 06:54:59.557788 6348 factory.go:656] Stopping watch factory\\\\nI0121 06:54:59.557809 6348 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 06:54:59.557816 6348 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 06:54:59.557823 6348 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 06:54:59.557828 6348 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 06:54:59.557834 6348 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qzsg6_openshift-ovn-kubernetes(6719fb30-da06-4964-b730-09e444618d94)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzsg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:01Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:01 crc kubenswrapper[4893]: E0121 06:55:01.531424 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"15608b71-024b-43f0-a54d-3ca7890a281b\\\",\\\"systemUUID\\\":\\\"d58a57b5-ddc5-4868-b863-d910bc33033d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:01Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.534758 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.534898 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.534984 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.535094 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.535196 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:01Z","lastTransitionTime":"2026-01-21T06:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.541659 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2101f59b-4610-4451-83eb-86fe80385cf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a82b561fe0d124a785d8417b0f810757464a5ccc70c032a46eb0a4ad932939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2a508699e746bc42337b9e10d1cb94b36eb53292a5ca91de2e8f03eb8f671c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf06f9b5e844685f04ee12cbf239e285f1597f6a3c6444a4160596392905c4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:54:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 06:54:40.367563 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 06:54:40.368234 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:54:40.369436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4080492758/tls.crt::/tmp/serving-cert-4080492758/tls.key\\\\\\\"\\\\nI0121 06:54:40.606405 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:54:40.609631 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:54:40.609649 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:54:40.609684 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:54:40.609691 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:54:40.617391 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 06:54:40.617410 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617413 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617418 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:54:40.617421 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:54:40.617423 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:54:40.617426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 06:54:40.617614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 06:54:40.618646 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf70c5621061fc94a32901eb6f15a0d15b2ceba333d27cf88624bf9aa4ebe82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:01Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:01 crc kubenswrapper[4893]: E0121 06:55:01.545647 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"15608b71-024b-43f0-a54d-3ca7890a281b\\\",\\\"systemUUID\\\":\\\"d58a57b5-ddc5-4868-b863-d910bc33033d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:01Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.549323 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.549365 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.549378 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.549394 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.549408 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:01Z","lastTransitionTime":"2026-01-21T06:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.554286 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"077e47b3-6224-4749-9710-d2b308b43208\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa06c3d835def34e52c4a9b4b87d9dc8998cdefbb5eaf7c8046bf263857ef8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e698ff120a5858fa787a65c1bdaa3966dcb8974df9cbca40470f6ec58bca5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://553f6c2b8ff41184065bcf707d657326891027d0c5b8390ce50f53cdfa654d2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30521319002f52220ec6c1e4c92862f5a81e1dcace01f4a4474e3a2441b955c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:01Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:01 crc kubenswrapper[4893]: E0121 06:55:01.560751 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"15608b71-024b-43f0-a54d-3ca7890a281b\\\",\\\"systemUUID\\\":\\\"d58a57b5-ddc5-4868-b863-d910bc33033d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:01Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.564550 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.564587 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.564602 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.564618 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.564628 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:01Z","lastTransitionTime":"2026-01-21T06:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.569490 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p7vw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bace7a0-7349-45d1-a407-d64a31a0d41c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f00d5d862b54660a5df58a9c9df0b42a453f6990789e83d5e6f67aab68471665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v88cx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecad777c0a42352ca734f5f85952ab369e5cc132f06f748983d7c11949f0fe58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v88cx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p7vw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:01Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:01 crc kubenswrapper[4893]: E0121 06:55:01.577574 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"15608b71-024b-43f0-a54d-3ca7890a281b\\\",\\\"systemUUID\\\":\\\"d58a57b5-ddc5-4868-b863-d910bc33033d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:01Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.579866 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:55:01 crc kubenswrapper[4893]: E0121 06:55:01.579967 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rc5gb" podUID="e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.583754 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.583796 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.583807 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.583830 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.583841 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:01Z","lastTransitionTime":"2026-01-21T06:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.585939 4893 scope.go:117] "RemoveContainer" containerID="e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.589174 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:01Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.600613 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:01Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:01 crc kubenswrapper[4893]: E0121 06:55:01.601250 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"15608b71-024b-43f0-a54d-3ca7890a281b\\\",\\\"systemUUID\\\":\\\"d58a57b5-ddc5-4868-b863-d910bc33033d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:01Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:01 crc kubenswrapper[4893]: E0121 06:55:01.601413 4893 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.603269 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.603297 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.603322 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.603389 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.603400 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:01Z","lastTransitionTime":"2026-01-21T06:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.613604 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m8k4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f4a3074a4406cdbdf07c7289f9304d66e2b84b46bf0ac9c6aadf31817539dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2qn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m8k4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:01Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.624234 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rc5gb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jprb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jprb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rc5gb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:01Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.636701 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f8eaf9a35d64680bb488050b8821c821635ec7bc1f53bdcd5bb3f5f4bfead3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:01Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.639388 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 04:17:34.336030668 +0000 UTC Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.647048 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlrc6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e26ce1b-e6f7-4612-aa11-69ad21c97870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64b144aa65cc6fbbe03fe4268155648a64e7360a0415e11a86fbc0373af5a4d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j65k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlrc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:01Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.648411 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8-metrics-certs\") pod \"network-metrics-daemon-rc5gb\" (UID: \"e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8\") " pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:55:01 crc kubenswrapper[4893]: E0121 06:55:01.648584 4893 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 06:55:01 crc kubenswrapper[4893]: E0121 06:55:01.648636 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8-metrics-certs podName:e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8 nodeName:}" failed. No retries permitted until 2026-01-21 06:55:05.648621106 +0000 UTC m=+46.878967008 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8-metrics-certs") pod "network-metrics-daemon-rc5gb" (UID: "e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.662923 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ee491ea29d016cb1b74fc66b386aa8056d1d8b3c7ad207cf329749db2b4d638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e705e9b341a3c711cf78ffd1fde692a9517b06fcdcfc2b96543d826c72c5484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T06:55:01Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.675523 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1095483a1c6cc4597500607b4423c12c3fc03500c2f3b8f3fc5fc6eae8c34d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:01Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.706316 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.706359 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.706391 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.706411 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.706424 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:01Z","lastTransitionTime":"2026-01-21T06:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.809919 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.809972 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.809983 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.810001 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.810011 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:01Z","lastTransitionTime":"2026-01-21T06:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.913818 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.913877 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.913890 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.913916 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:01 crc kubenswrapper[4893]: I0121 06:55:01.913928 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:01Z","lastTransitionTime":"2026-01-21T06:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.017226 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.017306 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.017320 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.017339 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.017375 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:02Z","lastTransitionTime":"2026-01-21T06:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.120760 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.120831 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.120862 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.120880 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.120890 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:02Z","lastTransitionTime":"2026-01-21T06:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.224508 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.224608 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.224633 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.224738 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.224766 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:02Z","lastTransitionTime":"2026-01-21T06:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.328210 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.328302 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.328320 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.328342 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.328352 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:02Z","lastTransitionTime":"2026-01-21T06:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.430983 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.431044 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.431053 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.431077 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.431086 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:02Z","lastTransitionTime":"2026-01-21T06:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.459886 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.465203 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c2417cb0495ebd48a0bf9f8e46971fdbd70fd7e7c312741cead38fec69d1d972"} Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.466408 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.487080 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f8eaf9a35d64680bb488050b8821c821635ec7bc1f53bdcd5bb3f5f4bfead3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:02Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.508002 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:02Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.528000 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m8k4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f4a3074a4406cdbdf07c7289f9304d66e2b84b46bf0ac9c6aadf31817539dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-l
ib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2qn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m8k4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:02Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.534860 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.534910 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.534921 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.534967 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.534979 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:02Z","lastTransitionTime":"2026-01-21T06:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.544939 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rc5gb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jprb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jprb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rc5gb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:02Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.559976 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1095483a1c6cc4597500607b4423c12c3fc03500c2f3b8f3fc5fc6eae8c34d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:02Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.572988 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlrc6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e26ce1b-e6f7-4612-aa11-69ad21c97870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64b144aa65cc6fbbe03fe4268155648a64e7360a0415e11a86fbc0373af5a4d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j65k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlrc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:02Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.580129 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.580162 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:55:02 crc kubenswrapper[4893]: E0121 06:55:02.580248 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.580146 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:55:02 crc kubenswrapper[4893]: E0121 06:55:02.580398 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:55:02 crc kubenswrapper[4893]: E0121 06:55:02.580507 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.590019 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ee491ea29d016cb1b74fc66b386aa8056d1d8b3c7ad207cf329749db2b4d638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e705e9b341a3c711cf78ffd1fde692a9517b06fcdcfc2b96543d826c72c5484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/
\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:02Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.609608 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"077e47b3-6224-4749-9710-d2b308b43208\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa06c3d835def34e52c4a9b4b87d9dc8998cdefbb5eaf7c8046bf263857ef8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e698ff120a5858fa787a65c1bdaa3966dcb8974df9cbca40470f6ec58bca5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5
53f6c2b8ff41184065bcf707d657326891027d0c5b8390ce50f53cdfa654d2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30521319002f52220ec6c1e4c92862f5a81e1dcace01f4a4474e3a2441b955c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:02Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.637716 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.637784 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.637795 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.637818 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.637838 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:02Z","lastTransitionTime":"2026-01-21T06:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.639200 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:02Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.639513 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 02:28:35.737478537 +0000 UTC Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.664007 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h28gn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef1a4b3d1dc6d23382f8cbbc07674981a9fb90c5068318d8f78e87b0af85b5ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc1e630d2e854e97d3e156ca2c28de365e095aaf1fe7b6779d2a6b938c51024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cc1e630d2e854e97d3e156ca2c28de365e095aaf1fe7b6779d2a6b938c51024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h28gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:02Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.678958 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-42mq5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc8e905-b368-49e8-adfa-31890665e5ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49cefc1948611ccad178b25e80e75e81bdf1b4b578d3fb58fa7c342d22debadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-42mq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:02Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.695731 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5dc99ccba68df748aa327298285fec6936c75a3327906d9c789bf75c04815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59520d6be8547ef44262866e4c11b1ae43ae8ef545545a93c291f5e238718a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hg78p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:02Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.715769 4893 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6719fb30-da06-4964-b730-09e444618d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://915396f894c4438f6ba5e4550e4eac3083558abc334125753f1b9e7d18080e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://915396f894c4438f6ba5e4550e4eac3083558abc334125753f1b9e7d18080e81\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:54:59Z\\\",\\\"message\\\":\\\"or *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 06:54:59.557008 6348 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 06:54:59.557303 6348 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0121 06:54:59.557696 6348 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 06:54:59.557709 6348 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 06:54:59.557721 6348 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 06:54:59.557726 6348 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 06:54:59.557744 6348 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 06:54:59.557788 6348 factory.go:656] Stopping watch factory\\\\nI0121 06:54:59.557809 6348 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 06:54:59.557816 6348 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 06:54:59.557823 6348 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 06:54:59.557828 6348 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 06:54:59.557834 6348 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qzsg6_openshift-ovn-kubernetes(6719fb30-da06-4964-b730-09e444618d94)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzsg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:02Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.730655 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2101f59b-4610-4451-83eb-86fe80385cf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a82b561fe0d124a785d8417b0f810757464a5ccc70c032a46eb0a4ad932939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2a508699e746bc42337b9e10d1cb94b36eb53292a5ca91de2e8f03eb8f671c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf06f9b5e844685f04ee12cbf239e285f1597f6a3c6444a4160596392905c4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2417cb0495ebd48a0bf9f8e46971fdbd70fd7e7c312741cead38fec69d1d972\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:54:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 06:54:40.367563 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 06:54:40.368234 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:54:40.369436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4080492758/tls.crt::/tmp/serving-cert-4080492758/tls.key\\\\\\\"\\\\nI0121 06:54:40.606405 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:54:40.609631 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:54:40.609649 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:54:40.609684 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:54:40.609691 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:54:40.617391 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 06:54:40.617410 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617413 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617418 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:54:40.617421 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:54:40.617423 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:54:40.617426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 06:54:40.617614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 06:54:40.618646 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf70c5621061fc94a32901eb6f15a0d15b2ceba333d27cf88624bf9aa4ebe82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:02Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.741110 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.741185 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.741199 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.741221 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.741237 4893 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:02Z","lastTransitionTime":"2026-01-21T06:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.742423 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p7vw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bace7a0-7349-45d1-a407-d64a31a0d41c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f00d5d862b54660a5df58a9c9df0b42a453f6990789e83d5e6f67aab68471665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v88cx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecad777c0a42352ca734f5f85952ab369e5cc132f06f748983d7c11949f0fe58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v88cx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p7vw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:02Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.756822 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:02Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.844240 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.844324 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.844337 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.844353 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.844364 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:02Z","lastTransitionTime":"2026-01-21T06:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.948032 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.948101 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.948122 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.948149 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:02 crc kubenswrapper[4893]: I0121 06:55:02.948171 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:02Z","lastTransitionTime":"2026-01-21T06:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:03 crc kubenswrapper[4893]: I0121 06:55:03.051398 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:03 crc kubenswrapper[4893]: I0121 06:55:03.051461 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:03 crc kubenswrapper[4893]: I0121 06:55:03.051484 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:03 crc kubenswrapper[4893]: I0121 06:55:03.051513 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:03 crc kubenswrapper[4893]: I0121 06:55:03.051536 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:03Z","lastTransitionTime":"2026-01-21T06:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:03 crc kubenswrapper[4893]: I0121 06:55:03.155057 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:03 crc kubenswrapper[4893]: I0121 06:55:03.155097 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:03 crc kubenswrapper[4893]: I0121 06:55:03.155106 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:03 crc kubenswrapper[4893]: I0121 06:55:03.155125 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:03 crc kubenswrapper[4893]: I0121 06:55:03.155136 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:03Z","lastTransitionTime":"2026-01-21T06:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:03 crc kubenswrapper[4893]: I0121 06:55:03.258287 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:03 crc kubenswrapper[4893]: I0121 06:55:03.258323 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:03 crc kubenswrapper[4893]: I0121 06:55:03.258331 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:03 crc kubenswrapper[4893]: I0121 06:55:03.258347 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:03 crc kubenswrapper[4893]: I0121 06:55:03.258356 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:03Z","lastTransitionTime":"2026-01-21T06:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:03 crc kubenswrapper[4893]: I0121 06:55:03.361233 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:03 crc kubenswrapper[4893]: I0121 06:55:03.361280 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:03 crc kubenswrapper[4893]: I0121 06:55:03.361295 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:03 crc kubenswrapper[4893]: I0121 06:55:03.361313 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:03 crc kubenswrapper[4893]: I0121 06:55:03.361327 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:03Z","lastTransitionTime":"2026-01-21T06:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:03 crc kubenswrapper[4893]: I0121 06:55:03.463827 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:03 crc kubenswrapper[4893]: I0121 06:55:03.463901 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:03 crc kubenswrapper[4893]: I0121 06:55:03.463929 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:03 crc kubenswrapper[4893]: I0121 06:55:03.463968 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:03 crc kubenswrapper[4893]: I0121 06:55:03.463994 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:03Z","lastTransitionTime":"2026-01-21T06:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:03 crc kubenswrapper[4893]: I0121 06:55:03.566633 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:03 crc kubenswrapper[4893]: I0121 06:55:03.566699 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:03 crc kubenswrapper[4893]: I0121 06:55:03.566711 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:03 crc kubenswrapper[4893]: I0121 06:55:03.566730 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:03 crc kubenswrapper[4893]: I0121 06:55:03.566742 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:03Z","lastTransitionTime":"2026-01-21T06:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:03 crc kubenswrapper[4893]: I0121 06:55:03.590608 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:55:03 crc kubenswrapper[4893]: E0121 06:55:03.591008 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rc5gb" podUID="e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8" Jan 21 06:55:03 crc kubenswrapper[4893]: I0121 06:55:03.640304 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 13:20:13.662711313 +0000 UTC Jan 21 06:55:03 crc kubenswrapper[4893]: I0121 06:55:03.670299 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:03 crc kubenswrapper[4893]: I0121 06:55:03.670349 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:03 crc kubenswrapper[4893]: I0121 06:55:03.670360 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:03 crc kubenswrapper[4893]: I0121 06:55:03.670377 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:03 crc kubenswrapper[4893]: I0121 06:55:03.670403 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:03Z","lastTransitionTime":"2026-01-21T06:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:03 crc kubenswrapper[4893]: I0121 06:55:03.773071 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:03 crc kubenswrapper[4893]: I0121 06:55:03.773147 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:03 crc kubenswrapper[4893]: I0121 06:55:03.773171 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:03 crc kubenswrapper[4893]: I0121 06:55:03.773199 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:03 crc kubenswrapper[4893]: I0121 06:55:03.773216 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:03Z","lastTransitionTime":"2026-01-21T06:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:03 crc kubenswrapper[4893]: I0121 06:55:03.876837 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:03 crc kubenswrapper[4893]: I0121 06:55:03.877150 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:03 crc kubenswrapper[4893]: I0121 06:55:03.877246 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:03 crc kubenswrapper[4893]: I0121 06:55:03.877327 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:03 crc kubenswrapper[4893]: I0121 06:55:03.877400 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:03Z","lastTransitionTime":"2026-01-21T06:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:03 crc kubenswrapper[4893]: I0121 06:55:03.981303 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:03 crc kubenswrapper[4893]: I0121 06:55:03.981441 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:03 crc kubenswrapper[4893]: I0121 06:55:03.981458 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:03 crc kubenswrapper[4893]: I0121 06:55:03.981485 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:03 crc kubenswrapper[4893]: I0121 06:55:03.981502 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:03Z","lastTransitionTime":"2026-01-21T06:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:04 crc kubenswrapper[4893]: I0121 06:55:04.084444 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:04 crc kubenswrapper[4893]: I0121 06:55:04.084514 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:04 crc kubenswrapper[4893]: I0121 06:55:04.084530 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:04 crc kubenswrapper[4893]: I0121 06:55:04.084556 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:04 crc kubenswrapper[4893]: I0121 06:55:04.084575 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:04Z","lastTransitionTime":"2026-01-21T06:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:04 crc kubenswrapper[4893]: I0121 06:55:04.187282 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:04 crc kubenswrapper[4893]: I0121 06:55:04.187357 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:04 crc kubenswrapper[4893]: I0121 06:55:04.187375 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:04 crc kubenswrapper[4893]: I0121 06:55:04.187398 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:04 crc kubenswrapper[4893]: I0121 06:55:04.187623 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:04Z","lastTransitionTime":"2026-01-21T06:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:04 crc kubenswrapper[4893]: I0121 06:55:04.291987 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:04 crc kubenswrapper[4893]: I0121 06:55:04.292047 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:04 crc kubenswrapper[4893]: I0121 06:55:04.292064 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:04 crc kubenswrapper[4893]: I0121 06:55:04.292097 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:04 crc kubenswrapper[4893]: I0121 06:55:04.292112 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:04Z","lastTransitionTime":"2026-01-21T06:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:04 crc kubenswrapper[4893]: I0121 06:55:04.395894 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:04 crc kubenswrapper[4893]: I0121 06:55:04.395950 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:04 crc kubenswrapper[4893]: I0121 06:55:04.395961 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:04 crc kubenswrapper[4893]: I0121 06:55:04.395976 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:04 crc kubenswrapper[4893]: I0121 06:55:04.395986 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:04Z","lastTransitionTime":"2026-01-21T06:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:04 crc kubenswrapper[4893]: I0121 06:55:04.499033 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:04 crc kubenswrapper[4893]: I0121 06:55:04.499097 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:04 crc kubenswrapper[4893]: I0121 06:55:04.499108 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:04 crc kubenswrapper[4893]: I0121 06:55:04.499129 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:04 crc kubenswrapper[4893]: I0121 06:55:04.499143 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:04Z","lastTransitionTime":"2026-01-21T06:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:04 crc kubenswrapper[4893]: I0121 06:55:04.580458 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:55:04 crc kubenswrapper[4893]: I0121 06:55:04.580529 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:55:04 crc kubenswrapper[4893]: I0121 06:55:04.580534 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:55:04 crc kubenswrapper[4893]: E0121 06:55:04.580627 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:55:04 crc kubenswrapper[4893]: E0121 06:55:04.581016 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:55:04 crc kubenswrapper[4893]: E0121 06:55:04.581130 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:55:04 crc kubenswrapper[4893]: I0121 06:55:04.602591 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:04 crc kubenswrapper[4893]: I0121 06:55:04.602633 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:04 crc kubenswrapper[4893]: I0121 06:55:04.602642 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:04 crc kubenswrapper[4893]: I0121 06:55:04.602657 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:04 crc kubenswrapper[4893]: I0121 06:55:04.602699 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:04Z","lastTransitionTime":"2026-01-21T06:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:04 crc kubenswrapper[4893]: I0121 06:55:04.640861 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 02:02:03.040383071 +0000 UTC Jan 21 06:55:04 crc kubenswrapper[4893]: I0121 06:55:04.706413 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:04 crc kubenswrapper[4893]: I0121 06:55:04.706476 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:04 crc kubenswrapper[4893]: I0121 06:55:04.706493 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:04 crc kubenswrapper[4893]: I0121 06:55:04.706516 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:04 crc kubenswrapper[4893]: I0121 06:55:04.706533 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:04Z","lastTransitionTime":"2026-01-21T06:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:04 crc kubenswrapper[4893]: I0121 06:55:04.810270 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:04 crc kubenswrapper[4893]: I0121 06:55:04.810731 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:04 crc kubenswrapper[4893]: I0121 06:55:04.810941 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:04 crc kubenswrapper[4893]: I0121 06:55:04.811180 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:04 crc kubenswrapper[4893]: I0121 06:55:04.811433 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:04Z","lastTransitionTime":"2026-01-21T06:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:04 crc kubenswrapper[4893]: I0121 06:55:04.915432 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:04 crc kubenswrapper[4893]: I0121 06:55:04.915507 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:04 crc kubenswrapper[4893]: I0121 06:55:04.915529 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:04 crc kubenswrapper[4893]: I0121 06:55:04.915558 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:04 crc kubenswrapper[4893]: I0121 06:55:04.915580 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:04Z","lastTransitionTime":"2026-01-21T06:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:05 crc kubenswrapper[4893]: I0121 06:55:05.018587 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:05 crc kubenswrapper[4893]: I0121 06:55:05.018630 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:05 crc kubenswrapper[4893]: I0121 06:55:05.018643 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:05 crc kubenswrapper[4893]: I0121 06:55:05.018658 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:05 crc kubenswrapper[4893]: I0121 06:55:05.018687 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:05Z","lastTransitionTime":"2026-01-21T06:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:05 crc kubenswrapper[4893]: I0121 06:55:05.120807 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:05 crc kubenswrapper[4893]: I0121 06:55:05.120844 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:05 crc kubenswrapper[4893]: I0121 06:55:05.120853 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:05 crc kubenswrapper[4893]: I0121 06:55:05.120867 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:05 crc kubenswrapper[4893]: I0121 06:55:05.120876 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:05Z","lastTransitionTime":"2026-01-21T06:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:05 crc kubenswrapper[4893]: I0121 06:55:05.223587 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:05 crc kubenswrapper[4893]: I0121 06:55:05.223637 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:05 crc kubenswrapper[4893]: I0121 06:55:05.223650 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:05 crc kubenswrapper[4893]: I0121 06:55:05.223694 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:05 crc kubenswrapper[4893]: I0121 06:55:05.223708 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:05Z","lastTransitionTime":"2026-01-21T06:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:05 crc kubenswrapper[4893]: I0121 06:55:05.326311 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:05 crc kubenswrapper[4893]: I0121 06:55:05.326355 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:05 crc kubenswrapper[4893]: I0121 06:55:05.326367 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:05 crc kubenswrapper[4893]: I0121 06:55:05.326384 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:05 crc kubenswrapper[4893]: I0121 06:55:05.326394 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:05Z","lastTransitionTime":"2026-01-21T06:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:05 crc kubenswrapper[4893]: I0121 06:55:05.428409 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:05 crc kubenswrapper[4893]: I0121 06:55:05.428448 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:05 crc kubenswrapper[4893]: I0121 06:55:05.428459 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:05 crc kubenswrapper[4893]: I0121 06:55:05.428476 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:05 crc kubenswrapper[4893]: I0121 06:55:05.428486 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:05Z","lastTransitionTime":"2026-01-21T06:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:05 crc kubenswrapper[4893]: I0121 06:55:05.531770 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:05 crc kubenswrapper[4893]: I0121 06:55:05.531847 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:05 crc kubenswrapper[4893]: I0121 06:55:05.531866 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:05 crc kubenswrapper[4893]: I0121 06:55:05.531892 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:05 crc kubenswrapper[4893]: I0121 06:55:05.531911 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:05Z","lastTransitionTime":"2026-01-21T06:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:05 crc kubenswrapper[4893]: I0121 06:55:05.580662 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:55:05 crc kubenswrapper[4893]: E0121 06:55:05.581085 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rc5gb" podUID="e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8" Jan 21 06:55:05 crc kubenswrapper[4893]: I0121 06:55:05.634737 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:05 crc kubenswrapper[4893]: I0121 06:55:05.634861 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:05 crc kubenswrapper[4893]: I0121 06:55:05.634885 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:05 crc kubenswrapper[4893]: I0121 06:55:05.634910 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:05 crc kubenswrapper[4893]: I0121 06:55:05.634928 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:05Z","lastTransitionTime":"2026-01-21T06:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:05 crc kubenswrapper[4893]: I0121 06:55:05.641029 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 12:34:09.407283685 +0000 UTC Jan 21 06:55:05 crc kubenswrapper[4893]: I0121 06:55:05.693001 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8-metrics-certs\") pod \"network-metrics-daemon-rc5gb\" (UID: \"e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8\") " pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:55:05 crc kubenswrapper[4893]: E0121 06:55:05.693441 4893 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 06:55:05 crc kubenswrapper[4893]: E0121 06:55:05.693717 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8-metrics-certs podName:e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8 nodeName:}" failed. No retries permitted until 2026-01-21 06:55:13.693622186 +0000 UTC m=+54.923968158 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8-metrics-certs") pod "network-metrics-daemon-rc5gb" (UID: "e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 06:55:05 crc kubenswrapper[4893]: I0121 06:55:05.737650 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:05 crc kubenswrapper[4893]: I0121 06:55:05.737748 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:05 crc kubenswrapper[4893]: I0121 06:55:05.737772 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:05 crc kubenswrapper[4893]: I0121 06:55:05.737801 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:05 crc kubenswrapper[4893]: I0121 06:55:05.737822 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:05Z","lastTransitionTime":"2026-01-21T06:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:05 crc kubenswrapper[4893]: I0121 06:55:05.841285 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:05 crc kubenswrapper[4893]: I0121 06:55:05.841349 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:05 crc kubenswrapper[4893]: I0121 06:55:05.841367 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:05 crc kubenswrapper[4893]: I0121 06:55:05.841392 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:05 crc kubenswrapper[4893]: I0121 06:55:05.841409 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:05Z","lastTransitionTime":"2026-01-21T06:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:05 crc kubenswrapper[4893]: I0121 06:55:05.945068 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:05 crc kubenswrapper[4893]: I0121 06:55:05.945186 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:05 crc kubenswrapper[4893]: I0121 06:55:05.945209 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:05 crc kubenswrapper[4893]: I0121 06:55:05.945234 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:05 crc kubenswrapper[4893]: I0121 06:55:05.945254 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:05Z","lastTransitionTime":"2026-01-21T06:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:06 crc kubenswrapper[4893]: I0121 06:55:06.048187 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:06 crc kubenswrapper[4893]: I0121 06:55:06.048250 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:06 crc kubenswrapper[4893]: I0121 06:55:06.048277 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:06 crc kubenswrapper[4893]: I0121 06:55:06.048310 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:06 crc kubenswrapper[4893]: I0121 06:55:06.048334 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:06Z","lastTransitionTime":"2026-01-21T06:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:06 crc kubenswrapper[4893]: I0121 06:55:06.151268 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:06 crc kubenswrapper[4893]: I0121 06:55:06.151344 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:06 crc kubenswrapper[4893]: I0121 06:55:06.151367 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:06 crc kubenswrapper[4893]: I0121 06:55:06.151397 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:06 crc kubenswrapper[4893]: I0121 06:55:06.151419 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:06Z","lastTransitionTime":"2026-01-21T06:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:06 crc kubenswrapper[4893]: I0121 06:55:06.254508 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:06 crc kubenswrapper[4893]: I0121 06:55:06.254559 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:06 crc kubenswrapper[4893]: I0121 06:55:06.254577 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:06 crc kubenswrapper[4893]: I0121 06:55:06.254601 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:06 crc kubenswrapper[4893]: I0121 06:55:06.254622 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:06Z","lastTransitionTime":"2026-01-21T06:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:06 crc kubenswrapper[4893]: I0121 06:55:06.356837 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:06 crc kubenswrapper[4893]: I0121 06:55:06.356892 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:06 crc kubenswrapper[4893]: I0121 06:55:06.356903 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:06 crc kubenswrapper[4893]: I0121 06:55:06.356920 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:06 crc kubenswrapper[4893]: I0121 06:55:06.356935 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:06Z","lastTransitionTime":"2026-01-21T06:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:06 crc kubenswrapper[4893]: I0121 06:55:06.551165 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:06 crc kubenswrapper[4893]: I0121 06:55:06.551225 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:06 crc kubenswrapper[4893]: I0121 06:55:06.551235 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:06 crc kubenswrapper[4893]: I0121 06:55:06.551249 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:06 crc kubenswrapper[4893]: I0121 06:55:06.551258 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:06Z","lastTransitionTime":"2026-01-21T06:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:06 crc kubenswrapper[4893]: I0121 06:55:06.580704 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:55:06 crc kubenswrapper[4893]: I0121 06:55:06.580738 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:55:06 crc kubenswrapper[4893]: I0121 06:55:06.580833 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:55:06 crc kubenswrapper[4893]: E0121 06:55:06.580976 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:55:06 crc kubenswrapper[4893]: E0121 06:55:06.581085 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:55:06 crc kubenswrapper[4893]: E0121 06:55:06.581170 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:55:06 crc kubenswrapper[4893]: I0121 06:55:06.641847 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 08:55:26.192006056 +0000 UTC Jan 21 06:55:06 crc kubenswrapper[4893]: I0121 06:55:06.654978 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:06 crc kubenswrapper[4893]: I0121 06:55:06.655019 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:06 crc kubenswrapper[4893]: I0121 06:55:06.655028 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:06 crc kubenswrapper[4893]: I0121 06:55:06.655046 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:06 crc kubenswrapper[4893]: I0121 06:55:06.655055 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:06Z","lastTransitionTime":"2026-01-21T06:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 21 06:55:06 crc kubenswrapper[4893]: I0121 06:55:06.758024 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:55:06 crc kubenswrapper[4893]: I0121 06:55:06.758070 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:55:06 crc kubenswrapper[4893]: I0121 06:55:06.758082 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:55:06 crc kubenswrapper[4893]: I0121 06:55:06.758380 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:55:06 crc kubenswrapper[4893]: I0121 06:55:06.758413 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:06Z","lastTransitionTime":"2026-01-21T06:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:55:06 crc kubenswrapper[4893]: I0121 06:55:06.861592 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:55:06 crc kubenswrapper[4893]: I0121 06:55:06.861664 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:55:06 crc kubenswrapper[4893]: I0121 06:55:06.861765 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:55:06 crc kubenswrapper[4893]: I0121 06:55:06.861796 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:55:06 crc kubenswrapper[4893]: I0121 06:55:06.861820 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:06Z","lastTransitionTime":"2026-01-21T06:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:55:06 crc kubenswrapper[4893]: I0121 06:55:06.965513 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:55:06 crc kubenswrapper[4893]: I0121 06:55:06.965558 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:55:06 crc kubenswrapper[4893]: I0121 06:55:06.965566 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:55:06 crc kubenswrapper[4893]: I0121 06:55:06.965583 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:55:06 crc kubenswrapper[4893]: I0121 06:55:06.965597 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:06Z","lastTransitionTime":"2026-01-21T06:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:07 crc kubenswrapper[4893]: I0121 06:55:07.068461 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:07 crc kubenswrapper[4893]: I0121 06:55:07.068537 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:07 crc kubenswrapper[4893]: I0121 06:55:07.068560 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:07 crc kubenswrapper[4893]: I0121 06:55:07.068590 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:07 crc kubenswrapper[4893]: I0121 06:55:07.068634 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:07Z","lastTransitionTime":"2026-01-21T06:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:07 crc kubenswrapper[4893]: I0121 06:55:07.171214 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:07 crc kubenswrapper[4893]: I0121 06:55:07.171308 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:07 crc kubenswrapper[4893]: I0121 06:55:07.171325 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:07 crc kubenswrapper[4893]: I0121 06:55:07.171357 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:07 crc kubenswrapper[4893]: I0121 06:55:07.171375 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:07Z","lastTransitionTime":"2026-01-21T06:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:07 crc kubenswrapper[4893]: I0121 06:55:07.274446 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:07 crc kubenswrapper[4893]: I0121 06:55:07.274517 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:07 crc kubenswrapper[4893]: I0121 06:55:07.274527 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:07 crc kubenswrapper[4893]: I0121 06:55:07.274617 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:07 crc kubenswrapper[4893]: I0121 06:55:07.274631 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:07Z","lastTransitionTime":"2026-01-21T06:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 21 06:55:07 crc kubenswrapper[4893]: I0121 06:55:07.378549 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:55:07 crc kubenswrapper[4893]: I0121 06:55:07.378623 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:55:07 crc kubenswrapper[4893]: I0121 06:55:07.378647 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:55:07 crc kubenswrapper[4893]: I0121 06:55:07.378724 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:55:07 crc kubenswrapper[4893]: I0121 06:55:07.378755 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:07Z","lastTransitionTime":"2026-01-21T06:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:55:07 crc kubenswrapper[4893]: I0121 06:55:07.481597 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:55:07 crc kubenswrapper[4893]: I0121 06:55:07.481639 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:55:07 crc kubenswrapper[4893]: I0121 06:55:07.481690 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:55:07 crc kubenswrapper[4893]: I0121 06:55:07.481710 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:55:07 crc kubenswrapper[4893]: I0121 06:55:07.481736 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:07Z","lastTransitionTime":"2026-01-21T06:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:55:07 crc kubenswrapper[4893]: I0121 06:55:07.580556 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rc5gb"
Jan 21 06:55:07 crc kubenswrapper[4893]: E0121 06:55:07.581117 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rc5gb" podUID="e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8"
Jan 21 06:55:07 crc kubenswrapper[4893]: I0121 06:55:07.584377 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:55:07 crc kubenswrapper[4893]: I0121 06:55:07.584433 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:55:07 crc kubenswrapper[4893]: I0121 06:55:07.584476 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:55:07 crc kubenswrapper[4893]: I0121 06:55:07.584510 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:55:07 crc kubenswrapper[4893]: I0121 06:55:07.584533 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:07Z","lastTransitionTime":"2026-01-21T06:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:55:07 crc kubenswrapper[4893]: I0121 06:55:07.642441 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 09:01:22.643258778 +0000 UTC
Jan 21 06:55:07 crc kubenswrapper[4893]: I0121 06:55:07.687250 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:55:07 crc kubenswrapper[4893]: I0121 06:55:07.687332 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:55:07 crc kubenswrapper[4893]: I0121 06:55:07.687359 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:55:07 crc kubenswrapper[4893]: I0121 06:55:07.687390 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:55:07 crc kubenswrapper[4893]: I0121 06:55:07.687425 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:07Z","lastTransitionTime":"2026-01-21T06:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:55:07 crc kubenswrapper[4893]: I0121 06:55:07.790646 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:55:07 crc kubenswrapper[4893]: I0121 06:55:07.790703 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:55:07 crc kubenswrapper[4893]: I0121 06:55:07.790715 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:55:07 crc kubenswrapper[4893]: I0121 06:55:07.790733 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:55:07 crc kubenswrapper[4893]: I0121 06:55:07.790744 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:07Z","lastTransitionTime":"2026-01-21T06:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:55:07 crc kubenswrapper[4893]: I0121 06:55:07.988855 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:55:07 crc kubenswrapper[4893]: I0121 06:55:07.988912 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:55:07 crc kubenswrapper[4893]: I0121 06:55:07.988927 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:55:07 crc kubenswrapper[4893]: I0121 06:55:07.988949 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:55:07 crc kubenswrapper[4893]: I0121 06:55:07.988965 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:07Z","lastTransitionTime":"2026-01-21T06:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:55:08 crc kubenswrapper[4893]: I0121 06:55:08.091805 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:55:08 crc kubenswrapper[4893]: I0121 06:55:08.091848 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:55:08 crc kubenswrapper[4893]: I0121 06:55:08.091860 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:55:08 crc kubenswrapper[4893]: I0121 06:55:08.091877 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:55:08 crc kubenswrapper[4893]: I0121 06:55:08.091886 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:08Z","lastTransitionTime":"2026-01-21T06:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:08 crc kubenswrapper[4893]: I0121 06:55:08.195243 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:08 crc kubenswrapper[4893]: I0121 06:55:08.195309 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:08 crc kubenswrapper[4893]: I0121 06:55:08.195321 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:08 crc kubenswrapper[4893]: I0121 06:55:08.195340 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:08 crc kubenswrapper[4893]: I0121 06:55:08.195352 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:08Z","lastTransitionTime":"2026-01-21T06:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:08 crc kubenswrapper[4893]: I0121 06:55:08.298625 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:08 crc kubenswrapper[4893]: I0121 06:55:08.298700 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:08 crc kubenswrapper[4893]: I0121 06:55:08.298710 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:08 crc kubenswrapper[4893]: I0121 06:55:08.298726 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:08 crc kubenswrapper[4893]: I0121 06:55:08.298735 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:08Z","lastTransitionTime":"2026-01-21T06:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:08 crc kubenswrapper[4893]: I0121 06:55:08.401803 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:08 crc kubenswrapper[4893]: I0121 06:55:08.401861 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:08 crc kubenswrapper[4893]: I0121 06:55:08.401875 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:08 crc kubenswrapper[4893]: I0121 06:55:08.401896 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:08 crc kubenswrapper[4893]: I0121 06:55:08.401908 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:08Z","lastTransitionTime":"2026-01-21T06:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 21 06:55:08 crc kubenswrapper[4893]: I0121 06:55:08.504837 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:55:08 crc kubenswrapper[4893]: I0121 06:55:08.504892 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:55:08 crc kubenswrapper[4893]: I0121 06:55:08.504912 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:55:08 crc kubenswrapper[4893]: I0121 06:55:08.504934 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:55:08 crc kubenswrapper[4893]: I0121 06:55:08.504949 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:08Z","lastTransitionTime":"2026-01-21T06:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:55:08 crc kubenswrapper[4893]: I0121 06:55:08.580644 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 06:55:08 crc kubenswrapper[4893]: I0121 06:55:08.580711 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 06:55:08 crc kubenswrapper[4893]: I0121 06:55:08.580711 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 06:55:08 crc kubenswrapper[4893]: E0121 06:55:08.580807 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 06:55:08 crc kubenswrapper[4893]: E0121 06:55:08.580916 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 06:55:08 crc kubenswrapper[4893]: E0121 06:55:08.581045 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
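All four stuck pods fail to sync for the same root cause the Ready condition keeps citing: nothing has written a CNI configuration yet. A quick on-node check of the directory named in the message, with the path taken verbatim from the log:

    import os

    # The directory the kubelet says has no CNI configuration, per the log.
    cni_dir = "/etc/kubernetes/cni/net.d"
    if not os.path.isdir(cni_dir) or not os.listdir(cni_dir):
        print(f"{cni_dir}: missing or empty -- the network plugin has not started")
    else:
        for name in sorted(os.listdir(cni_dir)):
            print(os.path.join(cni_dir, name))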
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:55:08 crc kubenswrapper[4893]: I0121 06:55:08.607199 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:08 crc kubenswrapper[4893]: I0121 06:55:08.607238 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:08 crc kubenswrapper[4893]: I0121 06:55:08.607247 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:08 crc kubenswrapper[4893]: I0121 06:55:08.607276 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:08 crc kubenswrapper[4893]: I0121 06:55:08.607284 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:08Z","lastTransitionTime":"2026-01-21T06:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:08 crc kubenswrapper[4893]: I0121 06:55:08.643780 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 01:31:51.351585665 +0000 UTC Jan 21 06:55:08 crc kubenswrapper[4893]: I0121 06:55:08.710122 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:08 crc kubenswrapper[4893]: I0121 06:55:08.710175 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:08 crc kubenswrapper[4893]: I0121 06:55:08.710184 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:08 crc kubenswrapper[4893]: I0121 06:55:08.710198 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:08 crc kubenswrapper[4893]: I0121 06:55:08.710208 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:08Z","lastTransitionTime":"2026-01-21T06:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:08 crc kubenswrapper[4893]: I0121 06:55:08.812990 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:08 crc kubenswrapper[4893]: I0121 06:55:08.813043 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:08 crc kubenswrapper[4893]: I0121 06:55:08.813056 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:08 crc kubenswrapper[4893]: I0121 06:55:08.813075 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:08 crc kubenswrapper[4893]: I0121 06:55:08.813087 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:08Z","lastTransitionTime":"2026-01-21T06:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.016353 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.016391 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.016403 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.016418 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.016429 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:09Z","lastTransitionTime":"2026-01-21T06:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.118864 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.118910 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.118928 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.118952 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.118971 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:09Z","lastTransitionTime":"2026-01-21T06:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.220921 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.220990 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.221007 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.221031 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.221048 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:09Z","lastTransitionTime":"2026-01-21T06:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.326231 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.326288 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.326300 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.326318 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.326329 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:09Z","lastTransitionTime":"2026-01-21T06:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.428816 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.428898 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.428924 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.428952 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.428974 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:09Z","lastTransitionTime":"2026-01-21T06:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.532328 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.532394 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.532411 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.532437 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.532456 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:09Z","lastTransitionTime":"2026-01-21T06:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.579959 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:55:09 crc kubenswrapper[4893]: E0121 06:55:09.580392 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rc5gb" podUID="e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8" Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.598091 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f8eaf9a35d64680bb488050b8821c821635ec7bc1f53bdcd5bb3f5f4bfead3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:09Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.614531 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:09Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.631234 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m8k4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f4a3074a4406cdbdf07c7289f9304d66e2b84b46bf0ac9c6aadf31817539dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-l
ib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2qn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m8k4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:09Z is after 2025-08-24T17:21:41Z"
Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.635440 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.635483 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.635523 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.635541 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.635553 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:09Z","lastTransitionTime":"2026-01-21T06:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
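These status_manager failures are a second, independent fault: every status PATCH is rejected because the pod.network-node-identity.openshift.io webhook at 127.0.0.1:9743 presents a serving certificate that expired 2025-08-24T17:21:41Z, while the node clock reads 2026-01-21. A sketch that pulls the presented certificate and prints its validity window; the endpoint is taken from the failing Post URL, the cryptography package is assumed to be installed, and no chain verification is performed (which is what lets us read an expired certificate):

    import ssl
    from cryptography import x509

    # Webhook endpoint from the log's failing Post URL.
    pem = ssl.get_server_certificate(("127.0.0.1", 9743))
    cert = x509.load_pem_x509_certificate(pem.encode())
    print("not before:", cert.not_valid_before)
    print("not after: ", cert.not_valid_after)  # log shows 2025-08-24T17:21:41Z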
Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.643896 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 22:52:31.220339691 +0000 UTC
Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.646612 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rc5gb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jprb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jprb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rc5gb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:09Z is after 2025-08-24T17:21:41Z"
Jan 21 
06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.663684 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ee491ea29d016cb1b74fc66b386aa8056d1d8b3c7ad207cf329749db2b4d638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e705e9b341a3c711cf78ffd1fde692a9517b06fcdcfc2b96543d826c72c5484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:09Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.677389 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1095483a1c6cc4597500607b4423c12c3fc03500c2f3b8f3fc5fc6eae8c34d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:09Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.690345 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlrc6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e26ce1b-e6f7-4612-aa11-69ad21c97870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64b144aa65cc6fbbe03fe4268155648a64e7360a0415e11a86fbc0373af5a4d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j65k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlrc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:09Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.702315 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-42mq5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc8e905-b368-49e8-adfa-31890665e5ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49cefc1948611ccad178b25e80e75e81bdf1b4b578d3fb58fa7c342d22debadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-42mq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:09Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.715506 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5dc99ccba68df748aa327298285fec6936c75a3327906d9c789bf75c04815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59520d6be8547ef44262866e4c11b1ae43ae8ef545545a93c291f5e238718a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hg78p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:09Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.738206 4893 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.738282 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.738308 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.738346 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.738371 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:09Z","lastTransitionTime":"2026-01-21T06:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.740317 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6719fb30-da06-4964-b730-09e444618d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://915396f894c4438f6ba5e4550e4eac3083558abc
334125753f1b9e7d18080e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://915396f894c4438f6ba5e4550e4eac3083558abc334125753f1b9e7d18080e81\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:54:59Z\\\",\\\"message\\\":\\\"or *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 06:54:59.557008 6348 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 06:54:59.557303 6348 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0121 06:54:59.557696 6348 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 06:54:59.557709 6348 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 06:54:59.557721 6348 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 06:54:59.557726 6348 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 06:54:59.557744 6348 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 06:54:59.557788 6348 factory.go:656] Stopping watch factory\\\\nI0121 06:54:59.557809 6348 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 06:54:59.557816 6348 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 06:54:59.557823 6348 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 06:54:59.557828 6348 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 06:54:59.557834 6348 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qzsg6_openshift-ovn-kubernetes(6719fb30-da06-4964-b730-09e444618d94)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzsg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:09Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.756970 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2101f59b-4610-4451-83eb-86fe80385cf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a82b561fe0d124a785d8417b0f810757464a5ccc70c032a46eb0a4ad932939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2a508699e746bc42337b9e10d1cb94b36eb53292a5ca91de2e8f03eb8f671c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf06f9b5e844685f04ee12cbf239e285f1597f6a3c6444a4160596392905c4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2417cb0495ebd48a0bf9f8e46971fdbd70fd7e7c312741cead38fec69d1d972\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:54:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 06:54:40.367563 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 06:54:40.368234 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:54:40.369436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4080492758/tls.crt::/tmp/serving-cert-4080492758/tls.key\\\\\\\"\\\\nI0121 06:54:40.606405 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:54:40.609631 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:54:40.609649 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:54:40.609684 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:54:40.609691 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:54:40.617391 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 06:54:40.617410 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617413 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617418 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:54:40.617421 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:54:40.617423 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:54:40.617426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 06:54:40.617614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 06:54:40.618646 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf70c5621061fc94a32901eb6f15a0d15b2ceba333d27cf88624bf9aa4ebe82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:09Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.769017 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077e47b3-6224-4749-9710-d2b308b43208\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa06c3d835def34e52c4a9b4b87d9dc8998cdefbb5eaf7c8046bf263857ef8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e698ff120a5858fa787a65c1bdaa3966dcb8974df9cbca40470f6ec58bca5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://553f6c2b8ff41184065bcf707d657326891027d0c5b8390ce50f53cdfa654d2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30521319002f52220ec6c1e4c92862f5a81e1dcace01f4a4474e3a2441b955c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:09Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.782800 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:09Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.800927 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h28gn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef1a4b3d1dc6d23382f8cbbc07674981a9fb90c5068318d8f78e87b0af85b5ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc1e630d2e854e97d3e156ca2c28de365e095aaf1fe7b6779d2a6b938c51024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cc1e630d2e854e97d3e156ca2c28de365e095aaf1fe7b6779d2a6b938c51024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h28gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:09Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.817644 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p7vw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bace7a0-7349-45d1-a407-d64a31a0d41c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f00d5d862b54660a5df58a9c9df0b42a453f6990789e83d5e6f67aab68471665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v88cx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecad777c0a42352ca734f5f85952ab369e5cc132f06f748983d7c11949f0fe58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\"
:true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v88cx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p7vw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:09Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.837730 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:09Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.841193 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.841275 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.841303 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.841336 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.841362 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:09Z","lastTransitionTime":"2026-01-21T06:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.944806 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.944854 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.944866 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.944885 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:09 crc kubenswrapper[4893]: I0121 06:55:09.944901 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:09Z","lastTransitionTime":"2026-01-21T06:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:10 crc kubenswrapper[4893]: I0121 06:55:10.047865 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:10 crc kubenswrapper[4893]: I0121 06:55:10.047900 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:10 crc kubenswrapper[4893]: I0121 06:55:10.047910 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:10 crc kubenswrapper[4893]: I0121 06:55:10.047925 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:10 crc kubenswrapper[4893]: I0121 06:55:10.047936 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:10Z","lastTransitionTime":"2026-01-21T06:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:10 crc kubenswrapper[4893]: I0121 06:55:10.151119 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:10 crc kubenswrapper[4893]: I0121 06:55:10.151181 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:10 crc kubenswrapper[4893]: I0121 06:55:10.151199 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:10 crc kubenswrapper[4893]: I0121 06:55:10.151223 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:10 crc kubenswrapper[4893]: I0121 06:55:10.151239 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:10Z","lastTransitionTime":"2026-01-21T06:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:10 crc kubenswrapper[4893]: I0121 06:55:10.253341 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:10 crc kubenswrapper[4893]: I0121 06:55:10.253581 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:10 crc kubenswrapper[4893]: I0121 06:55:10.253728 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:10 crc kubenswrapper[4893]: I0121 06:55:10.253814 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:10 crc kubenswrapper[4893]: I0121 06:55:10.253874 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:10Z","lastTransitionTime":"2026-01-21T06:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 21 06:55:10 crc kubenswrapper[4893]: I0121 06:55:10.720910 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 08:17:41.520967065 +0000 UTC
Jan 21 06:55:10 crc kubenswrapper[4893]: I0121 06:55:10.721118 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 06:55:10 crc kubenswrapper[4893]: I0121 06:55:10.721156 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 06:55:10 crc kubenswrapper[4893]: E0121 06:55:10.721262 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 06:55:10 crc kubenswrapper[4893]: I0121 06:55:10.721433 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 06:55:10 crc kubenswrapper[4893]: E0121 06:55:10.721531 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 06:55:10 crc kubenswrapper[4893]: E0121 06:55:10.721600 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 06:55:11 crc kubenswrapper[4893]: I0121 06:55:11.580839 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rc5gb"
Jan 21 06:55:11 crc kubenswrapper[4893]: E0121 06:55:11.581210 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rc5gb" podUID="e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8"
Jan 21 06:55:11 crc kubenswrapper[4893]: I0121 06:55:11.721664 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 11:20:18.786671384 +0000 UTC
Jan 21 06:55:11 crc kubenswrapper[4893]: E0121 06:55:11.959307 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"15608b71-024b-43f0-a54d-3ca7890a281b\\\",\\\"systemUUID\\\":\\\"d58a57b5-ddc5-4868-b863-d910bc33033d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:11Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:11 crc kubenswrapper[4893]: I0121 06:55:11.963227 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:11 crc kubenswrapper[4893]: I0121 06:55:11.963275 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 06:55:11 crc kubenswrapper[4893]: I0121 06:55:11.963289 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:11 crc kubenswrapper[4893]: I0121 06:55:11.963305 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:11 crc kubenswrapper[4893]: I0121 06:55:11.963317 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:11Z","lastTransitionTime":"2026-01-21T06:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:11 crc kubenswrapper[4893]: E0121 06:55:11.981243 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"15608b71-024b-43f0-a54d-3ca7890a281b\\\",\\\"systemUUID\\\":\\\"d58a57b5-ddc5-4868-b863-d910bc33033d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:11Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:11 crc kubenswrapper[4893]: I0121 06:55:11.985216 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:11 crc kubenswrapper[4893]: I0121 06:55:11.985250 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 06:55:11 crc kubenswrapper[4893]: I0121 06:55:11.985261 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:11 crc kubenswrapper[4893]: I0121 06:55:11.985277 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:11 crc kubenswrapper[4893]: I0121 06:55:11.985287 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:11Z","lastTransitionTime":"2026-01-21T06:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:12 crc kubenswrapper[4893]: E0121 06:55:12.005427 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"15608b71-024b-43f0-a54d-3ca7890a281b\\\",\\\"systemUUID\\\":\\\"d58a57b5-ddc5-4868-b863-d910bc33033d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:12Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.010859 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.010903 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.010918 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.010937 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.010948 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:12Z","lastTransitionTime":"2026-01-21T06:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:12 crc kubenswrapper[4893]: E0121 06:55:12.029701 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"15608b71-024b-43f0-a54d-3ca7890a281b\\\",\\\"systemUUID\\\":\\\"d58a57b5-ddc5-4868-b863-d910bc33033d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:12Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.034575 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.034635 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
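The err strings in these retry records embed a strategic-merge patch: the kubelet sends only the condition fields that changed, plus a "$setElementOrder/conditions" directive so the API server keeps the conditions list in kubelet order. A minimal Go reconstruction of that patch shape — the field values are copied from the log, while the condRef helper type is illustrative, not the kubelet's own code:

package main

import (
	"encoding/json"
	"fmt"
)

// condRef names a condition entry for the $setElementOrder directive.
type condRef struct {
	Type string `json:"type"`
}

func main() {
	// Sketch of the node-status patch seen in the log: full ordering
	// directive, but only the changed fields of the Ready condition.
	patch := map[string]any{
		"status": map[string]any{
			"$setElementOrder/conditions": []condRef{
				{Type: "MemoryPressure"}, {Type: "DiskPressure"},
				{Type: "PIDPressure"}, {Type: "Ready"},
			},
			"conditions": []map[string]string{{
				"type":   "Ready",
				"status": "False",
				"reason": "KubeletNotReady",
			}},
		},
	}
	b, err := json.Marshal(patch)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}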
event="NodeHasNoDiskPressure" Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.034651 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.034688 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.034704 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:12Z","lastTransitionTime":"2026-01-21T06:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:12 crc kubenswrapper[4893]: E0121 06:55:12.053050 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"15608b71-024b-43f0-a54d-3ca7890a281b\\\",\\\"systemUUID\\\":\\\"d58a57b5-ddc5-4868-b863-d910bc33033d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:12Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:12 crc kubenswrapper[4893]: E0121 06:55:12.053171 4893 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.055153 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
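Every failed attempt in this sequence reports the same root cause: the webhook listener on 127.0.0.1:9743 serves a certificate whose NotAfter (2025-08-24T17:21:41Z) is months behind the node clock (2026-01-21). A small Go sketch of the same validity comparison the TLS handshake performs; dialing with InsecureSkipVerify just to read the peer chain is an illustrative diagnostic step, not something the kubelet itself does:

package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Connect to the webhook endpoint named in the log and fetch its
	// certificate chain without verifying it (inspection only).
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // never use this to "fix" verification
	})
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()

	// The leaf certificate comes first; its NotBefore/NotAfter window is
	// exactly what produced the x509 "certificate has expired" error above.
	leaf := conn.ConnectionState().PeerCertificates[0]
	now := time.Now()
	fmt.Printf("subject:   %s\nnotBefore: %s\nnotAfter:  %s\n",
		leaf.Subject, leaf.NotBefore.Format(time.RFC3339), leaf.NotAfter.Format(time.RFC3339))
	if now.After(leaf.NotAfter) {
		fmt.Printf("certificate expired %s ago\n", now.Sub(leaf.NotAfter).Round(time.Minute))
	}
}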
event="NodeHasSufficientMemory" Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.055200 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.055209 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.055222 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.055231 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:12Z","lastTransitionTime":"2026-01-21T06:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.157934 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.157978 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.157989 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.158009 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.158021 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:12Z","lastTransitionTime":"2026-01-21T06:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.260734 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.260788 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.260806 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.260832 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.260849 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:12Z","lastTransitionTime":"2026-01-21T06:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.360285 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.360403 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.360444 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.360516 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.360561 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:55:12 crc kubenswrapper[4893]: E0121 06:55:12.360709 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:55:44.360647544 +0000 UTC m=+85.590993466 (durationBeforeRetry 32s). 
Jan 21 06:55:12 crc kubenswrapper[4893]: E0121 06:55:12.360817 4893 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 21 06:55:12 crc kubenswrapper[4893]: E0121 06:55:12.360923 4893 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 21 06:55:12 crc kubenswrapper[4893]: E0121 06:55:12.360934 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 21 06:55:12 crc kubenswrapper[4893]: E0121 06:55:12.360973 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 06:55:44.360939572 +0000 UTC m=+85.591285514 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 21 06:55:12 crc kubenswrapper[4893]: E0121 06:55:12.361016 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 21 06:55:12 crc kubenswrapper[4893]: E0121 06:55:12.361040 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 21 06:55:12 crc kubenswrapper[4893]: E0121 06:55:12.361052 4893 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 06:55:12 crc kubenswrapper[4893]: E0121 06:55:12.361025 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 06:55:44.361006724 +0000 UTC m=+85.591352696 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 06:55:12 crc kubenswrapper[4893]: E0121 06:55:12.360981 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 06:55:12 crc kubenswrapper[4893]: E0121 06:55:12.361138 4893 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:55:12 crc kubenswrapper[4893]: E0121 06:55:12.361178 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 06:55:44.361102067 +0000 UTC m=+85.591447969 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:55:12 crc kubenswrapper[4893]: E0121 06:55:12.361212 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 06:55:44.3612035 +0000 UTC m=+85.591549402 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.362951 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.362989 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.363000 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.363017 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.363028 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:12Z","lastTransitionTime":"2026-01-21T06:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.465950 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.465989 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.465997 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.466015 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.466025 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:12Z","lastTransitionTime":"2026-01-21T06:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
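The "No retries permitted until … (durationBeforeRetry 32s)" records above come from the volume manager's per-operation exponential backoff: each consecutive failure doubles the wait until it hits a ceiling. A toy Go sketch of that doubling policy — the 500 ms seed and ~2 m cap follow the upstream defaults as I recall them, so treat the exact constants as assumptions; with those values the logged 32 s wait would correspond to the seventh consecutive failure:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed defaults for the nested pending operations backoff;
	// the shape (doubling with a cap) is the point, not the constants.
	const (
		initialDelay = 500 * time.Millisecond
		maxDelay     = 2*time.Minute + 2*time.Second
	)
	delay := initialDelay
	for attempt := 1; attempt <= 10; attempt++ {
		fmt.Printf("failure %2d -> wait %v before retrying\n", attempt, delay)
		// Double the wait for the next failure, clamped to the cap.
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}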
Has your network provider started?"} Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.569007 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.569068 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.569082 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.569107 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.569125 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:12Z","lastTransitionTime":"2026-01-21T06:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.580337 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.580340 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:55:12 crc kubenswrapper[4893]: E0121 06:55:12.580492 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:55:12 crc kubenswrapper[4893]: E0121 06:55:12.580580 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.580464 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:55:12 crc kubenswrapper[4893]: E0121 06:55:12.580706 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.671393 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.671431 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.671439 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.671451 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.671461 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:12Z","lastTransitionTime":"2026-01-21T06:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.722622 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 10:44:58.965479648 +0000 UTC Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.774947 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.775007 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.775020 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.775039 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.775051 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:12Z","lastTransitionTime":"2026-01-21T06:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.965710 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.965753 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.965761 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.965809 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:12 crc kubenswrapper[4893]: I0121 06:55:12.965819 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:12Z","lastTransitionTime":"2026-01-21T06:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.019908 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.035441 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.035880 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:13Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.052732 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f8eaf9a35d64680bb488050b8821c821635ec7bc1f53bdcd5bb3f5f4bfead3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:13Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.068398 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.068473 4893 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.068500 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.068528 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.068546 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:13Z","lastTransitionTime":"2026-01-21T06:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.070662 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:13Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.086049 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m8k4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f4a3074a4406cdbdf07c7289f9304d66e2b84b46bf0ac9c6aadf31817539dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2qn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m8k4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:13Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.097803 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rc5gb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jprb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jprb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rc5gb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:13Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.111052 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ee491ea29d016cb1b74fc66b386aa8056d1d8b3c7ad207cf329749db2b4d638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e705e9b341a3c711cf78ffd1fde692a9517b06fcdcfc2b96543d826c72c5484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:13Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.122135 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1095483a1c6cc4597500607b4423c12c3fc03500c2f3b8f3fc5fc6eae8c34d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:13Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.131602 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlrc6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e26ce1b-e6f7-4612-aa11-69ad21c97870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64b144aa65cc6fbbe03fe4268155648a64e7360a0415e11a86fbc0373af5a4d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j65k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlrc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:13Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.141098 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-42mq5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc8e905-b368-49e8-adfa-31890665e5ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49cefc1948611ccad178b25e80e75e81bdf1b4b578d3fb58fa7c342d22debadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-42mq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:13Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.156533 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5dc99ccba68df748aa327298285fec6936c75a3327906d9c789bf75c04815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59520d6be8547ef44262866e4c11b1ae43ae8ef545545a93c291f5e238718a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hg78p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:13Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.172543 4893 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.172580 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.172589 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.172604 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.172613 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:13Z","lastTransitionTime":"2026-01-21T06:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.178644 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6719fb30-da06-4964-b730-09e444618d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://915396f894c4438f6ba5e4550e4eac3083558abc
334125753f1b9e7d18080e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://915396f894c4438f6ba5e4550e4eac3083558abc334125753f1b9e7d18080e81\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:54:59Z\\\",\\\"message\\\":\\\"or *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 06:54:59.557008 6348 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 06:54:59.557303 6348 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0121 06:54:59.557696 6348 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 06:54:59.557709 6348 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 06:54:59.557721 6348 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 06:54:59.557726 6348 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 06:54:59.557744 6348 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 06:54:59.557788 6348 factory.go:656] Stopping watch factory\\\\nI0121 06:54:59.557809 6348 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 06:54:59.557816 6348 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 06:54:59.557823 6348 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 06:54:59.557828 6348 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 06:54:59.557834 6348 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qzsg6_openshift-ovn-kubernetes(6719fb30-da06-4964-b730-09e444618d94)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzsg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:13Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.193816 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2101f59b-4610-4451-83eb-86fe80385cf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a82b561fe0d124a785d8417b0f810757464a5ccc70c032a46eb0a4ad932939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2a508699e746bc42337b9e10d1cb94b36eb53292a5ca91de2e8f03eb8f671c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf06f9b5e844685f04ee12cbf239e285f1597f6a3c6444a4160596392905c4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2417cb0495ebd48a0bf9f8e46971fdbd70fd7e7c312741cead38fec69d1d972\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:54:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 06:54:40.367563 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 06:54:40.368234 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:54:40.369436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4080492758/tls.crt::/tmp/serving-cert-4080492758/tls.key\\\\\\\"\\\\nI0121 06:54:40.606405 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:54:40.609631 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:54:40.609649 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:54:40.609684 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:54:40.609691 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:54:40.617391 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 06:54:40.617410 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617413 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617418 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:54:40.617421 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:54:40.617423 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:54:40.617426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 06:54:40.617614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 06:54:40.618646 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf70c5621061fc94a32901eb6f15a0d15b2ceba333d27cf88624bf9aa4ebe82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:13Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.205523 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077e47b3-6224-4749-9710-d2b308b43208\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa06c3d835def34e52c4a9b4b87d9dc8998cdefbb5eaf7c8046bf263857ef8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e698ff120a5858fa787a65c1bdaa3966dcb8974df9cbca40470f6ec58bca5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://553f6c2b8ff41184065bcf707d657326891027d0c5b8390ce50f53cdfa654d2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30521319002f52220ec6c1e4c92862f5a81e1dcace01f4a4474e3a2441b955c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:13Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.223444 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:13Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.248098 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h28gn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef1a4b3d1dc6d23382f8cbbc07674981a9fb90c5068318d8f78e87b0af85b5ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc1e630d2e854e97d3e156ca2c28de365e095aaf1fe7b6779d2a6b938c51024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cc1e630d2e854e97d3e156ca2c28de365e095aaf1fe7b6779d2a6b938c51024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h28gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:13Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.263209 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p7vw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bace7a0-7349-45d1-a407-d64a31a0d41c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f00d5d862b54660a5df58a9c9df0b42a453f6990789e83d5e6f67aab68471665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v88cx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecad777c0a42352ca734f5f85952ab369e5cc132f06f748983d7c11949f0fe58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\"
:true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v88cx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p7vw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:13Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.275213 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.275255 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.275267 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.275286 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.275298 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:13Z","lastTransitionTime":"2026-01-21T06:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.377525 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.377565 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.377576 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.377592 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.377603 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:13Z","lastTransitionTime":"2026-01-21T06:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.481081 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.481138 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.481155 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.481178 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.481196 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:13Z","lastTransitionTime":"2026-01-21T06:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.580880 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:55:13 crc kubenswrapper[4893]: E0121 06:55:13.581141 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rc5gb" podUID="e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8" Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.582321 4893 scope.go:117] "RemoveContainer" containerID="915396f894c4438f6ba5e4550e4eac3083558abc334125753f1b9e7d18080e81" Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.584903 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.584956 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.585011 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.585090 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.585109 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:13Z","lastTransitionTime":"2026-01-21T06:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.817628 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 20:11:44.966805022 +0000 UTC Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.819212 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8-metrics-certs\") pod \"network-metrics-daemon-rc5gb\" (UID: \"e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8\") " pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:55:13 crc kubenswrapper[4893]: E0121 06:55:13.819661 4893 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 06:55:13 crc kubenswrapper[4893]: E0121 06:55:13.819767 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8-metrics-certs podName:e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8 nodeName:}" failed. No retries permitted until 2026-01-21 06:55:29.819745607 +0000 UTC m=+71.050091569 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8-metrics-certs") pod "network-metrics-daemon-rc5gb" (UID: "e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.821188 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.821273 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.821287 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.821306 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.821319 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:13Z","lastTransitionTime":"2026-01-21T06:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.926794 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.927089 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.927183 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.927265 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:13 crc kubenswrapper[4893]: I0121 06:55:13.927373 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:13Z","lastTransitionTime":"2026-01-21T06:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.030133 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.030177 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.030191 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.030210 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.030221 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:14Z","lastTransitionTime":"2026-01-21T06:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.132554 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.132597 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.132609 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.132625 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.132637 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:14Z","lastTransitionTime":"2026-01-21T06:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.235727 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.235773 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.235785 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.235803 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.235815 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:14Z","lastTransitionTime":"2026-01-21T06:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.338831 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.338895 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.338905 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.338926 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.338943 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:14Z","lastTransitionTime":"2026-01-21T06:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.441482 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.441510 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.441517 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.441531 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.441541 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:14Z","lastTransitionTime":"2026-01-21T06:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.543586 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.543620 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.543628 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.543642 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.543650 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:14Z","lastTransitionTime":"2026-01-21T06:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.580573 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.580580 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:55:14 crc kubenswrapper[4893]: E0121 06:55:14.580802 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.580599 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:55:14 crc kubenswrapper[4893]: E0121 06:55:14.580950 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:55:14 crc kubenswrapper[4893]: E0121 06:55:14.581093 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.646190 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.646231 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.646243 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.646260 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.646271 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:14Z","lastTransitionTime":"2026-01-21T06:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.748654 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.748732 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.748746 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.748769 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.748785 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:14Z","lastTransitionTime":"2026-01-21T06:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.818262 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 06:39:36.035834018 +0000 UTC Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.826945 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qzsg6_6719fb30-da06-4964-b730-09e444618d94/ovnkube-controller/1.log" Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.829602 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" event={"ID":"6719fb30-da06-4964-b730-09e444618d94","Type":"ContainerStarted","Data":"e14703fb64cc13f6d04b021d4d9de4505e58912bc747450c747dcb5cb53431ab"} Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.830005 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.842801 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2101f59b-4610-4451-83eb-86fe80385cf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a82b561fe0d124a785d8417b0f810757464a5ccc70c032a46eb0a4ad932939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2a508699e746bc42337b9e10d1cb94b36eb53292a5ca91de2e8f03eb8f671c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf06f9b5e844685f04ee12cbf239e285f1597f6a3c6444a4160596392905c4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2417cb0495ebd48a0bf9f8e46971fdbd70fd7e7c312741cead38fec69d1d972\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:54:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 06:54:40.367563 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 06:54:40.368234 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:54:40.369436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4080492758/tls.crt::/tmp/serving-cert-4080492758/tls.key\\\\\\\"\\\\nI0121 06:54:40.606405 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:54:40.609631 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:54:40.609649 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:54:40.609684 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:54:40.609691 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:54:40.617391 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 06:54:40.617410 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617413 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617418 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:54:40.617421 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:54:40.617423 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:54:40.617426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 06:54:40.617614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 06:54:40.618646 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf70c5621061fc94a32901eb6f15a0d15b2ceba333d27cf88624bf9aa4ebe82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:14Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.851361 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.851387 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.851395 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.851408 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.851417 4893 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:14Z","lastTransitionTime":"2026-01-21T06:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.860698 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"077e47b3-6224-4749-9710-d2b308b43208\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa06c3d835def34e52c4a9b4b87d9dc8998cdefbb5eaf7c8046bf263857ef8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e698ff120a5858fa787a65c1bdaa3966dcb8974df9cbca40470f6ec58bca5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://553f6c2b8ff41184065bcf707d657326891027d0c5b8390ce50f53cdfa654d2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30521319002f52220ec6c1e4c92862f5a81e1dcace01f4a4474e3a2441b955c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:14Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.876109 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:14Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.890887 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h28gn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef1a4b3d1dc6d23382f8cbbc07674981a9fb90c5068318d8f78e87b0af85b5ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc1e630d2e854e97d3e156ca2c28de365e095aaf1fe7b6779d2a6b938c51024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cc1e630d2e854e97d3e156ca2c28de365e095aaf1fe7b6779d2a6b938c51024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h28gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:14Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.904456 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-42mq5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc8e905-b368-49e8-adfa-31890665e5ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49cefc1948611ccad178b25e80e75e81bdf1b4b578d3fb58fa7c342d22debadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-42mq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:14Z is after 2025-08-24T17:21:41Z" Jan 
21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.915394 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5dc99ccba68df748aa327298285fec6936c75a3327906d9c789bf75c04815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59520d6be8547ef44262866e4c11b1ae43ae8ef545545a93c291f5e238718a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hg78p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:14Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.934147 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6719fb30-da06-4964-b730-09e444618d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14703fb64cc13f6d04b021d4d9de4505e58912bc747450c747dcb5cb53431ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://915396f894c4438f6ba5e4550e4eac3083558abc334125753f1b9e7d18080e81\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:54:59Z\\\",\\\"message\\\":\\\"or *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 06:54:59.557008 6348 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 06:54:59.557303 6348 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0121 06:54:59.557696 6348 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 06:54:59.557709 6348 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 06:54:59.557721 6348 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 06:54:59.557726 6348 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 06:54:59.557744 6348 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 06:54:59.557788 6348 factory.go:656] Stopping watch factory\\\\nI0121 06:54:59.557809 6348 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 06:54:59.557816 6348 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 06:54:59.557823 6348 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 06:54:59.557828 6348 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 06:54:59.557834 6348 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzsg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:14Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.948732 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p7vw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bace7a0-7349-45d1-a407-d64a31a0d41c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f00d5d862b54660a5df58a9c9df0b42a453f6990789e83d5e6f67aab68471665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v88cx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecad777c0a42352ca734f5f85952ab369e5cc132f06f748983d7c11949f0fe58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v88cx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p7vw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:14Z is after 2025-08-24T17:21:41Z" Jan 21 
06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.953280 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.953314 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.953322 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.953338 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.953347 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:14Z","lastTransitionTime":"2026-01-21T06:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.961119 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:14Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:14 crc kubenswrapper[4893]: I0121 06:55:14.972652 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac506126-772e-4100-98f9-91c4b32882bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93c89f8db799b46df74cb753f3f21321f420d4fe1976b120ea4aa2853fbf7047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a132569453fd7635474ec4fcb0eab4aad6349e34c6d9e3bc92182433a587bfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07ce83655f22f9db0e6743147fdbde2adc1e02a0b8010cd04f6007f986cf63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29f211094b3236070df769a82ecc2ff2b03c7a44dc4af0484e4ca3b35037621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29f211094b3236070df769a82ecc2ff2b03c7a44dc4af0484e4ca3b35037621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:14Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.029620 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f8eaf9a35d64680bb488050b8821c821635ec7bc1f53bdcd5bb3f5f4bfead3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:14Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.041131 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:15Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.054781 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m8k4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f4a3074a4406cdbdf07c7289f9304d66e2b84b46bf0ac9c6aadf31817539dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-l
ib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2qn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m8k4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:15Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.055480 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.055510 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.055519 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.055533 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.055543 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:15Z","lastTransitionTime":"2026-01-21T06:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.070047 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rc5gb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jprb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jprb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rc5gb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:15Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.081269 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ee491ea29d016cb1b74fc66b386aa8056d1d8b3c7ad207cf329749db2b4d638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e705e9b341a3c711cf78ffd1fde692a9517b06fcdcfc2b96543d826c72c5484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:15Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.092005 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1095483a1c6cc4597500607b4423c12c3fc03500c2f3b8f3fc5fc6eae8c34d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:15Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.103640 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlrc6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e26ce1b-e6f7-4612-aa11-69ad21c97870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64b144aa65cc6fbbe03fe4268155648a64e7360a0415e11a86fbc0373af5a4d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j65k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlrc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:15Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.157796 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.157836 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.157849 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.157868 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.157880 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:15Z","lastTransitionTime":"2026-01-21T06:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.261131 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.261215 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.261244 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.261275 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.261300 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:15Z","lastTransitionTime":"2026-01-21T06:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.364040 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.364090 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.364109 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.364125 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.364135 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:15Z","lastTransitionTime":"2026-01-21T06:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.466157 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.466199 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.466210 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.466226 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.466238 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:15Z","lastTransitionTime":"2026-01-21T06:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.569580 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.569628 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.569643 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.569663 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.569707 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:15Z","lastTransitionTime":"2026-01-21T06:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.580590 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:55:15 crc kubenswrapper[4893]: E0121 06:55:15.580842 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rc5gb" podUID="e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.672854 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.672914 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.672933 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.672959 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.672977 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:15Z","lastTransitionTime":"2026-01-21T06:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.775414 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.775499 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.775553 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.775604 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.775631 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:15Z","lastTransitionTime":"2026-01-21T06:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.818404 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 02:16:44.246456718 +0000 UTC Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.835221 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qzsg6_6719fb30-da06-4964-b730-09e444618d94/ovnkube-controller/2.log" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.836175 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qzsg6_6719fb30-da06-4964-b730-09e444618d94/ovnkube-controller/1.log" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.840232 4893 generic.go:334] "Generic (PLEG): container finished" podID="6719fb30-da06-4964-b730-09e444618d94" containerID="e14703fb64cc13f6d04b021d4d9de4505e58912bc747450c747dcb5cb53431ab" exitCode=1 Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.840291 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" event={"ID":"6719fb30-da06-4964-b730-09e444618d94","Type":"ContainerDied","Data":"e14703fb64cc13f6d04b021d4d9de4505e58912bc747450c747dcb5cb53431ab"} Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.840340 4893 scope.go:117] "RemoveContainer" containerID="915396f894c4438f6ba5e4550e4eac3083558abc334125753f1b9e7d18080e81" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.841515 4893 scope.go:117] "RemoveContainer" containerID="e14703fb64cc13f6d04b021d4d9de4505e58912bc747450c747dcb5cb53431ab" Jan 21 06:55:15 crc kubenswrapper[4893]: E0121 06:55:15.842142 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qzsg6_openshift-ovn-kubernetes(6719fb30-da06-4964-b730-09e444618d94)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" podUID="6719fb30-da06-4964-b730-09e444618d94" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.856908 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac506126-772e-4100-98f9-91c4b32882bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93c89f8db799b46df74cb753f3f21321f420d4fe1976b120ea4aa2853fbf7047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a132569453fd7635474ec4fcb0eab4aad6349e34c6d9e3bc92182433a587bfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07ce83655f22f9db0e6743147fdbde2adc1e02a0b8010cd04f6007f986cf63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29f211094b3236070df769a82ecc2ff2b03c7a44dc4af0484e4ca3b35037621\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29f211094b3236070df769a82ecc2ff2b03c7a44dc4af0484e4ca3b35037621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:15Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.873227 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f8eaf9a35d64680bb488050b8821c821635ec7bc1f53bdcd5bb3f5f4bfead3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:15Z is after 
2025-08-24T17:21:41Z" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.877968 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.878010 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.878041 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.878062 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.878077 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:15Z","lastTransitionTime":"2026-01-21T06:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.892585 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:15Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.910043 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m8k4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f4a3074a4406cdbdf07c7289f9304d66e2b84b46bf0ac9c6aadf31817539dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2qn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m8k4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:15Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.924738 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rc5gb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jprb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jprb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rc5gb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:15Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.941510 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ee491ea29d016cb1b74fc66b386aa8056d1d8b3c7ad207cf329749db2b4d638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e705e9b341a3c711cf78ffd1fde692a9517b06fcdcfc2b96543d826c72c5484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:15Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.955211 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1095483a1c6cc4597500607b4423c12c3fc03500c2f3b8f3fc5fc6eae8c34d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:15Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.968729 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlrc6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e26ce1b-e6f7-4612-aa11-69ad21c97870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64b144aa65cc6fbbe03fe4268155648a64e7360a0415e11a86fbc0373af5a4d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j65k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlrc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:15Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.980846 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5dc99ccba68df748aa327298285fec6936c75a3327906d9c789bf75c04815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59520d6be8547ef44262866e4c11b1ae43ae8ef545545a93c291f5e238718a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hg78p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:15Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.982013 4893 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.982161 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.982280 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.982391 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:15 crc kubenswrapper[4893]: I0121 06:55:15.982495 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:15Z","lastTransitionTime":"2026-01-21T06:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.008952 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6719fb30-da06-4964-b730-09e444618d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14703fb64cc13f6d04b021d4d9de4505e58912b
c747450c747dcb5cb53431ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://915396f894c4438f6ba5e4550e4eac3083558abc334125753f1b9e7d18080e81\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:54:59Z\\\",\\\"message\\\":\\\"or *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 06:54:59.557008 6348 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 06:54:59.557303 6348 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0121 06:54:59.557696 6348 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 06:54:59.557709 6348 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 06:54:59.557721 6348 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 06:54:59.557726 6348 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 06:54:59.557744 6348 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 06:54:59.557788 6348 factory.go:656] Stopping watch factory\\\\nI0121 06:54:59.557809 6348 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 06:54:59.557816 6348 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 06:54:59.557823 6348 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 06:54:59.557828 6348 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 06:54:59.557834 6348 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14703fb64cc13f6d04b021d4d9de4505e58912bc747450c747dcb5cb53431ab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:55:15Z\\\",\\\"message\\\":\\\"b00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 06:55:14.977977 6525 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 06:55:14.978018 6525 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 06:55:14.975782 6525 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-wlrc6 after 0 failed attempt(s)\\\\nI0121 06:55:14.984317 6525 
default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-wlrc6\\\\nI0121 06:55:14.984373 6525 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-m8k4g\\\\nI0121 06:55:14.984399 6525 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-m8k4g\\\\nI0121 06:55:14.9757\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzsg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:16Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.023484 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2101f59b-4610-4451-83eb-86fe80385cf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a82b561fe0d124a785d8417b0f810757464a5ccc70c032a46eb0a4ad932939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2a508699e746bc42337b9e10d1cb94b36eb53292a5ca91de2e8f03eb8f671c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf06f9b5e844685f04ee12cbf239e285f1597f6a3c6444a4160596392905c4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2417cb0495ebd48a0bf9f8e46971fdbd70fd7e7c312741cead38fec69d1d972\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:54:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 06:54:40.367563 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 06:54:40.368234 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:54:40.369436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4080492758/tls.crt::/tmp/serving-cert-4080492758/tls.key\\\\\\\"\\\\nI0121 06:54:40.606405 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:54:40.609631 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:54:40.609649 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:54:40.609684 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:54:40.609691 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:54:40.617391 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 06:54:40.617410 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617413 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617418 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:54:40.617421 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:54:40.617423 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:54:40.617426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 06:54:40.617614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 06:54:40.618646 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf70c5621061fc94a32901eb6f15a0d15b2ceba333d27cf88624bf9aa4ebe82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:16Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.037063 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077e47b3-6224-4749-9710-d2b308b43208\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa06c3d835def34e52c4a9b4b87d9dc8998cdefbb5eaf7c8046bf263857ef8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e698ff120a5858fa787a65c1bdaa3966dcb8974df9cbca40470f6ec58bca5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://553f6c2b8ff41184065bcf707d657326891027d0c5b8390ce50f53cdfa654d2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30521319002f52220ec6c1e4c92862f5a81e1dcace01f4a4474e3a2441b955c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:16Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.048654 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:16Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.064880 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h28gn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef1a4b3d1dc6d23382f8cbbc07674981a9fb90c5068318d8f78e87b0af85b5ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc1e630d2e854e97d3e156ca2c28de365e095aaf1fe7b6779d2a6b938c51024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cc1e630d2e854e97d3e156ca2c28de365e095aaf1fe7b6779d2a6b938c51024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h28gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:16Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.080877 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-42mq5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc8e905-b368-49e8-adfa-31890665e5ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49cefc1948611ccad178b25e80e75e81bdf1b4b578d3fb58fa7c342d22debadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-42mq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:16Z is after 2025-08-24T17:21:41Z" Jan 
21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.085527 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.085583 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.085598 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.085629 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.085644 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:16Z","lastTransitionTime":"2026-01-21T06:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.098271 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p7vw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bace7a0-7349-45d1-a407-d64a31a0d41c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f00d5d862b54660a5df58a9c9df0b42a453f6990789e83d5e6f67aab68471665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v88cx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecad777c0a42352ca734f5f85952ab369e5cc132f06f748983d7c11949f0fe58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v88cx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p7vw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:16Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.119097 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:16Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.189886 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.189992 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.190012 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.190070 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.190089 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:16Z","lastTransitionTime":"2026-01-21T06:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.293534 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.293606 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.293619 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.293638 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.293680 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:16Z","lastTransitionTime":"2026-01-21T06:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.397231 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.397293 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.397305 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.397325 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.397338 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:16Z","lastTransitionTime":"2026-01-21T06:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.500621 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.500662 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.500691 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.500709 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.500720 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:16Z","lastTransitionTime":"2026-01-21T06:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.580405 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.580469 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.580506 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:55:16 crc kubenswrapper[4893]: E0121 06:55:16.580600 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:55:16 crc kubenswrapper[4893]: E0121 06:55:16.580808 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:55:16 crc kubenswrapper[4893]: E0121 06:55:16.580951 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.604004 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.604076 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.604141 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.604167 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.604198 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:16Z","lastTransitionTime":"2026-01-21T06:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.708349 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.708422 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.708442 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.708467 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.708485 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:16Z","lastTransitionTime":"2026-01-21T06:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.811315 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.811358 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.811373 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.811391 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.811404 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:16Z","lastTransitionTime":"2026-01-21T06:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.819595 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 20:57:55.65863884 +0000 UTC Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.846409 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qzsg6_6719fb30-da06-4964-b730-09e444618d94/ovnkube-controller/2.log" Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.850586 4893 scope.go:117] "RemoveContainer" containerID="e14703fb64cc13f6d04b021d4d9de4505e58912bc747450c747dcb5cb53431ab" Jan 21 06:55:16 crc kubenswrapper[4893]: E0121 06:55:16.850752 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qzsg6_openshift-ovn-kubernetes(6719fb30-da06-4964-b730-09e444618d94)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" podUID="6719fb30-da06-4964-b730-09e444618d94" Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.868826 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1095483a1c6cc4597500607b4423c12c3fc03500c2f3b8f3fc5fc6eae8c34d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:16Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.883243 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlrc6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e26ce1b-e6f7-4612-aa11-69ad21c97870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64b144aa65cc6fbbe03fe4268155648a64e7360a0415e11a86fbc0373af5a4d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j65k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlrc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:16Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.903318 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ee491ea29d016cb1b74fc66b386aa8056d1d8b3c7ad207cf329749db2b4d638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e705e9b341a3c711cf78ffd1fde692a9517b06fcdcfc2b96543d826c72c5484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:16Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.914076 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.914117 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.914128 4893 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.914146 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.914157 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:16Z","lastTransitionTime":"2026-01-21T06:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.916050 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"077e47b3-6224-4749-9710-d2b308b43208\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa06c3d835def34e52c4a9b4b87d9dc8998cdefbb5eaf7c8046bf263857ef8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e698ff120a5858fa787a65c1bdaa3966dcb8974df9cbca40470f6ec58bca5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://553f6c2b8ff41184065bcf707d657326891027d0c5b8390ce50f53cdfa654d2d\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30521319002f52220ec6c1e4c92862f5a81e1dcace01f4a4474e3a2441b955c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:16Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.929073 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:16Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.943940 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h28gn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef1a4b3d1dc6d23382f8cbbc07674981a9fb90c5068318d8f78e87b0af85b5ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc1e630d2e854e97d3e156ca2c28de365e095aaf1fe7b6779d2a6b938c51024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cc1e630d2e854e97d3e156ca2c28de365e095aaf1fe7b6779d2a6b938c51024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h28gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:16Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.954052 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-42mq5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc8e905-b368-49e8-adfa-31890665e5ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49cefc1948611ccad178b25e80e75e81bdf1b4b578d3fb58fa7c342d22debadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-42mq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:16Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.964488 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5dc99ccba68df748aa327298285fec6936c75a3327906d9c789bf75c04815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59520d6be8547ef44262866e4c11b1ae43ae8ef545545a93c291f5e238718a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hg78p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:16Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:16 crc kubenswrapper[4893]: I0121 06:55:16.983104 4893 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6719fb30-da06-4964-b730-09e444618d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14703fb64cc13f6d04b021d4d9de4505e58912bc747450c747dcb5cb53431ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14703fb64cc13f6d04b021d4d9de4505e58912bc747450c747dcb5cb53431ab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:55:15Z\\\",\\\"message\\\":\\\"b00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 06:55:14.977977 6525 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 06:55:14.978018 6525 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 06:55:14.975782 6525 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-wlrc6 after 0 failed attempt(s)\\\\nI0121 06:55:14.984317 6525 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-wlrc6\\\\nI0121 06:55:14.984373 6525 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-m8k4g\\\\nI0121 06:55:14.984399 6525 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-m8k4g\\\\nI0121 06:55:14.9757\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:55:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qzsg6_openshift-ovn-kubernetes(6719fb30-da06-4964-b730-09e444618d94)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzsg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:16Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.002391 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2101f59b-4610-4451-83eb-86fe80385cf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a82b561fe0d124a785d8417b0f810757464a5ccc70c032a46eb0a4ad932939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2a508699e746bc42337b9e10d1cb94b36eb53292a5ca91de2e8f03eb8f671c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf06f9b5e844685f04ee12cbf239e285f1597f6a3c6444a4160596392905c4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2417cb0495ebd48a0bf9f8e46971fdbd70fd7e7c312741cead38fec69d1d972\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:54:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 06:54:40.367563 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 06:54:40.368234 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:54:40.369436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4080492758/tls.crt::/tmp/serving-cert-4080492758/tls.key\\\\\\\"\\\\nI0121 06:54:40.606405 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:54:40.609631 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:54:40.609649 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:54:40.609684 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:54:40.609691 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:54:40.617391 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 06:54:40.617410 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617413 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617418 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:54:40.617421 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:54:40.617423 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:54:40.617426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 06:54:40.617614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 06:54:40.618646 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf70c5621061fc94a32901eb6f15a0d15b2ceba333d27cf88624bf9aa4ebe82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:17Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.013594 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p7vw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bace7a0-7349-45d1-a407-d64a31a0d41c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f00d5d862b54660a5df58a9c9df0b42a453f6990789e83d5e6f67aab68471665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v88cx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecad777c0a42352ca734f5f85952ab369e5cc132f06f748983d7c11949f0fe58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v88cx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p7vw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:17Z is after 2025-08-24T17:21:41Z" Jan 21 
06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.016404 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.016476 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.016495 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.016521 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.016540 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:17Z","lastTransitionTime":"2026-01-21T06:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.026302 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:17Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.043036 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f8eaf9a35d64680bb488050b8821c821635ec7bc1f53bdcd5bb3f5f4bfead3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:17Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.054486 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:17Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.069799 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m8k4g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f4a3074a4406cdbdf07c7289f9304d66e2b84b46bf0ac9c6aadf31817539dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2qn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m8k4g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:17Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.079114 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rc5gb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jprb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jprb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rc5gb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:17Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:17 crc 
kubenswrapper[4893]: I0121 06:55:17.090952 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac506126-772e-4100-98f9-91c4b32882bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93c89f8db799b46df74cb753f3f21321f420d4fe1976b120ea4aa2853fbf7047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a132569453fd7635474ec4fcb0eab4aad6349e34c6d9e3bc92182433a587bfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07ce83655f22f9db0e6743147fdbde2adc1e02a0b8010cd04f6007f986cf63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\
\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29f211094b3236070df769a82ecc2ff2b03c7a44dc4af0484e4ca3b35037621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29f211094b3236070df769a82ecc2ff2b03c7a44dc4af0484e4ca3b35037621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:17Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.118957 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.119214 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.119293 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.119388 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.119469 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:17Z","lastTransitionTime":"2026-01-21T06:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.223028 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.223089 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.223116 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.223144 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.223165 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:17Z","lastTransitionTime":"2026-01-21T06:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.326474 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.326544 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.326564 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.326592 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.326613 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:17Z","lastTransitionTime":"2026-01-21T06:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.429939 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.430014 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.430051 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.430086 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.430108 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:17Z","lastTransitionTime":"2026-01-21T06:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.532307 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.532365 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.532381 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.532399 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.532410 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:17Z","lastTransitionTime":"2026-01-21T06:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.562922 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.580218 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:55:17 crc kubenswrapper[4893]: E0121 06:55:17.580610 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rc5gb" podUID="e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.582713 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:17Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.605569 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f8eaf9a35d64680bb488050b8821c821635ec7bc1f53bdcd5bb3f5f4bfead3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:17Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.621307 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:17Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.635008 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m8k4g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f4a3074a4406cdbdf07c7289f9304d66e2b84b46bf0ac9c6aadf31817539dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2qn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m8k4g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:17Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.635450 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.635641 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.635824 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.635986 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.636123 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:17Z","lastTransitionTime":"2026-01-21T06:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.647445 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rc5gb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jprb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jprb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rc5gb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:17Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.659042 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac506126-772e-4100-98f9-91c4b32882bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93c89f8db799b46df74cb753f3f21321f420d4fe1976b120ea4aa2853fbf7047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a132569453fd7635474ec4fcb0eab4aad6349e34c6d9e3bc92182433a587bfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07ce83655f22f9db0e6743147fdbde2adc1e02a0b8010cd04f6007f986cf63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29f211094b3236070df769a82ecc2ff2b03c7a44dc4af0484e4ca3b35037621\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29f211094b3236070df769a82ecc2ff2b03c7a44dc4af0484e4ca3b35037621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:17Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.671570 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1095483a1c6cc4597500607b4423c12c3fc03500c2f3b8f3fc5fc6eae8c34d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:17Z is after 2025-08-24T17:21:41Z" Jan 21 
06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.682502 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlrc6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e26ce1b-e6f7-4612-aa11-69ad21c97870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64b144aa65cc6fbbe03fe4268155648a64e7360a0415e11a86fbc0373af5a4d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j65k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlrc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:17Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.698032 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ee491ea29d016cb1b74fc66b386aa8056d1d8b3c7ad207cf329749db2b4d638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e705e9b341a3c711cf78ffd1fde692a9517b06fcdcfc2b96543d826c72c5484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:17Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.709531 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077e47b3-6224-4749-9710-d2b308b43208\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa06c3d835def34e52c4a9b4b87d9dc8998cdefbb5eaf7c8046bf263857ef8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e698ff120a5858fa787a65c1bdaa3966dcb8974df9cbca40470f6ec58bca5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://553f6c2b8ff41184065bcf707d657326891027d0c5b8390ce50f53cdfa654d2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30521319002f52220ec6c1e4c92862f5a81e1dcace01f4a4474e3a2441b955c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:17Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.721432 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:17Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.734877 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h28gn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef1a4b3d1dc6d23382f8cbbc07674981a9fb90c5068318d8f78e87b0af85b5ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc1e630d2e854e97d3e156ca2c28de365e095aaf1fe7b6779d2a6b938c51024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cc1e630d2e854e97d3e156ca2c28de365e095aaf1fe7b6779d2a6b938c51024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h28gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:17Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.737964 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.738010 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.738025 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.738042 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.738056 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:17Z","lastTransitionTime":"2026-01-21T06:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.745692 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-42mq5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc8e905-b368-49e8-adfa-31890665e5ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49cefc1948611ccad178b25e80e75e81bdf1b4b578d3fb58fa7c342d22debadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-42mq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:17Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.756648 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5dc99ccba68df748aa327298285fec6936c75a3327906d9c789bf75c04815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59520d6be8547ef44262866e4c11b1ae43ae8ef545545a93c291f5e238718a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hg78p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:17Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.774301 4893 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6719fb30-da06-4964-b730-09e444618d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14703fb64cc13f6d04b021d4d9de4505e58912bc747450c747dcb5cb53431ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14703fb64cc13f6d04b021d4d9de4505e58912bc747450c747dcb5cb53431ab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:55:15Z\\\",\\\"message\\\":\\\"b00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 06:55:14.977977 6525 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 06:55:14.978018 6525 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 06:55:14.975782 6525 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-wlrc6 after 0 failed attempt(s)\\\\nI0121 06:55:14.984317 6525 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-wlrc6\\\\nI0121 06:55:14.984373 6525 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-m8k4g\\\\nI0121 06:55:14.984399 6525 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-m8k4g\\\\nI0121 06:55:14.9757\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:55:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qzsg6_openshift-ovn-kubernetes(6719fb30-da06-4964-b730-09e444618d94)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzsg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:17Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.786054 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2101f59b-4610-4451-83eb-86fe80385cf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a82b561fe0d124a785d8417b0f810757464a5ccc70c032a46eb0a4ad932939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2a508699e746bc42337b9e10d1cb94b36eb53292a5ca91de2e8f03eb8f671c\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf06f9b5e844685f04ee12cbf239e285f1597f6a3c6444a4160596392905c4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2417cb0495ebd48a0bf9f8e46971fdbd70fd7e7c312741cead38fec69d1d972\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:54:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 06:54:40.367563 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 06:54:40.368234 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:54:40.369436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4080492758/tls.crt::/tmp/serving-cert-4080492758/tls.key\\\\\\\"\\\\nI0121 06:54:40.606405 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:54:40.609631 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:54:40.609649 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:54:40.609684 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:54:40.609691 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:54:40.617391 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 06:54:40.617410 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617413 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617418 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:54:40.617421 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:54:40.617423 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:54:40.617426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 06:54:40.617614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 06:54:40.618646 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf70c5621061fc94a32901eb6f15a0d15b2ceba333d27cf88624bf9aa4ebe82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:17Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.795405 4893 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p7vw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bace7a0-7349-45d1-a407-d64a31a0d41c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f00d5d862b54660a5df58a9c9df0b42a453f6990789e83d5e6f67aab68471665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v88cx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecad777c0a42352ca734f5f85952ab369e5cc132f06f748983d7c11949f0fe58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v88cx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p7vw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-21T06:55:17Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.819781 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 07:40:31.980135032 +0000 UTC Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.840540 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.840583 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.840593 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.840612 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.840623 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:17Z","lastTransitionTime":"2026-01-21T06:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.942601 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.942644 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.942653 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.942687 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:17 crc kubenswrapper[4893]: I0121 06:55:17.942696 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:17Z","lastTransitionTime":"2026-01-21T06:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:18 crc kubenswrapper[4893]: I0121 06:55:18.044812 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:18 crc kubenswrapper[4893]: I0121 06:55:18.044864 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:18 crc kubenswrapper[4893]: I0121 06:55:18.044880 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:18 crc kubenswrapper[4893]: I0121 06:55:18.044900 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:18 crc kubenswrapper[4893]: I0121 06:55:18.044915 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:18Z","lastTransitionTime":"2026-01-21T06:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:18 crc kubenswrapper[4893]: I0121 06:55:18.147444 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:18 crc kubenswrapper[4893]: I0121 06:55:18.147911 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:18 crc kubenswrapper[4893]: I0121 06:55:18.148099 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:18 crc kubenswrapper[4893]: I0121 06:55:18.148286 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:18 crc kubenswrapper[4893]: I0121 06:55:18.148486 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:18Z","lastTransitionTime":"2026-01-21T06:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:18 crc kubenswrapper[4893]: I0121 06:55:18.251714 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:18 crc kubenswrapper[4893]: I0121 06:55:18.251751 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:18 crc kubenswrapper[4893]: I0121 06:55:18.251762 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:18 crc kubenswrapper[4893]: I0121 06:55:18.251776 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:18 crc kubenswrapper[4893]: I0121 06:55:18.251785 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:18Z","lastTransitionTime":"2026-01-21T06:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:18 crc kubenswrapper[4893]: I0121 06:55:18.354896 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:18 crc kubenswrapper[4893]: I0121 06:55:18.355249 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:18 crc kubenswrapper[4893]: I0121 06:55:18.355326 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:18 crc kubenswrapper[4893]: I0121 06:55:18.355394 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:18 crc kubenswrapper[4893]: I0121 06:55:18.355456 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:18Z","lastTransitionTime":"2026-01-21T06:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:18 crc kubenswrapper[4893]: I0121 06:55:18.458504 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:18 crc kubenswrapper[4893]: I0121 06:55:18.458570 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:18 crc kubenswrapper[4893]: I0121 06:55:18.458582 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:18 crc kubenswrapper[4893]: I0121 06:55:18.458598 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:18 crc kubenswrapper[4893]: I0121 06:55:18.458613 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:18Z","lastTransitionTime":"2026-01-21T06:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:18 crc kubenswrapper[4893]: I0121 06:55:18.561368 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:18 crc kubenswrapper[4893]: I0121 06:55:18.561409 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:18 crc kubenswrapper[4893]: I0121 06:55:18.561420 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:18 crc kubenswrapper[4893]: I0121 06:55:18.561436 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:18 crc kubenswrapper[4893]: I0121 06:55:18.561447 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:18Z","lastTransitionTime":"2026-01-21T06:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:18 crc kubenswrapper[4893]: I0121 06:55:18.580553 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:55:18 crc kubenswrapper[4893]: I0121 06:55:18.580632 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:55:18 crc kubenswrapper[4893]: I0121 06:55:18.580558 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:55:18 crc kubenswrapper[4893]: E0121 06:55:18.580698 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:55:18 crc kubenswrapper[4893]: E0121 06:55:18.580801 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:55:18 crc kubenswrapper[4893]: E0121 06:55:18.581005 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:55:18 crc kubenswrapper[4893]: I0121 06:55:18.663965 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:18 crc kubenswrapper[4893]: I0121 06:55:18.664271 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:18 crc kubenswrapper[4893]: I0121 06:55:18.664386 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:18 crc kubenswrapper[4893]: I0121 06:55:18.664557 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:18 crc kubenswrapper[4893]: I0121 06:55:18.664695 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:18Z","lastTransitionTime":"2026-01-21T06:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:18 crc kubenswrapper[4893]: I0121 06:55:18.767831 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:18 crc kubenswrapper[4893]: I0121 06:55:18.767880 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:18 crc kubenswrapper[4893]: I0121 06:55:18.767889 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:18 crc kubenswrapper[4893]: I0121 06:55:18.767913 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:18 crc kubenswrapper[4893]: I0121 06:55:18.767922 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:18Z","lastTransitionTime":"2026-01-21T06:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:18 crc kubenswrapper[4893]: I0121 06:55:18.820717 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 13:54:19.057513154 +0000 UTC Jan 21 06:55:18 crc kubenswrapper[4893]: I0121 06:55:18.870256 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:18 crc kubenswrapper[4893]: I0121 06:55:18.870315 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:18 crc kubenswrapper[4893]: I0121 06:55:18.870336 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:18 crc kubenswrapper[4893]: I0121 06:55:18.870359 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:18 crc kubenswrapper[4893]: I0121 06:55:18.870375 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:18Z","lastTransitionTime":"2026-01-21T06:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:18 crc kubenswrapper[4893]: I0121 06:55:18.973357 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:18 crc kubenswrapper[4893]: I0121 06:55:18.973439 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:18 crc kubenswrapper[4893]: I0121 06:55:18.973468 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:18 crc kubenswrapper[4893]: I0121 06:55:18.973501 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:18 crc kubenswrapper[4893]: I0121 06:55:18.973523 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:18Z","lastTransitionTime":"2026-01-21T06:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.088190 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.088369 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.088399 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.088442 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.088466 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:19Z","lastTransitionTime":"2026-01-21T06:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.192038 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.192096 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.192109 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.192133 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.192147 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:19Z","lastTransitionTime":"2026-01-21T06:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.294992 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.295041 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.295052 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.295072 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.295086 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:19Z","lastTransitionTime":"2026-01-21T06:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.397646 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.397743 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.397753 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.397778 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.397789 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:19Z","lastTransitionTime":"2026-01-21T06:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.499799 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.499846 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.499865 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.499880 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.499890 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:19Z","lastTransitionTime":"2026-01-21T06:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.580303 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:55:19 crc kubenswrapper[4893]: E0121 06:55:19.580491 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rc5gb" podUID="e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8" Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.602420 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.602485 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.602505 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.602534 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.602553 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:19Z","lastTransitionTime":"2026-01-21T06:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.602536 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h28gn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef1a4b3d1dc6d23382f8cbbc07674981a9fb90c5068318d8f78e87b0af85b5ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc1e630d2e854e97d3e156ca2c28de365e095aaf1fe7b6779d2a6b938c51024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cc1e630d2e854e97d3e156ca2c28de365e095aaf1fe7b6779d2a6b938c51024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h28gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:19Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.617158 4893 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-42mq5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc8e905-b368-49e8-adfa-31890665e5ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49cefc1948611ccad178b25e80e75e81bdf1b4b578d3fb58fa7c342d22debadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-42mq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:19Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.635290 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5dc99ccba68df748aa327298285fec6936c75a3327906d9c789bf75c04815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59520d6be8547ef44262866e4c11b1ae43ae8ef545545a93c291f5e238718a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hg78p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:19Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.661180 4893 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6719fb30-da06-4964-b730-09e444618d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14703fb64cc13f6d04b021d4d9de4505e58912bc747450c747dcb5cb53431ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14703fb64cc13f6d04b021d4d9de4505e58912bc747450c747dcb5cb53431ab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:55:15Z\\\",\\\"message\\\":\\\"b00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 06:55:14.977977 6525 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 06:55:14.978018 6525 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 06:55:14.975782 6525 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-wlrc6 after 0 failed attempt(s)\\\\nI0121 06:55:14.984317 6525 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-wlrc6\\\\nI0121 06:55:14.984373 6525 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-m8k4g\\\\nI0121 06:55:14.984399 6525 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-m8k4g\\\\nI0121 06:55:14.9757\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:55:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qzsg6_openshift-ovn-kubernetes(6719fb30-da06-4964-b730-09e444618d94)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzsg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:19Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.686146 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2101f59b-4610-4451-83eb-86fe80385cf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a82b561fe0d124a785d8417b0f810757464a5ccc70c032a46eb0a4ad932939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2a508699e746bc42337b9e10d1cb94b36eb53292a5ca91de2e8f03eb8f671c\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf06f9b5e844685f04ee12cbf239e285f1597f6a3c6444a4160596392905c4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2417cb0495ebd48a0bf9f8e46971fdbd70fd7e7c312741cead38fec69d1d972\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:54:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 06:54:40.367563 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 06:54:40.368234 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:54:40.369436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4080492758/tls.crt::/tmp/serving-cert-4080492758/tls.key\\\\\\\"\\\\nI0121 06:54:40.606405 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:54:40.609631 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:54:40.609649 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:54:40.609684 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:54:40.609691 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:54:40.617391 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 06:54:40.617410 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617413 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617418 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:54:40.617421 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:54:40.617423 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:54:40.617426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 06:54:40.617614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 06:54:40.618646 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf70c5621061fc94a32901eb6f15a0d15b2ceba333d27cf88624bf9aa4ebe82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:19Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.701798 4893 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"077e47b3-6224-4749-9710-d2b308b43208\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa06c3d835def34e52c4a9b4b87d9dc8998cdefbb5eaf7c8046bf263857ef8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e698ff120a5858fa787a65c1bdaa3966dcb8974df9cbca40470f6ec58bca5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://553f6c2b8ff41184065bcf707d657326891027d0c5b8390ce50f53cdfa654d2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30521319002f52220ec6c1e4c92862f5a81e1dca
ce01f4a4474e3a2441b955c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:19Z is after 2025-08-24T17:21:41Z"
Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.707237 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.707302 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.707316 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.707360 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.707375 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:19Z","lastTransitionTime":"2026-01-21T06:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.714386 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:19Z is after 2025-08-24T17:21:41Z"
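Every status-patch failure in this stretch has the same root cause, spelled out at the end of each entry: the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 is serving a certificate that expired on 2025-08-24T17:21:41Z, months before the node's clock of 2026-01-21. A minimal sketch of how to confirm that from the node, assuming only that the endpoint above is reachable and Go is available:

package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Dial the webhook endpoint named in the errors above. InsecureSkipVerify
	// is deliberate: the goal is to read the certificate that fails
	// verification, not to trust it.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Println("subject:  ", cert.Subject)
	fmt.Println("notBefore:", cert.NotBefore.UTC().Format(time.RFC3339))
	fmt.Println("notAfter: ", cert.NotAfter.UTC().Format(time.RFC3339))
	if time.Now().After(cert.NotAfter) {
		fmt.Println("certificate is expired, matching the x509 error above")
	}
}

The same check can be done with openssl s_client; either way, each patch attempt dies in the webhook's TLS handshake before the API server ever applies the status update, which is why the kubelet keeps retrying the identical patches below.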
Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.725903 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p7vw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bace7a0-7349-45d1-a407-d64a31a0d41c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f00d5d862b54660a5df58a9c9df0b42a453f6990789e83d5e6f67aab68471665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v88cx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecad777c0a42352ca734f5f85952ab369e5cc132f06f748983d7c11949f0fe58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v88cx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p7vw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:19Z is after 2025-08-24T17:21:41Z" Jan 21 
06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.740003 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:19Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.753471 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rc5gb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jprb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jprb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rc5gb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:19Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.770186 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac506126-772e-4100-98f9-91c4b32882bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93c89f8db799b46df74cb753f3f21321f420d4fe1976b120ea4aa2853fbf7047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a132569453fd7635474ec4fcb0eab4aad6349e34c6d9e3bc92182433a587bfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07ce83655f22f9db0e6743147fdbde2adc1e02a0b8010cd04f6007f986cf63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29f211094b3236070df769a82ecc2ff2b03c7a44dc4af0484e4ca3b35037621\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29f211094b3236070df769a82ecc2ff2b03c7a44dc4af0484e4ca3b35037621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:19Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.788062 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f8eaf9a35d64680bb488050b8821c821635ec7bc1f53bdcd5bb3f5f4bfead3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:19Z is after 
2025-08-24T17:21:41Z"
Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.806832 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:19Z is after 2025-08-24T17:21:41Z"
Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.809539 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.809615 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.809629 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.809649 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.809662 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:19Z","lastTransitionTime":"2026-01-21T06:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.821384 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 23:11:43.994259964 +0000 UTC
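The certificate_manager.go entry directly above concerns a different certificate: the kubelet's own serving cert (kubernetes.io/kubelet-serving) is valid until 2026-02-24, but its jittered rotation deadline of 2025-12-27 is already in the past at the log's clock of 2026-01-21, so rotation is due immediately (the deadline is re-randomized on each pass; a later entry below shows 2026-01-06). Both timestamps are Go's default time.Time formatting, so they parse back with the stock layout; a small sketch, with the values copied from the entry above:

package main

import (
	"fmt"
	"time"
)

// Layout for Go's default time.Time formatting as printed in the log,
// e.g. "2025-12-27 23:11:43.994259964 +0000 UTC" (fractional seconds optional).
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func main() {
	// Errors ignored for brevity; the inputs are fixed strings from the log.
	expiration, _ := time.Parse(layout, "2026-02-24 05:53:03 +0000 UTC")
	deadline, _ := time.Parse(layout, "2025-12-27 23:11:43.994259964 +0000 UTC")
	now, _ := time.Parse(time.RFC3339, "2026-01-21T06:55:19Z") // clock of the entry above

	fmt.Println("serving cert still valid:", now.Before(expiration)) // true
	fmt.Println("rotation overdue:        ", now.After(deadline))    // true: rotate now
}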
Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.821600 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m8k4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f4a3074a4406cdbdf07c7289f9304d66e2b84b46bf0ac9c6aadf31817539dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2qn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m8k4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:19Z is after 2025-08-24T17:21:41Z"
Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.837561 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ee491ea29d016cb1b74fc66b386aa8056d1d8b3c7ad207cf329749db2b4d638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e705e9b341a3c711cf78ffd1fde692a9517b06fcdcfc2b96543d826c72c5484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:19Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.853606 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1095483a1c6cc4597500607b4423c12c3fc03500c2f3b8f3fc5fc6eae8c34d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:19Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.867466 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlrc6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e26ce1b-e6f7-4612-aa11-69ad21c97870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64b144aa65cc6fbbe03fe4268155648a64e7360a0415e11a86fbc0373af5a4d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j65k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlrc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:19Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.911992 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.912018 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.912025 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.912038 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:19 crc kubenswrapper[4893]: I0121 06:55:19.912047 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:19Z","lastTransitionTime":"2026-01-21T06:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:20 crc kubenswrapper[4893]: I0121 06:55:20.014355 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:20 crc kubenswrapper[4893]: I0121 06:55:20.014423 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:20 crc kubenswrapper[4893]: I0121 06:55:20.014443 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:20 crc kubenswrapper[4893]: I0121 06:55:20.014468 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:20 crc kubenswrapper[4893]: I0121 06:55:20.014486 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:20Z","lastTransitionTime":"2026-01-21T06:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:20 crc kubenswrapper[4893]: I0121 06:55:20.119490 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:20 crc kubenswrapper[4893]: I0121 06:55:20.119619 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:20 crc kubenswrapper[4893]: I0121 06:55:20.119652 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:20 crc kubenswrapper[4893]: I0121 06:55:20.119719 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:20 crc kubenswrapper[4893]: I0121 06:55:20.119755 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:20Z","lastTransitionTime":"2026-01-21T06:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:20 crc kubenswrapper[4893]: I0121 06:55:20.222368 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:20 crc kubenswrapper[4893]: I0121 06:55:20.222415 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:20 crc kubenswrapper[4893]: I0121 06:55:20.222424 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:20 crc kubenswrapper[4893]: I0121 06:55:20.222439 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:20 crc kubenswrapper[4893]: I0121 06:55:20.222451 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:20Z","lastTransitionTime":"2026-01-21T06:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:20 crc kubenswrapper[4893]: I0121 06:55:20.327065 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:20 crc kubenswrapper[4893]: I0121 06:55:20.327157 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:20 crc kubenswrapper[4893]: I0121 06:55:20.327182 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:20 crc kubenswrapper[4893]: I0121 06:55:20.327208 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:20 crc kubenswrapper[4893]: I0121 06:55:20.327229 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:20Z","lastTransitionTime":"2026-01-21T06:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:20 crc kubenswrapper[4893]: I0121 06:55:20.430538 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:20 crc kubenswrapper[4893]: I0121 06:55:20.430580 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:20 crc kubenswrapper[4893]: I0121 06:55:20.430591 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:20 crc kubenswrapper[4893]: I0121 06:55:20.430612 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:20 crc kubenswrapper[4893]: I0121 06:55:20.430624 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:20Z","lastTransitionTime":"2026-01-21T06:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:20 crc kubenswrapper[4893]: I0121 06:55:20.540407 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:20 crc kubenswrapper[4893]: I0121 06:55:20.540454 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:20 crc kubenswrapper[4893]: I0121 06:55:20.540512 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:20 crc kubenswrapper[4893]: I0121 06:55:20.540533 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:20 crc kubenswrapper[4893]: I0121 06:55:20.540545 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:20Z","lastTransitionTime":"2026-01-21T06:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:20 crc kubenswrapper[4893]: I0121 06:55:20.580306 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:55:20 crc kubenswrapper[4893]: I0121 06:55:20.580420 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:55:20 crc kubenswrapper[4893]: I0121 06:55:20.580463 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:55:20 crc kubenswrapper[4893]: E0121 06:55:20.580661 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:55:20 crc kubenswrapper[4893]: E0121 06:55:20.580969 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:55:20 crc kubenswrapper[4893]: E0121 06:55:20.581038 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 06:55:20 crc kubenswrapper[4893]: I0121 06:55:20.644155 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:55:20 crc kubenswrapper[4893]: I0121 06:55:20.644191 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:55:20 crc kubenswrapper[4893]: I0121 06:55:20.644200 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:55:20 crc kubenswrapper[4893]: I0121 06:55:20.644213 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:55:20 crc kubenswrapper[4893]: I0121 06:55:20.644230 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:20Z","lastTransitionTime":"2026-01-21T06:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
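From here to the end of the capture, the log is dominated by one five-entry block (four NodeHas*/NodeNotReady events from kubelet_node_status.go:724 plus one Ready=False condition write from setters.go:603) repeating roughly every 100 ms. When eyeballing stops scaling, a few lines of Go can collapse a stream like this by klog call site; a rough sketch (the regular expression is an assumption fitted to the prefix format visible here, not to klog's spec):

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches the klog prefix as it appears in these journal lines, e.g.
// I0121 06:55:20.644230 4893 setters.go:603] "Node became not ready" ...
var klogRe = regexp.MustCompile(`([IWE])(\d{4} \d{2}:\d{2}:\d{2}\.\d+)\s+\d+\s+([\w.]+:\d+)\] (.*)`)

func main() {
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // status-patch entries exceed the default 64 KiB token limit
	for sc.Scan() {
		if m := klogRe.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[3]]++ // aggregate by source file:line
		}
	}
	for site, n := range counts {
		fmt.Printf("%6d %s\n", n, site)
	}
}

Fed journalctl -u kubelet output (one entry per line), this would print a count per call site, letting the two certificate problems stand out from the node-status chatter.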
Jan 21 06:55:20 crc kubenswrapper[4893]: I0121 06:55:20.746801 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:55:20 crc kubenswrapper[4893]: I0121 06:55:20.746895 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:55:20 crc kubenswrapper[4893]: I0121 06:55:20.746966 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:55:20 crc kubenswrapper[4893]: I0121 06:55:20.747001 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:55:20 crc kubenswrapper[4893]: I0121 06:55:20.747026 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:20Z","lastTransitionTime":"2026-01-21T06:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:55:20 crc kubenswrapper[4893]: I0121 06:55:20.821854 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 13:33:51.19073956 +0000 UTC
Jan 21 06:55:20 crc kubenswrapper[4893]: I0121 06:55:20.850411 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:55:20 crc kubenswrapper[4893]: I0121 06:55:20.850461 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:55:20 crc kubenswrapper[4893]: I0121 06:55:20.850478 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:55:20 crc kubenswrapper[4893]: I0121 06:55:20.850503 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:55:20 crc kubenswrapper[4893]: I0121 06:55:20.850524 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:20Z","lastTransitionTime":"2026-01-21T06:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
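The Ready=False condition being rewritten above is the kubelet relaying CRI-O's network readiness check: no file in /etc/kubernetes/cni/net.d/ means no default CNI network, and on OpenShift that file only appears once ovnkube-node is up, which is itself wedged behind the expired webhook certificate. A minimal sketch of the same directory check, assuming the path from the message and the extensions libcni conventionally loads:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // path taken from the kubelet message above
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}
	found := false
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // extensions the CNI config loader conventionally accepts
			fmt.Println("found CNI config:", e.Name())
			found = true
		}
	}
	if !found {
		fmt.Println("no CNI configuration file present; matches NetworkPluginNotReady above")
	}
}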
Jan 21 06:55:20 crc kubenswrapper[4893]: I0121 06:55:20.954292 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:55:20 crc kubenswrapper[4893]: I0121 06:55:20.954346 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:55:20 crc kubenswrapper[4893]: I0121 06:55:20.954358 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:55:20 crc kubenswrapper[4893]: I0121 06:55:20.954380 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:55:20 crc kubenswrapper[4893]: I0121 06:55:20.954394 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:20Z","lastTransitionTime":"2026-01-21T06:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:55:21 crc kubenswrapper[4893]: I0121 06:55:21.057012 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:55:21 crc kubenswrapper[4893]: I0121 06:55:21.057068 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:55:21 crc kubenswrapper[4893]: I0121 06:55:21.057085 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:55:21 crc kubenswrapper[4893]: I0121 06:55:21.057108 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:55:21 crc kubenswrapper[4893]: I0121 06:55:21.057128 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:21Z","lastTransitionTime":"2026-01-21T06:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:55:21 crc kubenswrapper[4893]: I0121 06:55:21.160231 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:55:21 crc kubenswrapper[4893]: I0121 06:55:21.160263 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:55:21 crc kubenswrapper[4893]: I0121 06:55:21.160270 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:55:21 crc kubenswrapper[4893]: I0121 06:55:21.160284 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:55:21 crc kubenswrapper[4893]: I0121 06:55:21.160292 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:21Z","lastTransitionTime":"2026-01-21T06:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:55:21 crc kubenswrapper[4893]: I0121 06:55:21.262745 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:55:21 crc kubenswrapper[4893]: I0121 06:55:21.262789 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:55:21 crc kubenswrapper[4893]: I0121 06:55:21.262814 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:55:21 crc kubenswrapper[4893]: I0121 06:55:21.262831 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:55:21 crc kubenswrapper[4893]: I0121 06:55:21.262840 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:21Z","lastTransitionTime":"2026-01-21T06:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:21 crc kubenswrapper[4893]: I0121 06:55:21.365268 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:21 crc kubenswrapper[4893]: I0121 06:55:21.365641 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:21 crc kubenswrapper[4893]: I0121 06:55:21.365749 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:21 crc kubenswrapper[4893]: I0121 06:55:21.365926 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:21 crc kubenswrapper[4893]: I0121 06:55:21.366037 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:21Z","lastTransitionTime":"2026-01-21T06:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:21 crc kubenswrapper[4893]: I0121 06:55:21.468856 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:21 crc kubenswrapper[4893]: I0121 06:55:21.468909 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:21 crc kubenswrapper[4893]: I0121 06:55:21.468928 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:21 crc kubenswrapper[4893]: I0121 06:55:21.468950 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:21 crc kubenswrapper[4893]: I0121 06:55:21.468967 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:21Z","lastTransitionTime":"2026-01-21T06:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:21 crc kubenswrapper[4893]: I0121 06:55:21.571805 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:21 crc kubenswrapper[4893]: I0121 06:55:21.571860 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:21 crc kubenswrapper[4893]: I0121 06:55:21.571873 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:21 crc kubenswrapper[4893]: I0121 06:55:21.571893 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:21 crc kubenswrapper[4893]: I0121 06:55:21.571906 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:21Z","lastTransitionTime":"2026-01-21T06:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:21 crc kubenswrapper[4893]: I0121 06:55:21.580228 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:55:21 crc kubenswrapper[4893]: E0121 06:55:21.580366 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rc5gb" podUID="e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8" Jan 21 06:55:21 crc kubenswrapper[4893]: I0121 06:55:21.674944 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:21 crc kubenswrapper[4893]: I0121 06:55:21.674991 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:21 crc kubenswrapper[4893]: I0121 06:55:21.675006 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:21 crc kubenswrapper[4893]: I0121 06:55:21.675025 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:21 crc kubenswrapper[4893]: I0121 06:55:21.675035 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:21Z","lastTransitionTime":"2026-01-21T06:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:21 crc kubenswrapper[4893]: I0121 06:55:21.776878 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:21 crc kubenswrapper[4893]: I0121 06:55:21.776917 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:21 crc kubenswrapper[4893]: I0121 06:55:21.776926 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:21 crc kubenswrapper[4893]: I0121 06:55:21.776969 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:21 crc kubenswrapper[4893]: I0121 06:55:21.776981 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:21Z","lastTransitionTime":"2026-01-21T06:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:21 crc kubenswrapper[4893]: I0121 06:55:21.822452 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 14:15:51.454366627 +0000 UTC Jan 21 06:55:21 crc kubenswrapper[4893]: I0121 06:55:21.880384 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:21 crc kubenswrapper[4893]: I0121 06:55:21.880433 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:21 crc kubenswrapper[4893]: I0121 06:55:21.880443 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:21 crc kubenswrapper[4893]: I0121 06:55:21.880456 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:21 crc kubenswrapper[4893]: I0121 06:55:21.880465 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:21Z","lastTransitionTime":"2026-01-21T06:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:21 crc kubenswrapper[4893]: I0121 06:55:21.982365 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:21 crc kubenswrapper[4893]: I0121 06:55:21.982409 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:21 crc kubenswrapper[4893]: I0121 06:55:21.982418 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:21 crc kubenswrapper[4893]: I0121 06:55:21.982434 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:21 crc kubenswrapper[4893]: I0121 06:55:21.982444 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:21Z","lastTransitionTime":"2026-01-21T06:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.060829 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.060868 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.060877 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.060891 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.060900 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:22Z","lastTransitionTime":"2026-01-21T06:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:22 crc kubenswrapper[4893]: E0121 06:55:22.079498 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"15608b71-024b-43f0-a54d-3ca7890a281b\\\",\\\"systemUUID\\\":\\\"d58a57b5-ddc5-4868-b863-d910bc33033d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:22Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.083356 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.083387 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.083396 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.083413 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.083424 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:22Z","lastTransitionTime":"2026-01-21T06:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:22 crc kubenswrapper[4893]: E0121 06:55:22.098969 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"15608b71-024b-43f0-a54d-3ca7890a281b\\\",\\\"systemUUID\\\":\\\"d58a57b5-ddc5-4868-b863-d910bc33033d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:22Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.102913 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.102968 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
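The status patch above fails because the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a certificate that expired on 2025-08-24T17:21:41Z, while the node's clock reads 2026-01-21. A minimal Go sketch for inspecting the served certificate's validity window from the node follows; the address is taken from the log, everything else is plain crypto/tls usage and not part of the kubelet:

```go
// checkcert.go - print the validity window of the certificate served on
// 127.0.0.1:9743 (the webhook endpoint named in the error above).
// InsecureSkipVerify is deliberate: normal verification would abort the
// handshake on the expired certificate before we could inspect it.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("handshake failed: %v", err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))
	if time.Now().After(cert.NotAfter) {
		fmt.Println("certificate is EXPIRED, matching the x509 error in the log")
	}
}
```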
event="NodeHasNoDiskPressure" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.102991 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.103015 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.103034 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:22Z","lastTransitionTime":"2026-01-21T06:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:22 crc kubenswrapper[4893]: E0121 06:55:22.118029 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"15608b71-024b-43f0-a54d-3ca7890a281b\\\",\\\"systemUUID\\\":\\\"d58a57b5-ddc5-4868-b863-d910bc33033d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:22Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.121932 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.121984 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
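The Ready=False condition that keeps being recorded traces back to one root message: no CNI configuration file in /etc/kubernetes/cni/net.d/. A small Go sketch that approximates this readiness check by looking for network config files in that directory; the extension list (.conf, .conflist, .json) mirrors what libcni accepts and is an assumption here, not something stated in this log:

```go
// cnicheck.go - approximate the readiness check behind the repeating
// "no CNI configuration file in /etc/kubernetes/cni/net.d/" message:
// the directory must contain at least one network config file.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // taken from the log message
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", confDir, err)
		return
	}
	var found []string
	for _, e := range entries {
		// Assumed extension set, modeled on what libcni loads.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			found = append(found, e.Name())
		}
	}
	if len(found) == 0 {
		fmt.Println("no CNI configuration file found: node will stay NotReady")
		return
	}
	fmt.Printf("CNI config present: %v\n", found)
}
```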
event="NodeHasNoDiskPressure" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.121995 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.122012 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.122025 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:22Z","lastTransitionTime":"2026-01-21T06:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:22 crc kubenswrapper[4893]: E0121 06:55:22.188231 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"15608b71-024b-43f0-a54d-3ca7890a281b\\\",\\\"systemUUID\\\":\\\"d58a57b5-ddc5-4868-b863-d910bc33033d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:22Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.192649 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.192706 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.192717 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.192747 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.192760 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:22Z","lastTransitionTime":"2026-01-21T06:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:22 crc kubenswrapper[4893]: E0121 06:55:22.207308 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"15608b71-024b-43f0-a54d-3ca7890a281b\\\",\\\"systemUUID\\\":\\\"d58a57b5-ddc5-4868-b863-d910bc33033d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:22Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:22 crc kubenswrapper[4893]: E0121 06:55:22.207431 4893 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.209251 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.209373 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.209452 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.209529 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.209620 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:22Z","lastTransitionTime":"2026-01-21T06:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.312656 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.312727 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.312736 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.312750 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.312759 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:22Z","lastTransitionTime":"2026-01-21T06:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.416097 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.416175 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.416193 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.416214 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.416226 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:22Z","lastTransitionTime":"2026-01-21T06:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.521407 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.521446 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.521455 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.521470 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.521479 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:22Z","lastTransitionTime":"2026-01-21T06:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.580115 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.580115 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:55:22 crc kubenswrapper[4893]: E0121 06:55:22.580272 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.580152 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:55:22 crc kubenswrapper[4893]: E0121 06:55:22.580385 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:55:22 crc kubenswrapper[4893]: E0121 06:55:22.580579 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.624408 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.624460 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.624468 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.624484 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.624494 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:22Z","lastTransitionTime":"2026-01-21T06:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.727561 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.727639 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.727657 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.727720 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.727743 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:22Z","lastTransitionTime":"2026-01-21T06:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.823117 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 21:53:16.746262128 +0000 UTC Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.831652 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.831758 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.831777 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.831803 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.831819 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:22Z","lastTransitionTime":"2026-01-21T06:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.934525 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.934591 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.934611 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.934639 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:22 crc kubenswrapper[4893]: I0121 06:55:22.934656 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:22Z","lastTransitionTime":"2026-01-21T06:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:23 crc kubenswrapper[4893]: I0121 06:55:23.037533 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:23 crc kubenswrapper[4893]: I0121 06:55:23.037610 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:23 crc kubenswrapper[4893]: I0121 06:55:23.037634 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:23 crc kubenswrapper[4893]: I0121 06:55:23.037665 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:23 crc kubenswrapper[4893]: I0121 06:55:23.037733 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:23Z","lastTransitionTime":"2026-01-21T06:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:23 crc kubenswrapper[4893]: I0121 06:55:23.140432 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:23 crc kubenswrapper[4893]: I0121 06:55:23.140487 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:23 crc kubenswrapper[4893]: I0121 06:55:23.140507 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:23 crc kubenswrapper[4893]: I0121 06:55:23.140532 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:23 crc kubenswrapper[4893]: I0121 06:55:23.140553 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:23Z","lastTransitionTime":"2026-01-21T06:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:23 crc kubenswrapper[4893]: I0121 06:55:23.243053 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:23 crc kubenswrapper[4893]: I0121 06:55:23.243435 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:23 crc kubenswrapper[4893]: I0121 06:55:23.243447 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:23 crc kubenswrapper[4893]: I0121 06:55:23.243465 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:23 crc kubenswrapper[4893]: I0121 06:55:23.243477 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:23Z","lastTransitionTime":"2026-01-21T06:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:23 crc kubenswrapper[4893]: I0121 06:55:23.350689 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:23 crc kubenswrapper[4893]: I0121 06:55:23.350724 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:23 crc kubenswrapper[4893]: I0121 06:55:23.350733 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:23 crc kubenswrapper[4893]: I0121 06:55:23.350747 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:23 crc kubenswrapper[4893]: I0121 06:55:23.350755 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:23Z","lastTransitionTime":"2026-01-21T06:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:23 crc kubenswrapper[4893]: I0121 06:55:23.528446 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:23 crc kubenswrapper[4893]: I0121 06:55:23.528493 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:23 crc kubenswrapper[4893]: I0121 06:55:23.528505 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:23 crc kubenswrapper[4893]: I0121 06:55:23.528525 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:23 crc kubenswrapper[4893]: I0121 06:55:23.528540 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:23Z","lastTransitionTime":"2026-01-21T06:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:23 crc kubenswrapper[4893]: I0121 06:55:23.580365 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:55:23 crc kubenswrapper[4893]: E0121 06:55:23.580835 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rc5gb" podUID="e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8" Jan 21 06:55:23 crc kubenswrapper[4893]: I0121 06:55:23.631118 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:23 crc kubenswrapper[4893]: I0121 06:55:23.631172 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:23 crc kubenswrapper[4893]: I0121 06:55:23.631183 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:23 crc kubenswrapper[4893]: I0121 06:55:23.631201 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:23 crc kubenswrapper[4893]: I0121 06:55:23.631216 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:23Z","lastTransitionTime":"2026-01-21T06:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:23 crc kubenswrapper[4893]: I0121 06:55:23.733846 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:23 crc kubenswrapper[4893]: I0121 06:55:23.733874 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:23 crc kubenswrapper[4893]: I0121 06:55:23.733882 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:23 crc kubenswrapper[4893]: I0121 06:55:23.733894 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:23 crc kubenswrapper[4893]: I0121 06:55:23.733902 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:23Z","lastTransitionTime":"2026-01-21T06:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:23 crc kubenswrapper[4893]: I0121 06:55:23.824167 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 20:30:12.332232935 +0000 UTC Jan 21 06:55:23 crc kubenswrapper[4893]: I0121 06:55:23.836179 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:23 crc kubenswrapper[4893]: I0121 06:55:23.836220 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:23 crc kubenswrapper[4893]: I0121 06:55:23.836230 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:23 crc kubenswrapper[4893]: I0121 06:55:23.836248 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:23 crc kubenswrapper[4893]: I0121 06:55:23.836259 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:23Z","lastTransitionTime":"2026-01-21T06:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:23 crc kubenswrapper[4893]: I0121 06:55:23.938458 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:23 crc kubenswrapper[4893]: I0121 06:55:23.938528 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:23 crc kubenswrapper[4893]: I0121 06:55:23.938540 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:23 crc kubenswrapper[4893]: I0121 06:55:23.938579 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:23 crc kubenswrapper[4893]: I0121 06:55:23.938593 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:23Z","lastTransitionTime":"2026-01-21T06:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:24 crc kubenswrapper[4893]: I0121 06:55:24.041077 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:24 crc kubenswrapper[4893]: I0121 06:55:24.041138 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:24 crc kubenswrapper[4893]: I0121 06:55:24.041149 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:24 crc kubenswrapper[4893]: I0121 06:55:24.041163 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:24 crc kubenswrapper[4893]: I0121 06:55:24.041172 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:24Z","lastTransitionTime":"2026-01-21T06:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:24 crc kubenswrapper[4893]: I0121 06:55:24.143141 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:24 crc kubenswrapper[4893]: I0121 06:55:24.143184 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:24 crc kubenswrapper[4893]: I0121 06:55:24.143193 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:24 crc kubenswrapper[4893]: I0121 06:55:24.143246 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:24 crc kubenswrapper[4893]: I0121 06:55:24.143259 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:24Z","lastTransitionTime":"2026-01-21T06:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:24 crc kubenswrapper[4893]: I0121 06:55:24.245919 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:24 crc kubenswrapper[4893]: I0121 06:55:24.245955 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:24 crc kubenswrapper[4893]: I0121 06:55:24.245964 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:24 crc kubenswrapper[4893]: I0121 06:55:24.245980 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:24 crc kubenswrapper[4893]: I0121 06:55:24.245991 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:24Z","lastTransitionTime":"2026-01-21T06:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:24 crc kubenswrapper[4893]: I0121 06:55:24.348644 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:24 crc kubenswrapper[4893]: I0121 06:55:24.348723 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:24 crc kubenswrapper[4893]: I0121 06:55:24.348735 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:24 crc kubenswrapper[4893]: I0121 06:55:24.348750 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:24 crc kubenswrapper[4893]: I0121 06:55:24.348760 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:24Z","lastTransitionTime":"2026-01-21T06:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:24 crc kubenswrapper[4893]: I0121 06:55:24.451619 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:24 crc kubenswrapper[4893]: I0121 06:55:24.451707 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:24 crc kubenswrapper[4893]: I0121 06:55:24.451722 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:24 crc kubenswrapper[4893]: I0121 06:55:24.451743 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:24 crc kubenswrapper[4893]: I0121 06:55:24.451774 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:24Z","lastTransitionTime":"2026-01-21T06:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:24 crc kubenswrapper[4893]: I0121 06:55:24.554459 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:24 crc kubenswrapper[4893]: I0121 06:55:24.554494 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:24 crc kubenswrapper[4893]: I0121 06:55:24.554502 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:24 crc kubenswrapper[4893]: I0121 06:55:24.554515 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:24 crc kubenswrapper[4893]: I0121 06:55:24.554525 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:24Z","lastTransitionTime":"2026-01-21T06:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:24 crc kubenswrapper[4893]: I0121 06:55:24.580301 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:55:24 crc kubenswrapper[4893]: I0121 06:55:24.580375 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:55:24 crc kubenswrapper[4893]: I0121 06:55:24.580312 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:55:24 crc kubenswrapper[4893]: E0121 06:55:24.580419 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:55:24 crc kubenswrapper[4893]: E0121 06:55:24.580551 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:55:24 crc kubenswrapper[4893]: E0121 06:55:24.580799 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:55:24 crc kubenswrapper[4893]: I0121 06:55:24.667964 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:24 crc kubenswrapper[4893]: I0121 06:55:24.668003 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:24 crc kubenswrapper[4893]: I0121 06:55:24.668014 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:24 crc kubenswrapper[4893]: I0121 06:55:24.668032 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:24 crc kubenswrapper[4893]: I0121 06:55:24.668045 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:24Z","lastTransitionTime":"2026-01-21T06:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:24 crc kubenswrapper[4893]: I0121 06:55:24.769922 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:24 crc kubenswrapper[4893]: I0121 06:55:24.769956 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:24 crc kubenswrapper[4893]: I0121 06:55:24.769968 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:24 crc kubenswrapper[4893]: I0121 06:55:24.769984 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:24 crc kubenswrapper[4893]: I0121 06:55:24.769997 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:24Z","lastTransitionTime":"2026-01-21T06:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:24 crc kubenswrapper[4893]: I0121 06:55:24.824765 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 21:31:01.40357879 +0000 UTC Jan 21 06:55:24 crc kubenswrapper[4893]: I0121 06:55:24.872944 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:24 crc kubenswrapper[4893]: I0121 06:55:24.873014 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:24 crc kubenswrapper[4893]: I0121 06:55:24.873029 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:24 crc kubenswrapper[4893]: I0121 06:55:24.873050 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:24 crc kubenswrapper[4893]: I0121 06:55:24.873063 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:24Z","lastTransitionTime":"2026-01-21T06:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:24 crc kubenswrapper[4893]: I0121 06:55:24.975322 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:24 crc kubenswrapper[4893]: I0121 06:55:24.975378 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:24 crc kubenswrapper[4893]: I0121 06:55:24.975395 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:24 crc kubenswrapper[4893]: I0121 06:55:24.975419 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:24 crc kubenswrapper[4893]: I0121 06:55:24.975435 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:24Z","lastTransitionTime":"2026-01-21T06:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:25 crc kubenswrapper[4893]: I0121 06:55:25.078156 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:25 crc kubenswrapper[4893]: I0121 06:55:25.078219 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:25 crc kubenswrapper[4893]: I0121 06:55:25.078253 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:25 crc kubenswrapper[4893]: I0121 06:55:25.078272 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:25 crc kubenswrapper[4893]: I0121 06:55:25.078284 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:25Z","lastTransitionTime":"2026-01-21T06:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:25 crc kubenswrapper[4893]: I0121 06:55:25.181070 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:25 crc kubenswrapper[4893]: I0121 06:55:25.181112 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:25 crc kubenswrapper[4893]: I0121 06:55:25.182004 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:25 crc kubenswrapper[4893]: I0121 06:55:25.182119 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:25 crc kubenswrapper[4893]: I0121 06:55:25.182168 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:25Z","lastTransitionTime":"2026-01-21T06:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:25 crc kubenswrapper[4893]: I0121 06:55:25.284465 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:25 crc kubenswrapper[4893]: I0121 06:55:25.284522 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:25 crc kubenswrapper[4893]: I0121 06:55:25.284542 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:25 crc kubenswrapper[4893]: I0121 06:55:25.284564 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:25 crc kubenswrapper[4893]: I0121 06:55:25.284582 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:25Z","lastTransitionTime":"2026-01-21T06:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:25 crc kubenswrapper[4893]: I0121 06:55:25.387224 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:25 crc kubenswrapper[4893]: I0121 06:55:25.387274 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:25 crc kubenswrapper[4893]: I0121 06:55:25.387286 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:25 crc kubenswrapper[4893]: I0121 06:55:25.387303 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:25 crc kubenswrapper[4893]: I0121 06:55:25.387322 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:25Z","lastTransitionTime":"2026-01-21T06:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:25 crc kubenswrapper[4893]: I0121 06:55:25.490008 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:25 crc kubenswrapper[4893]: I0121 06:55:25.490062 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:25 crc kubenswrapper[4893]: I0121 06:55:25.490076 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:25 crc kubenswrapper[4893]: I0121 06:55:25.490097 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:25 crc kubenswrapper[4893]: I0121 06:55:25.490109 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:25Z","lastTransitionTime":"2026-01-21T06:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:25 crc kubenswrapper[4893]: I0121 06:55:25.580814 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:55:25 crc kubenswrapper[4893]: E0121 06:55:25.580956 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rc5gb" podUID="e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8" Jan 21 06:55:25 crc kubenswrapper[4893]: I0121 06:55:25.592251 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:25 crc kubenswrapper[4893]: I0121 06:55:25.592306 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:25 crc kubenswrapper[4893]: I0121 06:55:25.592321 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:25 crc kubenswrapper[4893]: I0121 06:55:25.592339 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:25 crc kubenswrapper[4893]: I0121 06:55:25.592351 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:25Z","lastTransitionTime":"2026-01-21T06:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:25 crc kubenswrapper[4893]: I0121 06:55:25.695172 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:25 crc kubenswrapper[4893]: I0121 06:55:25.695211 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:25 crc kubenswrapper[4893]: I0121 06:55:25.695222 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:25 crc kubenswrapper[4893]: I0121 06:55:25.695238 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:25 crc kubenswrapper[4893]: I0121 06:55:25.695250 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:25Z","lastTransitionTime":"2026-01-21T06:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:25 crc kubenswrapper[4893]: I0121 06:55:25.798024 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:25 crc kubenswrapper[4893]: I0121 06:55:25.798067 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:25 crc kubenswrapper[4893]: I0121 06:55:25.798078 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:25 crc kubenswrapper[4893]: I0121 06:55:25.798094 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:25 crc kubenswrapper[4893]: I0121 06:55:25.798105 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:25Z","lastTransitionTime":"2026-01-21T06:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:25 crc kubenswrapper[4893]: I0121 06:55:25.825533 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 13:52:39.87972769 +0000 UTC Jan 21 06:55:25 crc kubenswrapper[4893]: I0121 06:55:25.900893 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:25 crc kubenswrapper[4893]: I0121 06:55:25.900949 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:25 crc kubenswrapper[4893]: I0121 06:55:25.901003 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:25 crc kubenswrapper[4893]: I0121 06:55:25.901035 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:25 crc kubenswrapper[4893]: I0121 06:55:25.901054 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:25Z","lastTransitionTime":"2026-01-21T06:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:26 crc kubenswrapper[4893]: I0121 06:55:26.002922 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:26 crc kubenswrapper[4893]: I0121 06:55:26.002979 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:26 crc kubenswrapper[4893]: I0121 06:55:26.003015 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:26 crc kubenswrapper[4893]: I0121 06:55:26.003032 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:26 crc kubenswrapper[4893]: I0121 06:55:26.003044 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:26Z","lastTransitionTime":"2026-01-21T06:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:26 crc kubenswrapper[4893]: I0121 06:55:26.105441 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:26 crc kubenswrapper[4893]: I0121 06:55:26.105482 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:26 crc kubenswrapper[4893]: I0121 06:55:26.105494 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:26 crc kubenswrapper[4893]: I0121 06:55:26.105513 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:26 crc kubenswrapper[4893]: I0121 06:55:26.105532 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:26Z","lastTransitionTime":"2026-01-21T06:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:26 crc kubenswrapper[4893]: I0121 06:55:26.208276 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:26 crc kubenswrapper[4893]: I0121 06:55:26.208330 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:26 crc kubenswrapper[4893]: I0121 06:55:26.208346 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:26 crc kubenswrapper[4893]: I0121 06:55:26.208370 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:26 crc kubenswrapper[4893]: I0121 06:55:26.208386 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:26Z","lastTransitionTime":"2026-01-21T06:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:26 crc kubenswrapper[4893]: I0121 06:55:26.310901 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:26 crc kubenswrapper[4893]: I0121 06:55:26.310938 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:26 crc kubenswrapper[4893]: I0121 06:55:26.310946 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:26 crc kubenswrapper[4893]: I0121 06:55:26.310959 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:26 crc kubenswrapper[4893]: I0121 06:55:26.310981 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:26Z","lastTransitionTime":"2026-01-21T06:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:26 crc kubenswrapper[4893]: I0121 06:55:26.413864 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:26 crc kubenswrapper[4893]: I0121 06:55:26.413924 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:26 crc kubenswrapper[4893]: I0121 06:55:26.413938 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:26 crc kubenswrapper[4893]: I0121 06:55:26.413955 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:26 crc kubenswrapper[4893]: I0121 06:55:26.413966 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:26Z","lastTransitionTime":"2026-01-21T06:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:26 crc kubenswrapper[4893]: I0121 06:55:26.515923 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:26 crc kubenswrapper[4893]: I0121 06:55:26.515994 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:26 crc kubenswrapper[4893]: I0121 06:55:26.516014 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:26 crc kubenswrapper[4893]: I0121 06:55:26.516044 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:26 crc kubenswrapper[4893]: I0121 06:55:26.516067 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:26Z","lastTransitionTime":"2026-01-21T06:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:26 crc kubenswrapper[4893]: I0121 06:55:26.580862 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:55:26 crc kubenswrapper[4893]: I0121 06:55:26.580902 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:55:26 crc kubenswrapper[4893]: I0121 06:55:26.580982 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:55:26 crc kubenswrapper[4893]: E0121 06:55:26.581025 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:55:26 crc kubenswrapper[4893]: E0121 06:55:26.581177 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:55:26 crc kubenswrapper[4893]: E0121 06:55:26.581250 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:55:26 crc kubenswrapper[4893]: I0121 06:55:26.618548 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:26 crc kubenswrapper[4893]: I0121 06:55:26.618597 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:26 crc kubenswrapper[4893]: I0121 06:55:26.618612 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:26 crc kubenswrapper[4893]: I0121 06:55:26.618632 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:26 crc kubenswrapper[4893]: I0121 06:55:26.618645 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:26Z","lastTransitionTime":"2026-01-21T06:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:26 crc kubenswrapper[4893]: I0121 06:55:26.720964 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:26 crc kubenswrapper[4893]: I0121 06:55:26.720992 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:26 crc kubenswrapper[4893]: I0121 06:55:26.721000 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:26 crc kubenswrapper[4893]: I0121 06:55:26.721012 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:26 crc kubenswrapper[4893]: I0121 06:55:26.721022 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:26Z","lastTransitionTime":"2026-01-21T06:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:26 crc kubenswrapper[4893]: I0121 06:55:26.823986 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:26 crc kubenswrapper[4893]: I0121 06:55:26.824047 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:26 crc kubenswrapper[4893]: I0121 06:55:26.824066 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:26 crc kubenswrapper[4893]: I0121 06:55:26.824090 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:26 crc kubenswrapper[4893]: I0121 06:55:26.824108 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:26Z","lastTransitionTime":"2026-01-21T06:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:26 crc kubenswrapper[4893]: I0121 06:55:26.826319 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 14:38:43.018899455 +0000 UTC Jan 21 06:55:26 crc kubenswrapper[4893]: I0121 06:55:26.926545 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:26 crc kubenswrapper[4893]: I0121 06:55:26.926599 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:26 crc kubenswrapper[4893]: I0121 06:55:26.926609 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:26 crc kubenswrapper[4893]: I0121 06:55:26.926622 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:26 crc kubenswrapper[4893]: I0121 06:55:26.926631 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:26Z","lastTransitionTime":"2026-01-21T06:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:27 crc kubenswrapper[4893]: I0121 06:55:27.029963 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:27 crc kubenswrapper[4893]: I0121 06:55:27.030025 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:27 crc kubenswrapper[4893]: I0121 06:55:27.030036 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:27 crc kubenswrapper[4893]: I0121 06:55:27.030052 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:27 crc kubenswrapper[4893]: I0121 06:55:27.030061 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:27Z","lastTransitionTime":"2026-01-21T06:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:27 crc kubenswrapper[4893]: I0121 06:55:27.131816 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:27 crc kubenswrapper[4893]: I0121 06:55:27.131864 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:27 crc kubenswrapper[4893]: I0121 06:55:27.131874 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:27 crc kubenswrapper[4893]: I0121 06:55:27.131890 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:27 crc kubenswrapper[4893]: I0121 06:55:27.131907 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:27Z","lastTransitionTime":"2026-01-21T06:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:27 crc kubenswrapper[4893]: I0121 06:55:27.234346 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:27 crc kubenswrapper[4893]: I0121 06:55:27.234408 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:27 crc kubenswrapper[4893]: I0121 06:55:27.234418 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:27 crc kubenswrapper[4893]: I0121 06:55:27.234434 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:27 crc kubenswrapper[4893]: I0121 06:55:27.234443 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:27Z","lastTransitionTime":"2026-01-21T06:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:27 crc kubenswrapper[4893]: I0121 06:55:27.336843 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:27 crc kubenswrapper[4893]: I0121 06:55:27.336887 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:27 crc kubenswrapper[4893]: I0121 06:55:27.336896 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:27 crc kubenswrapper[4893]: I0121 06:55:27.336912 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:27 crc kubenswrapper[4893]: I0121 06:55:27.336924 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:27Z","lastTransitionTime":"2026-01-21T06:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:27 crc kubenswrapper[4893]: I0121 06:55:27.438925 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:27 crc kubenswrapper[4893]: I0121 06:55:27.438966 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:27 crc kubenswrapper[4893]: I0121 06:55:27.438978 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:27 crc kubenswrapper[4893]: I0121 06:55:27.438994 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:27 crc kubenswrapper[4893]: I0121 06:55:27.439007 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:27Z","lastTransitionTime":"2026-01-21T06:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:27 crc kubenswrapper[4893]: I0121 06:55:27.541557 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:27 crc kubenswrapper[4893]: I0121 06:55:27.541628 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:27 crc kubenswrapper[4893]: I0121 06:55:27.541647 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:27 crc kubenswrapper[4893]: I0121 06:55:27.541695 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:27 crc kubenswrapper[4893]: I0121 06:55:27.541715 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:27Z","lastTransitionTime":"2026-01-21T06:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:27 crc kubenswrapper[4893]: I0121 06:55:27.580033 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:55:27 crc kubenswrapper[4893]: E0121 06:55:27.580183 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rc5gb" podUID="e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8" Jan 21 06:55:27 crc kubenswrapper[4893]: I0121 06:55:27.644461 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:27 crc kubenswrapper[4893]: I0121 06:55:27.644497 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:27 crc kubenswrapper[4893]: I0121 06:55:27.644505 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:27 crc kubenswrapper[4893]: I0121 06:55:27.644519 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:27 crc kubenswrapper[4893]: I0121 06:55:27.644527 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:27Z","lastTransitionTime":"2026-01-21T06:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:27 crc kubenswrapper[4893]: I0121 06:55:27.746870 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:27 crc kubenswrapper[4893]: I0121 06:55:27.746910 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:27 crc kubenswrapper[4893]: I0121 06:55:27.746923 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:27 crc kubenswrapper[4893]: I0121 06:55:27.746940 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:27 crc kubenswrapper[4893]: I0121 06:55:27.746951 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:27Z","lastTransitionTime":"2026-01-21T06:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:27 crc kubenswrapper[4893]: I0121 06:55:27.827002 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 11:01:10.416140013 +0000 UTC Jan 21 06:55:27 crc kubenswrapper[4893]: I0121 06:55:27.848993 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:27 crc kubenswrapper[4893]: I0121 06:55:27.849033 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:27 crc kubenswrapper[4893]: I0121 06:55:27.849042 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:27 crc kubenswrapper[4893]: I0121 06:55:27.849058 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:27 crc kubenswrapper[4893]: I0121 06:55:27.849067 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:27Z","lastTransitionTime":"2026-01-21T06:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:27 crc kubenswrapper[4893]: I0121 06:55:27.951693 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:27 crc kubenswrapper[4893]: I0121 06:55:27.951748 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:27 crc kubenswrapper[4893]: I0121 06:55:27.951767 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:27 crc kubenswrapper[4893]: I0121 06:55:27.951789 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:27 crc kubenswrapper[4893]: I0121 06:55:27.951804 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:27Z","lastTransitionTime":"2026-01-21T06:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:28 crc kubenswrapper[4893]: I0121 06:55:28.054564 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:28 crc kubenswrapper[4893]: I0121 06:55:28.054617 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:28 crc kubenswrapper[4893]: I0121 06:55:28.054629 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:28 crc kubenswrapper[4893]: I0121 06:55:28.054646 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:28 crc kubenswrapper[4893]: I0121 06:55:28.054686 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:28Z","lastTransitionTime":"2026-01-21T06:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:28 crc kubenswrapper[4893]: I0121 06:55:28.221375 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:28 crc kubenswrapper[4893]: I0121 06:55:28.221412 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:28 crc kubenswrapper[4893]: I0121 06:55:28.221423 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:28 crc kubenswrapper[4893]: I0121 06:55:28.221439 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:28 crc kubenswrapper[4893]: I0121 06:55:28.221450 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:28Z","lastTransitionTime":"2026-01-21T06:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:28 crc kubenswrapper[4893]: I0121 06:55:28.323268 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:28 crc kubenswrapper[4893]: I0121 06:55:28.323304 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:28 crc kubenswrapper[4893]: I0121 06:55:28.323312 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:28 crc kubenswrapper[4893]: I0121 06:55:28.323326 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:28 crc kubenswrapper[4893]: I0121 06:55:28.323335 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:28Z","lastTransitionTime":"2026-01-21T06:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:28 crc kubenswrapper[4893]: I0121 06:55:28.426496 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:28 crc kubenswrapper[4893]: I0121 06:55:28.426547 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:28 crc kubenswrapper[4893]: I0121 06:55:28.426555 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:28 crc kubenswrapper[4893]: I0121 06:55:28.426570 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:28 crc kubenswrapper[4893]: I0121 06:55:28.426580 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:28Z","lastTransitionTime":"2026-01-21T06:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:28 crc kubenswrapper[4893]: I0121 06:55:28.529657 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:28 crc kubenswrapper[4893]: I0121 06:55:28.529756 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:28 crc kubenswrapper[4893]: I0121 06:55:28.529771 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:28 crc kubenswrapper[4893]: I0121 06:55:28.529789 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:28 crc kubenswrapper[4893]: I0121 06:55:28.529801 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:28Z","lastTransitionTime":"2026-01-21T06:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:28 crc kubenswrapper[4893]: I0121 06:55:28.580259 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:55:28 crc kubenswrapper[4893]: I0121 06:55:28.580394 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:55:28 crc kubenswrapper[4893]: E0121 06:55:28.580395 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:55:28 crc kubenswrapper[4893]: I0121 06:55:28.580464 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:55:28 crc kubenswrapper[4893]: E0121 06:55:28.580588 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:55:28 crc kubenswrapper[4893]: E0121 06:55:28.580604 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:55:28 crc kubenswrapper[4893]: I0121 06:55:28.599560 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 21 06:55:28 crc kubenswrapper[4893]: I0121 06:55:28.632103 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:28 crc kubenswrapper[4893]: I0121 06:55:28.632165 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:28 crc kubenswrapper[4893]: I0121 06:55:28.632176 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:28 crc kubenswrapper[4893]: I0121 06:55:28.632197 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:28 crc kubenswrapper[4893]: I0121 06:55:28.632209 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:28Z","lastTransitionTime":"2026-01-21T06:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:28 crc kubenswrapper[4893]: I0121 06:55:28.734976 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:28 crc kubenswrapper[4893]: I0121 06:55:28.735077 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:28 crc kubenswrapper[4893]: I0121 06:55:28.735101 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:28 crc kubenswrapper[4893]: I0121 06:55:28.735123 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:28 crc kubenswrapper[4893]: I0121 06:55:28.735134 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:28Z","lastTransitionTime":"2026-01-21T06:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:28 crc kubenswrapper[4893]: I0121 06:55:28.827839 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 10:21:03.433909444 +0000 UTC Jan 21 06:55:28 crc kubenswrapper[4893]: I0121 06:55:28.837982 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:28 crc kubenswrapper[4893]: I0121 06:55:28.838009 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:28 crc kubenswrapper[4893]: I0121 06:55:28.838020 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:28 crc kubenswrapper[4893]: I0121 06:55:28.838035 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:28 crc kubenswrapper[4893]: I0121 06:55:28.838047 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:28Z","lastTransitionTime":"2026-01-21T06:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:28 crc kubenswrapper[4893]: I0121 06:55:28.940498 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:28 crc kubenswrapper[4893]: I0121 06:55:28.940541 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:28 crc kubenswrapper[4893]: I0121 06:55:28.940564 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:28 crc kubenswrapper[4893]: I0121 06:55:28.940583 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:28 crc kubenswrapper[4893]: I0121 06:55:28.940596 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:28Z","lastTransitionTime":"2026-01-21T06:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.043322 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.043361 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.043370 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.043384 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.043395 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:29Z","lastTransitionTime":"2026-01-21T06:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.146121 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.146188 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.146201 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.146217 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.146226 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:29Z","lastTransitionTime":"2026-01-21T06:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.284770 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.284823 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.284837 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.284855 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.284868 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:29Z","lastTransitionTime":"2026-01-21T06:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.386890 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.386924 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.386932 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.386948 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.386958 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:29Z","lastTransitionTime":"2026-01-21T06:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.489261 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.489296 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.489305 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.489318 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.489328 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:29Z","lastTransitionTime":"2026-01-21T06:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.580267 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:55:29 crc kubenswrapper[4893]: E0121 06:55:29.580655 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rc5gb" podUID="e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8" Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.580939 4893 scope.go:117] "RemoveContainer" containerID="e14703fb64cc13f6d04b021d4d9de4505e58912bc747450c747dcb5cb53431ab" Jan 21 06:55:29 crc kubenswrapper[4893]: E0121 06:55:29.581136 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qzsg6_openshift-ovn-kubernetes(6719fb30-da06-4964-b730-09e444618d94)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" podUID="6719fb30-da06-4964-b730-09e444618d94" Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.591660 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.591719 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.591731 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.591748 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.591759 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:29Z","lastTransitionTime":"2026-01-21T06:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.593958 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ee491ea29d016cb1b74fc66b386aa8056d1d8b3c7ad207cf329749db2b4d638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e705e9b341a3c711cf78ffd1fde692a9517b06fcdcfc2b96543d826c72c5484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:29Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.605274 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1095483a1c6cc4597500607b4423c12c3fc03500c2f3b8f3fc5fc6eae8c34d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:29Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.614969 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlrc6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e26ce1b-e6f7-4612-aa11-69ad21c97870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64b144aa65cc6fbbe03fe4268155648a64e7360a0415e11a86fbc0373af5a4d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j65k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlrc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:29Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.628371 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:29Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.642450 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h28gn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef1a4b3d1dc6d23382f8cbbc07674981a9fb90c5068318d8f78e87b0af85b5ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc1e630d2e854e97d3e156ca2c28de365e095aaf1fe7b6779d2a6b938c51024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cc1e630d2e854e97d3e156ca2c28de365e095aaf1fe7b6779d2a6b938c51024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h28gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:29Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.653034 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-42mq5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc8e905-b368-49e8-adfa-31890665e5ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49cefc1948611ccad178b25e80e75e81bdf1b4b578d3fb58fa7c342d22debadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-42mq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:29Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.665422 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5dc99ccba68df748aa327298285fec6936c75a3327906d9c789bf75c04815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59520d6be8547ef44262866e4c11b1ae43ae8ef545545a93c291f5e238718a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hg78p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:29Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.685260 4893 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6719fb30-da06-4964-b730-09e444618d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14703fb64cc13f6d04b021d4d9de4505e58912bc747450c747dcb5cb53431ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14703fb64cc13f6d04b021d4d9de4505e58912bc747450c747dcb5cb53431ab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:55:15Z\\\",\\\"message\\\":\\\"b00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 06:55:14.977977 6525 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 06:55:14.978018 6525 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 06:55:14.975782 6525 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-wlrc6 after 0 failed attempt(s)\\\\nI0121 06:55:14.984317 6525 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-wlrc6\\\\nI0121 06:55:14.984373 6525 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-m8k4g\\\\nI0121 06:55:14.984399 6525 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-m8k4g\\\\nI0121 06:55:14.9757\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:55:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qzsg6_openshift-ovn-kubernetes(6719fb30-da06-4964-b730-09e444618d94)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzsg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:29Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.696903 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.696953 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.696965 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.696984 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.696996 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:29Z","lastTransitionTime":"2026-01-21T06:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.699094 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2101f59b-4610-4451-83eb-86fe80385cf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a82b561fe0d124a785d8417b0f810757464a5ccc70c032a46eb0a4ad932939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2a508699e746bc42337b9e10d1cb94b36eb53292a5ca91de2e8f03eb8f671c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf06f9b5e844685f04ee12cbf239e285f1597f6a3c6444a4160596392905c4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2417cb0495ebd48a0bf9f8e46971fdbd70fd7e7c312741cead38fec69d1d972\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:54:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 06:54:40.367563 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 06:54:40.368234 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:54:40.369436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4080492758/tls.crt::/tmp/serving-cert-4080492758/tls.key\\\\\\\"\\\\nI0121 06:54:40.606405 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:54:40.609631 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:54:40.609649 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:54:40.609684 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:54:40.609691 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:54:40.617391 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 06:54:40.617410 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617413 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617418 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:54:40.617421 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:54:40.617423 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:54:40.617426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 06:54:40.617614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 06:54:40.618646 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf70c5621061fc94a32901eb6f15a0d15b2ceba333d27cf88624bf9aa4ebe82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:29Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.712270 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077e47b3-6224-4749-9710-d2b308b43208\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa06c3d835def34e52c4a9b4b87d9dc8998cdefbb5eaf7c8046bf263857ef8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e698ff120a5858fa787a65c1bdaa3966dcb8974df9cbca40470f6ec58bca5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://553f6c2b8ff41184065bcf707d657326891027d0c5b8390ce50f53cdfa654d2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30521319002f52220ec6c1e4c92862f5a81e1dcace01f4a4474e3a2441b955c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:29Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.723111 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50340650-5fb6-4aca-b931-b2ae5e1754b4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c820663d2de329853dbd3b67c91a5491f9000bc6f1f9cd5143be1c50d06279aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9235f82557cdaf86d385d1660b19da09a65edcdffa915b36f633d597599f05ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318
bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9235f82557cdaf86d385d1660b19da09a65edcdffa915b36f633d597599f05ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:29Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.732591 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p7vw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bace7a0-7349-45d1-a407-d64a31a0d41c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f00d5d862b54660a5df58a9c9df0b42a453f6990789e83d5e6f67aab68471665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v88cx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecad777c0a42352ca734f5f85952ab369e5cc132f06f748983d7c11949f0fe58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v88cx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p7vw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:29Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.748171 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:29Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.764170 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m8k4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f4a3074a4406cdbdf07c7289f9304d66e2b84b46bf0ac9c6aadf31817539dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2qn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m8k4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:29Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.775094 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rc5gb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jprb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jprb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rc5gb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:29Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.787172 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac506126-772e-4100-98f9-91c4b32882bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93c89f8db799b46df74cb753f3f21321f420d4fe1976b120ea4aa2853fbf7047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a132569453fd7635474ec4fcb0eab4aad6349e34c6d9e3bc92182433a587bfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07ce83655f22f9db0e6743147fdbde2adc1e02a0b8010cd04f6007f986cf63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29f211094b3236070df769a82ecc2ff2b03c7a44dc4af0484e4ca3b35037621\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29f211094b3236070df769a82ecc2ff2b03c7a44dc4af0484e4ca3b35037621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:29Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.799830 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.799944 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.799961 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.799980 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.799991 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:29Z","lastTransitionTime":"2026-01-21T06:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.801922 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f8eaf9a35d64680bb488050b8821c821635ec7bc1f53bdcd5bb3f5f4bfead3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:29Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.818241 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:29Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.828373 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 16:17:28.662237838 +0000 UTC Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.836066 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8-metrics-certs\") pod \"network-metrics-daemon-rc5gb\" (UID: \"e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8\") " pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:55:29 crc kubenswrapper[4893]: E0121 06:55:29.836216 4893 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 06:55:29 crc kubenswrapper[4893]: E0121 06:55:29.836301 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8-metrics-certs podName:e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8 nodeName:}" failed. No retries permitted until 2026-01-21 06:56:01.83628107 +0000 UTC m=+103.066626972 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8-metrics-certs") pod "network-metrics-daemon-rc5gb" (UID: "e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.902740 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.902783 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.902802 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.902832 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:29 crc kubenswrapper[4893]: I0121 06:55:29.902846 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:29Z","lastTransitionTime":"2026-01-21T06:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:30 crc kubenswrapper[4893]: I0121 06:55:30.005738 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:30 crc kubenswrapper[4893]: I0121 06:55:30.005791 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:30 crc kubenswrapper[4893]: I0121 06:55:30.005804 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:30 crc kubenswrapper[4893]: I0121 06:55:30.005825 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:30 crc kubenswrapper[4893]: I0121 06:55:30.005837 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:30Z","lastTransitionTime":"2026-01-21T06:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:30 crc kubenswrapper[4893]: I0121 06:55:30.109549 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:30 crc kubenswrapper[4893]: I0121 06:55:30.109579 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:30 crc kubenswrapper[4893]: I0121 06:55:30.109587 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:30 crc kubenswrapper[4893]: I0121 06:55:30.109600 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:30 crc kubenswrapper[4893]: I0121 06:55:30.109609 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:30Z","lastTransitionTime":"2026-01-21T06:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:30 crc kubenswrapper[4893]: I0121 06:55:30.212154 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:30 crc kubenswrapper[4893]: I0121 06:55:30.212199 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:30 crc kubenswrapper[4893]: I0121 06:55:30.212208 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:30 crc kubenswrapper[4893]: I0121 06:55:30.212224 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:30 crc kubenswrapper[4893]: I0121 06:55:30.212237 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:30Z","lastTransitionTime":"2026-01-21T06:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:30 crc kubenswrapper[4893]: I0121 06:55:30.315422 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:30 crc kubenswrapper[4893]: I0121 06:55:30.315468 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:30 crc kubenswrapper[4893]: I0121 06:55:30.315479 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:30 crc kubenswrapper[4893]: I0121 06:55:30.315527 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:30 crc kubenswrapper[4893]: I0121 06:55:30.315540 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:30Z","lastTransitionTime":"2026-01-21T06:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:30 crc kubenswrapper[4893]: I0121 06:55:30.418490 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:30 crc kubenswrapper[4893]: I0121 06:55:30.418590 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:30 crc kubenswrapper[4893]: I0121 06:55:30.418609 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:30 crc kubenswrapper[4893]: I0121 06:55:30.418634 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:30 crc kubenswrapper[4893]: I0121 06:55:30.418655 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:30Z","lastTransitionTime":"2026-01-21T06:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:30 crc kubenswrapper[4893]: I0121 06:55:30.522107 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:30 crc kubenswrapper[4893]: I0121 06:55:30.522157 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:30 crc kubenswrapper[4893]: I0121 06:55:30.522168 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:30 crc kubenswrapper[4893]: I0121 06:55:30.522187 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:30 crc kubenswrapper[4893]: I0121 06:55:30.522199 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:30Z","lastTransitionTime":"2026-01-21T06:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:30 crc kubenswrapper[4893]: I0121 06:55:30.581062 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:55:30 crc kubenswrapper[4893]: I0121 06:55:30.581071 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:55:30 crc kubenswrapper[4893]: E0121 06:55:30.581279 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:55:30 crc kubenswrapper[4893]: I0121 06:55:30.581088 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:55:30 crc kubenswrapper[4893]: E0121 06:55:30.581338 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:55:30 crc kubenswrapper[4893]: E0121 06:55:30.581477 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:55:30 crc kubenswrapper[4893]: I0121 06:55:30.624481 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:30 crc kubenswrapper[4893]: I0121 06:55:30.624524 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:30 crc kubenswrapper[4893]: I0121 06:55:30.624534 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:30 crc kubenswrapper[4893]: I0121 06:55:30.624547 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:30 crc kubenswrapper[4893]: I0121 06:55:30.624557 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:30Z","lastTransitionTime":"2026-01-21T06:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:30 crc kubenswrapper[4893]: I0121 06:55:30.727112 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:30 crc kubenswrapper[4893]: I0121 06:55:30.727152 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:30 crc kubenswrapper[4893]: I0121 06:55:30.727164 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:30 crc kubenswrapper[4893]: I0121 06:55:30.727180 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:30 crc kubenswrapper[4893]: I0121 06:55:30.727195 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:30Z","lastTransitionTime":"2026-01-21T06:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:30 crc kubenswrapper[4893]: I0121 06:55:30.828524 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 13:29:29.679899828 +0000 UTC Jan 21 06:55:30 crc kubenswrapper[4893]: I0121 06:55:30.830180 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:30 crc kubenswrapper[4893]: I0121 06:55:30.830229 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:30 crc kubenswrapper[4893]: I0121 06:55:30.830246 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:30 crc kubenswrapper[4893]: I0121 06:55:30.830269 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:30 crc kubenswrapper[4893]: I0121 06:55:30.830284 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:30Z","lastTransitionTime":"2026-01-21T06:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:30 crc kubenswrapper[4893]: I0121 06:55:30.932340 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:30 crc kubenswrapper[4893]: I0121 06:55:30.932380 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:30 crc kubenswrapper[4893]: I0121 06:55:30.932390 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:30 crc kubenswrapper[4893]: I0121 06:55:30.932406 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:30 crc kubenswrapper[4893]: I0121 06:55:30.932415 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:30Z","lastTransitionTime":"2026-01-21T06:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:31 crc kubenswrapper[4893]: I0121 06:55:31.035431 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:31 crc kubenswrapper[4893]: I0121 06:55:31.035476 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:31 crc kubenswrapper[4893]: I0121 06:55:31.035490 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:31 crc kubenswrapper[4893]: I0121 06:55:31.035506 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:31 crc kubenswrapper[4893]: I0121 06:55:31.035519 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:31Z","lastTransitionTime":"2026-01-21T06:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:31 crc kubenswrapper[4893]: I0121 06:55:31.137422 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:31 crc kubenswrapper[4893]: I0121 06:55:31.137471 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:31 crc kubenswrapper[4893]: I0121 06:55:31.137483 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:31 crc kubenswrapper[4893]: I0121 06:55:31.137502 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:31 crc kubenswrapper[4893]: I0121 06:55:31.137514 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:31Z","lastTransitionTime":"2026-01-21T06:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:31 crc kubenswrapper[4893]: I0121 06:55:31.239899 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:31 crc kubenswrapper[4893]: I0121 06:55:31.239939 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:31 crc kubenswrapper[4893]: I0121 06:55:31.239947 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:31 crc kubenswrapper[4893]: I0121 06:55:31.239962 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:31 crc kubenswrapper[4893]: I0121 06:55:31.239970 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:31Z","lastTransitionTime":"2026-01-21T06:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:31 crc kubenswrapper[4893]: I0121 06:55:31.342637 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:31 crc kubenswrapper[4893]: I0121 06:55:31.342707 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:31 crc kubenswrapper[4893]: I0121 06:55:31.342725 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:31 crc kubenswrapper[4893]: I0121 06:55:31.342744 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:31 crc kubenswrapper[4893]: I0121 06:55:31.342759 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:31Z","lastTransitionTime":"2026-01-21T06:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:31 crc kubenswrapper[4893]: I0121 06:55:31.445294 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:31 crc kubenswrapper[4893]: I0121 06:55:31.445335 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:31 crc kubenswrapper[4893]: I0121 06:55:31.445346 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:31 crc kubenswrapper[4893]: I0121 06:55:31.445363 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:31 crc kubenswrapper[4893]: I0121 06:55:31.445374 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:31Z","lastTransitionTime":"2026-01-21T06:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:31 crc kubenswrapper[4893]: I0121 06:55:31.547973 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:31 crc kubenswrapper[4893]: I0121 06:55:31.548019 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:31 crc kubenswrapper[4893]: I0121 06:55:31.548032 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:31 crc kubenswrapper[4893]: I0121 06:55:31.548049 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:31 crc kubenswrapper[4893]: I0121 06:55:31.548061 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:31Z","lastTransitionTime":"2026-01-21T06:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:31 crc kubenswrapper[4893]: I0121 06:55:31.580852 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:55:31 crc kubenswrapper[4893]: E0121 06:55:31.581064 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rc5gb" podUID="e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8" Jan 21 06:55:31 crc kubenswrapper[4893]: I0121 06:55:31.650147 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:31 crc kubenswrapper[4893]: I0121 06:55:31.650227 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:31 crc kubenswrapper[4893]: I0121 06:55:31.650240 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:31 crc kubenswrapper[4893]: I0121 06:55:31.650260 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:31 crc kubenswrapper[4893]: I0121 06:55:31.650272 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:31Z","lastTransitionTime":"2026-01-21T06:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:31 crc kubenswrapper[4893]: I0121 06:55:31.752327 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:31 crc kubenswrapper[4893]: I0121 06:55:31.752372 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:31 crc kubenswrapper[4893]: I0121 06:55:31.752382 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:31 crc kubenswrapper[4893]: I0121 06:55:31.752399 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:31 crc kubenswrapper[4893]: I0121 06:55:31.752410 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:31Z","lastTransitionTime":"2026-01-21T06:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:31 crc kubenswrapper[4893]: I0121 06:55:31.829176 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 09:23:35.901513778 +0000 UTC Jan 21 06:55:31 crc kubenswrapper[4893]: I0121 06:55:31.855418 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:31 crc kubenswrapper[4893]: I0121 06:55:31.855501 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:31 crc kubenswrapper[4893]: I0121 06:55:31.855525 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:31 crc kubenswrapper[4893]: I0121 06:55:31.855970 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:31 crc kubenswrapper[4893]: I0121 06:55:31.856034 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:31Z","lastTransitionTime":"2026-01-21T06:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:31 crc kubenswrapper[4893]: I0121 06:55:31.959499 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:31 crc kubenswrapper[4893]: I0121 06:55:31.959556 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:31 crc kubenswrapper[4893]: I0121 06:55:31.959578 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:31 crc kubenswrapper[4893]: I0121 06:55:31.959601 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:31 crc kubenswrapper[4893]: I0121 06:55:31.959619 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:31Z","lastTransitionTime":"2026-01-21T06:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.062970 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.063012 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.063021 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.063039 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.063049 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:32Z","lastTransitionTime":"2026-01-21T06:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.165307 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.165351 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.165359 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.165373 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.165383 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:32Z","lastTransitionTime":"2026-01-21T06:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.219875 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.219944 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.219958 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.219974 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.220001 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:32Z","lastTransitionTime":"2026-01-21T06:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:32 crc kubenswrapper[4893]: E0121 06:55:32.237157 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"15608b71-024b-43f0-a54d-3ca7890a281b\\\",\\\"systemUUID\\\":\\\"d58a57b5-ddc5-4868-b863-d910bc33033d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.241977 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.242227 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.242330 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.242421 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.242728 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:32Z","lastTransitionTime":"2026-01-21T06:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:32 crc kubenswrapper[4893]: E0121 06:55:32.257437 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"15608b71-024b-43f0-a54d-3ca7890a281b\\\",\\\"systemUUID\\\":\\\"d58a57b5-ddc5-4868-b863-d910bc33033d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.260953 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.261023 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
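Each of these patch attempts is rejected for the same reason: the serving certificate behind the node.network-node-identity.openshift.io webhook on 127.0.0.1:9743 expired at 2025-08-24T17:21:41Z, months before the node's clock reads 2026-01-21. A quick way to confirm what the kubelet is seeing is to pull the certificate off that listener and read its validity window. The sketch below is an editorial addition, not part of the log; it uses only the Python standard library, must run on the node itself (the listener is loopback-only), and the host/port are taken from the failed Post URL above.

    # Diagnostic sketch (not from the log): fetch the webhook's serving
    # certificate so its notBefore/notAfter window can be inspected.
    # 127.0.0.1:9743 comes from the failed Post URL in the entries above.
    import socket
    import ssl

    HOST, PORT = "127.0.0.1", 9743

    # Verification must be disabled here, otherwise the handshake fails for
    # exactly the reason the kubelet reports (certificate has expired).
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE

    with socket.create_connection((HOST, PORT), timeout=5) as raw:
        with ctx.wrap_socket(raw, server_hostname=HOST) as tls:
            der = tls.getpeercert(binary_form=True)

    # The stdlib hands back DER; emit PEM so it can be piped into
    # `openssl x509 -noout -dates` to print notBefore/notAfter.
    print(ssl.DER_cert_to_PEM_cert(der))

If the certificate is the one being rejected, the notAfter printed by openssl should match the 2025-08-24T17:21:41Z deadline quoted in every retry above, which points at expired cluster-internal certificates rather than at the CNI plugin itself.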
event="NodeHasNoDiskPressure" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.261037 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.261055 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.261066 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:32Z","lastTransitionTime":"2026-01-21T06:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:32 crc kubenswrapper[4893]: E0121 06:55:32.272571 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"15608b71-024b-43f0-a54d-3ca7890a281b\\\",\\\"systemUUID\\\":\\\"d58a57b5-ddc5-4868-b863-d910bc33033d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.276334 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.276378 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
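The same Ready condition rides along as inline JSON in every "Node became not ready" entry. When reading a wrapped capture like this one, it can be easier to parse the condition out and print its fields; a minimal editorial sketch, with the payload string copied verbatim from the entries above:

    # Editorial sketch: parse the condition={...} JSON that the setters.go
    # entries above embed, and print the two fields that matter for triage.
    import json

    condition = json.loads(
        '{"type":"Ready","status":"False",'
        '"lastHeartbeatTime":"2026-01-21T06:55:32Z",'
        '"lastTransitionTime":"2026-01-21T06:55:32Z",'
        '"reason":"KubeletNotReady",'
        '"message":"container runtime network not ready: NetworkReady=false '
        'reason:NetworkPluginNotReady message:Network plugin returns error: '
        'no CNI configuration file in /etc/kubernetes/cni/net.d/. '
        'Has your network provider started?"}'
    )

    print(condition["reason"])   # -> KubeletNotReady
    print(condition["message"])  # -> the CNI error repeated throughout this log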
event="NodeHasNoDiskPressure" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.276387 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.276402 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.276411 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:32Z","lastTransitionTime":"2026-01-21T06:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:32 crc kubenswrapper[4893]: E0121 06:55:32.287735 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"15608b71-024b-43f0-a54d-3ca7890a281b\\\",\\\"systemUUID\\\":\\\"d58a57b5-ddc5-4868-b863-d910bc33033d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.291191 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.291313 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
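Note the shape of the failures: five "Error updating node status, will retry" entries in one burst (06:55:32.237157 through .302977), each preceded by a fresh set of node events. That matches the kubelet retrying the status patch a bounded number of times per sync loop before waiting for the next interval; the upstream kubelet constant nodeStatusUpdateRetry is 5, which agrees with the count here, though the sketch below is only an editorial model of that control flow and the stand-in names are hypothetical.

    # Editorial sketch of the retry pattern visible above.
    # NODE_STATUS_UPDATE_RETRY mirrors the upstream kubelet constant
    # nodeStatusUpdateRetry (5); patch_node_status is a hypothetical
    # stand-in that always fails the way the webhook does in this capture.
    NODE_STATUS_UPDATE_RETRY = 5

    class WebhookRejected(Exception):
        """Stands in for the x509 'certificate has expired' rejection."""

    def patch_node_status() -> None:
        raise WebhookRejected(
            'failed calling webhook "node.network-node-identity.openshift.io"'
        )

    def update_node_status() -> None:
        for _attempt in range(NODE_STATUS_UPDATE_RETRY):
            try:
                patch_node_status()
                return  # success: stop retrying
            except WebhookRejected as err:
                print(f"Error updating node status, will retry: {err}")
        # All attempts failed; nothing more happens until the next sync.
        print("node status update failed after all retries")

    update_node_status()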
event="NodeHasNoDiskPressure" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.291387 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.291462 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.291525 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:32Z","lastTransitionTime":"2026-01-21T06:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:32 crc kubenswrapper[4893]: E0121 06:55:32.302977 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"15608b71-024b-43f0-a54d-3ca7890a281b\\\",\\\"systemUUID\\\":\\\"d58a57b5-ddc5-4868-b863-d910bc33033d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:32 crc kubenswrapper[4893]: E0121 06:55:32.303406 4893 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.304888 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.304941 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.304954 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.305157 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.305178 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:32Z","lastTransitionTime":"2026-01-21T06:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.408040 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.408116 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.408128 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.408144 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.408174 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:32Z","lastTransitionTime":"2026-01-21T06:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.511351 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.511426 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.511443 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.511466 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.511489 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:32Z","lastTransitionTime":"2026-01-21T06:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.580439 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.580457 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.580544 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:55:32 crc kubenswrapper[4893]: E0121 06:55:32.580779 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:55:32 crc kubenswrapper[4893]: E0121 06:55:32.581180 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:55:32 crc kubenswrapper[4893]: E0121 06:55:32.581562 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.614007 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.614043 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.614058 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.614077 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.614092 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:32Z","lastTransitionTime":"2026-01-21T06:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.717081 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.717126 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.717142 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.717163 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.717179 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:32Z","lastTransitionTime":"2026-01-21T06:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.819526 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.819564 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.819576 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.819591 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.819601 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:32Z","lastTransitionTime":"2026-01-21T06:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.830070 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 15:37:46.922848998 +0000 UTC Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.907868 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-m8k4g_ecb64775-90e7-43a2-a5a8-4d73e348dcc4/kube-multus/0.log" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.907923 4893 generic.go:334] "Generic (PLEG): container finished" podID="ecb64775-90e7-43a2-a5a8-4d73e348dcc4" containerID="1f4a3074a4406cdbdf07c7289f9304d66e2b84b46bf0ac9c6aadf31817539dda" exitCode=1 Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.907961 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-m8k4g" event={"ID":"ecb64775-90e7-43a2-a5a8-4d73e348dcc4","Type":"ContainerDied","Data":"1f4a3074a4406cdbdf07c7289f9304d66e2b84b46bf0ac9c6aadf31817539dda"} Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.908345 4893 scope.go:117] "RemoveContainer" containerID="1f4a3074a4406cdbdf07c7289f9304d66e2b84b46bf0ac9c6aadf31817539dda" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.921962 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.921997 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.922005 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.922019 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.922029 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:32Z","lastTransitionTime":"2026-01-21T06:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.922530 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f8eaf9a35d64680bb488050b8821c821635ec7bc1f53bdcd5bb3f5f4bfead3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.935727 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.955272 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m8k4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f4a3074a4406cdbdf07c7289f9304d66e2b84b46bf0ac9c6aadf31817539dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f4a3074a4406cdbdf07c7289f9304d66e2b84b46bf0ac9c6aadf31817539dda\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:55:32Z\\\",\\\"message\\\":\\\"2026-01-21T06:54:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_533ecd9b-559b-4017-8fc4-d42565d39848\\\\n2026-01-21T06:54:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_533ecd9b-559b-4017-8fc4-d42565d39848 to /host/opt/cni/bin/\\\\n2026-01-21T06:54:47Z [verbose] multus-daemon started\\\\n2026-01-21T06:54:47Z [verbose] Readiness Indicator file check\\\\n2026-01-21T06:55:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2qn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-m8k4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.976149 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rc5gb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jprb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jprb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rc5gb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-21T06:55:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:32 crc kubenswrapper[4893]: I0121 06:55:32.996229 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac506126-772e-4100-98f9-91c4b32882bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93c89f8db799b46df74cb753f3f21321f420d4fe1976b120ea4aa2853fbf7047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a132569453fd7635474ec4fcb0eab4aad6349e34c6d9e3bc92182433a587bfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07ce83655f22f9db0e6743147fdbde2adc1e02a0b8010cd04f6007f986cf63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29f211094b3236070df769a82ecc2ff2b03c7a44dc4af0484e4ca3b35037621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29f211094b3236070df769a82ecc2ff2b03c7a44dc4af0484e4ca3b35037621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.013232 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1095483a1c6cc4597500607b4423c12c3fc03500c2f3b8f3fc5fc6eae8c34d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.024490 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.024548 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.024565 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.024588 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.024603 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:33Z","lastTransitionTime":"2026-01-21T06:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.029237 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlrc6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e26ce1b-e6f7-4612-aa11-69ad21c97870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64b144aa65cc6fbbe03fe4268155648a64e7360a0415e11a86fbc0373af5a4d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j65k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlrc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.051005 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ee491ea29d016cb1b74fc66b386aa8056d1d8b3c7ad207cf329749db2b4d638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e705e9b341a3c711cf78ffd1fde692a9517b06fcdcfc2b96543d826c72c5484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.074615 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"077e47b3-6224-4749-9710-d2b308b43208\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa06c3d835def34e52c4a9b4b87d9dc8998cdefbb5eaf7c8046bf263857ef8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e698ff120a5858fa787a65c1bdaa3966dcb8974df9cbca40470f6ec58bca5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://553f6c2b8ff41184065bcf707d657326891027d0c5b8390ce50f53cdfa654d2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30521319002f52220ec6c1e4c92862f5a81e1dcace01f4a4474e3a2441b955c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.091561 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50340650-5fb6-4aca-b931-b2ae5e1754b4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c820663d2de329853dbd3b67c91a5491f9000bc6f1f9cd5143be1c50d06279aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9235f82557cdaf86d385d1660b19da09a65edcdffa915b36f633d597599f05ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9235f82557cdaf86d385d1660b19da09a65edcdffa915b36f633d597599f05ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.112315 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.132631 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.132695 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.132711 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.132732 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.132749 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:33Z","lastTransitionTime":"2026-01-21T06:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.132406 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h28gn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef1a4b3d1dc6d23382f8cbbc07674981a9fb90c5068318d8f78e87b0af85b5ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc1e630d2e854e97d3e156ca2c28de365e095aaf1fe7b6779d2a6b938c51024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cc1e630d2e854e97d3e156ca2c28de365e095aaf1fe7b6779d2a6b938c51024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h28gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.146908 4893 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-42mq5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc8e905-b368-49e8-adfa-31890665e5ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49cefc1948611ccad178b25e80e75e81bdf1b4b578d3fb58fa7c342d22debadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-42mq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.162523 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5dc99ccba68df748aa327298285fec6936c75a3327906d9c789bf75c04815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59520d6be8547ef44262866e4c11b1ae43ae8ef545545a93c291f5e238718a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hg78p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.238902 4893 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.238937 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.238945 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.238960 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.238969 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:33Z","lastTransitionTime":"2026-01-21T06:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.252984 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6719fb30-da06-4964-b730-09e444618d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14703fb64cc13f6d04b021d4d9de4505e58912b
c747450c747dcb5cb53431ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14703fb64cc13f6d04b021d4d9de4505e58912bc747450c747dcb5cb53431ab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:55:15Z\\\",\\\"message\\\":\\\"b00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 06:55:14.977977 6525 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 06:55:14.978018 6525 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 06:55:14.975782 6525 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-wlrc6 after 0 failed attempt(s)\\\\nI0121 06:55:14.984317 6525 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-wlrc6\\\\nI0121 06:55:14.984373 6525 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-m8k4g\\\\nI0121 06:55:14.984399 6525 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-m8k4g\\\\nI0121 06:55:14.9757\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:55:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qzsg6_openshift-ovn-kubernetes(6719fb30-da06-4964-b730-09e444618d94)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzsg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.275144 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2101f59b-4610-4451-83eb-86fe80385cf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a82b561fe0d124a785d8417b0f810757464a5ccc70c032a46eb0a4ad932939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2a508699e746bc42337b9e10d1cb94b36eb53292a5ca91de2e8f03eb8f671c\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf06f9b5e844685f04ee12cbf239e285f1597f6a3c6444a4160596392905c4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2417cb0495ebd48a0bf9f8e46971fdbd70fd7e7c312741cead38fec69d1d972\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:54:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 06:54:40.367563 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 06:54:40.368234 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:54:40.369436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4080492758/tls.crt::/tmp/serving-cert-4080492758/tls.key\\\\\\\"\\\\nI0121 06:54:40.606405 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:54:40.609631 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:54:40.609649 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:54:40.609684 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:54:40.609691 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:54:40.617391 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 06:54:40.617410 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617413 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617418 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:54:40.617421 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:54:40.617423 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:54:40.617426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 06:54:40.617614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 06:54:40.618646 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf70c5621061fc94a32901eb6f15a0d15b2ceba333d27cf88624bf9aa4ebe82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.289874 4893 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p7vw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bace7a0-7349-45d1-a407-d64a31a0d41c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f00d5d862b54660a5df58a9c9df0b42a453f6990789e83d5e6f67aab68471665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v88cx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecad777c0a42352ca734f5f85952ab369e5cc132f06f748983d7c11949f0fe58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v88cx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p7vw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-21T06:55:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.306317 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.343228 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.343279 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.343292 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.343312 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.343326 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:33Z","lastTransitionTime":"2026-01-21T06:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.445631 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.445736 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.445759 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.445786 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.445804 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:33Z","lastTransitionTime":"2026-01-21T06:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.549000 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.549049 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.549060 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.549078 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.549092 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:33Z","lastTransitionTime":"2026-01-21T06:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.581703 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:55:33 crc kubenswrapper[4893]: E0121 06:55:33.581851 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rc5gb" podUID="e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.652168 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.652215 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.652228 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.652246 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.652260 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:33Z","lastTransitionTime":"2026-01-21T06:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.754936 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.754977 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.754988 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.755006 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.755018 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:33Z","lastTransitionTime":"2026-01-21T06:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.830238 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 10:59:16.810233171 +0000 UTC Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.858115 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.858168 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.858183 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.858205 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.858220 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:33Z","lastTransitionTime":"2026-01-21T06:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.912877 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-m8k4g_ecb64775-90e7-43a2-a5a8-4d73e348dcc4/kube-multus/0.log" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.912945 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-m8k4g" event={"ID":"ecb64775-90e7-43a2-a5a8-4d73e348dcc4","Type":"ContainerStarted","Data":"11d8bbd1c92382018299e790a7597f3f588b11c6465db90a876cc98e1d10d4a9"} Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.931120 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ee491ea29d016cb1b74fc66b386aa8056d1d8b3c7ad207cf329749db2b4d638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e705e9b341a3c711cf78ffd1fde692a9517b06fcdcfc2b96543d826c72c5484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.942376 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1095483a1c6cc4597500607b4423c12c3fc03500c2f3b8f3fc5fc6eae8c34d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.952312 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlrc6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e26ce1b-e6f7-4612-aa11-69ad21c97870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64b144aa65cc6fbbe03fe4268155648a64e7360a0415e11a86fbc0373af5a4d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j65k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlrc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.961598 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.961783 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.961902 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.962021 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.962126 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:33Z","lastTransitionTime":"2026-01-21T06:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.965131 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2101f59b-4610-4451-83eb-86fe80385cf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a82b561fe0d124a785d8417b0f810757464a5ccc70c032a46eb0a4ad932939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2a508699e746bc42337b9e10d1cb94b36eb53292a5ca91de2e8f03eb8f671c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf06f9b5e844685f04ee12cbf239e285f1597f6a3c6444a4160596392905c4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2417cb0495ebd48a0bf9f8e46971fdbd70fd7e7c312741cead38fec69d1d972\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:54:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 06:54:40.367563 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 06:54:40.368234 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:54:40.369436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4080492758/tls.crt::/tmp/serving-cert-4080492758/tls.key\\\\\\\"\\\\nI0121 06:54:40.606405 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:54:40.609631 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:54:40.609649 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:54:40.609684 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:54:40.609691 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:54:40.617391 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 06:54:40.617410 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617413 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617418 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:54:40.617421 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:54:40.617423 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:54:40.617426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 06:54:40.617614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 06:54:40.618646 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf70c5621061fc94a32901eb6f15a0d15b2ceba333d27cf88624bf9aa4ebe82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.977569 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077e47b3-6224-4749-9710-d2b308b43208\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa06c3d835def34e52c4a9b4b87d9dc8998cdefbb5eaf7c8046bf263857ef8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e698ff120a5858fa787a65c1bdaa3966dcb8974df9cbca40470f6ec58bca5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://553f6c2b8ff41184065bcf707d657326891027d0c5b8390ce50f53cdfa654d2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30521319002f52220ec6c1e4c92862f5a81e1dcace01f4a4474e3a2441b955c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:33 crc kubenswrapper[4893]: I0121 06:55:33.988274 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50340650-5fb6-4aca-b931-b2ae5e1754b4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c820663d2de329853dbd3b67c91a5491f9000bc6f1f9cd5143be1c50d06279aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9235f82557cdaf86d385d1660b19da09a65edcdffa915b36f633d597599f05ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318
bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9235f82557cdaf86d385d1660b19da09a65edcdffa915b36f633d597599f05ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.001773 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:34Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.014344 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h28gn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef1a4b3d1dc6d23382f8cbbc07674981a9fb90c5068318d8f78e87b0af85b5ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc1e630d2e854e97d3e156ca2c28de365e095aaf1fe7b6779d2a6b938c51024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cc1e630d2e854e97d3e156ca2c28de365e095aaf1fe7b6779d2a6b938c51024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h28gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:34Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.022827 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-42mq5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc8e905-b368-49e8-adfa-31890665e5ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49cefc1948611ccad178b25e80e75e81bdf1b4b578d3fb58fa7c342d22debadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-42mq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:34Z is after 2025-08-24T17:21:41Z" Jan 
21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.031947 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5dc99ccba68df748aa327298285fec6936c75a3327906d9c789bf75c04815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59520d6be8547ef44262866e4c11b1ae43ae8ef545545a93c291f5e238718a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hg78p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:34Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.049969 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6719fb30-da06-4964-b730-09e444618d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14703fb64cc13f6d04b021d4d9de4505e58912bc747450c747dcb5cb53431ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14703fb64cc13f6d04b021d4d9de4505e58912bc747450c747dcb5cb53431ab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:55:15Z\\\",\\\"message\\\":\\\"b00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 06:55:14.977977 6525 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 06:55:14.978018 6525 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 06:55:14.975782 6525 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-wlrc6 after 0 failed attempt(s)\\\\nI0121 06:55:14.984317 6525 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-wlrc6\\\\nI0121 06:55:14.984373 6525 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-m8k4g\\\\nI0121 06:55:14.984399 6525 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-m8k4g\\\\nI0121 
06:55:14.9757\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:55:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qzsg6_openshift-ovn-kubernetes(6719fb30-da06-4964-b730-09e444618d94)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzsg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:34Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.060691 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p7vw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bace7a0-7349-45d1-a407-d64a31a0d41c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f00d5d862b54660a5df58a9c9df0b42a453f6990789e83d5e6f67aab68471665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v88cx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecad777c0a42352ca734f5f85952ab369e5cc132f06f748983d7c11949f0fe58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v88cx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p7vw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:34Z is after 2025-08-24T17:21:41Z" Jan 21 
06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.064707 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.064870 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.064964 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.065054 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.065134 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:34Z","lastTransitionTime":"2026-01-21T06:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.077091 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:34Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.087959 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac506126-772e-4100-98f9-91c4b32882bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93c89f8db799b46df74cb753f3f21321f420d4fe1976b120ea4aa2853fbf7047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a132569453fd7635474ec4fcb0eab4aad6349e34c6d9e3bc92182433a587bfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07ce83655f22f9db0e6743147fdbde2adc1e02a0b8010cd04f6007f986cf63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29f211094b3236070df769a82ecc2ff2b03c7a44dc4af0484e4ca3b35037621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29f211094b3236070df769a82ecc2ff2b03c7a44dc4af0484e4ca3b35037621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:34Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.098935 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f8eaf9a35d64680bb488050b8821c821635ec7bc1f53bdcd5bb3f5f4bfead3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:34Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.110398 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:34Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.125550 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m8k4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d8bbd1c92382018299e790a7597f3f588b11c6465db90a876cc98e1d10d4a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f4a3074a4406cdbdf07c7289f9304d66e2b84b46bf0ac9c6aadf31817539dda\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:55:32Z\\\",\\\"message\\\":\\\"2026-01-21T06:54:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_533ecd9b-559b-4017-8fc4-d42565d39848\\\\n2026-01-21T06:54:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_533ecd9b-559b-4017-8fc4-d42565d39848 to /host/opt/cni/bin/\\\\n2026-01-21T06:54:47Z [verbose] multus-daemon started\\\\n2026-01-21T06:54:47Z [verbose] Readiness Indicator file check\\\\n2026-01-21T06:55:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2qn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m8k4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:34Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.138130 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rc5gb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jprb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jprb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rc5gb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:34Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.167204 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.167254 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.167289 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.167312 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.167324 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:34Z","lastTransitionTime":"2026-01-21T06:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.270125 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.270186 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.270203 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.270230 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.270247 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:34Z","lastTransitionTime":"2026-01-21T06:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.372848 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.372880 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.372891 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.372909 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.372921 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:34Z","lastTransitionTime":"2026-01-21T06:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.476084 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.476143 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.476158 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.476179 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.476194 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:34Z","lastTransitionTime":"2026-01-21T06:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.579073 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.579115 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.579126 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.579141 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.579154 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:34Z","lastTransitionTime":"2026-01-21T06:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.580485 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.580485 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.580594 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:55:34 crc kubenswrapper[4893]: E0121 06:55:34.580709 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:55:34 crc kubenswrapper[4893]: E0121 06:55:34.580809 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:55:34 crc kubenswrapper[4893]: E0121 06:55:34.580874 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
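
[annotation] Every "Node became not ready" heartbeat in this stretch carries the same KubeletNotReady reason: the container runtime finds no CNI configuration file in /etc/kubernetes/cni/net.d/, so NetworkReady stays false and new pod sandboxes (networking-console-plugin, network-check-target, network-check-source) cannot be started. A short Go sketch of that directory check follows; it is an illustration using the file extensions libcni scans for, not the cri-o implementation, and the directory path is taken from the log message.

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // Directory taken from the kubelet message above.
        dir := "/etc/kubernetes/cni/net.d"
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Println("read:", err)
            return
        }
        found := false
        for _, e := range entries {
            // libcni loads .conf, .conflist and .json network configs.
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                fmt.Println("CNI config present:", e.Name())
                found = true
            }
        }
        if !found {
            fmt.Println("no CNI configuration file; the runtime reports NetworkReady=false")
        }
    }

Here the file is expected to be written by ovn-kubernetes once ovnkube-controller stays up, so the condition clears only after the CrashLoopBackOff recorded earlier resolves. [end annotation]
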
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.680588 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.680640 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.680648 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.680660 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.680680 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:34Z","lastTransitionTime":"2026-01-21T06:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.783080 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.783123 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.783132 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.783148 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.783158 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:34Z","lastTransitionTime":"2026-01-21T06:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.830712 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 12:55:26.851562664 +0000 UTC Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.887011 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.887362 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.887596 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.887806 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.887969 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:34Z","lastTransitionTime":"2026-01-21T06:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.990246 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.990300 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.990312 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.990327 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:34 crc kubenswrapper[4893]: I0121 06:55:34.990336 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:34Z","lastTransitionTime":"2026-01-21T06:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:35 crc kubenswrapper[4893]: I0121 06:55:35.092990 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:35 crc kubenswrapper[4893]: I0121 06:55:35.093034 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:35 crc kubenswrapper[4893]: I0121 06:55:35.093047 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:35 crc kubenswrapper[4893]: I0121 06:55:35.093065 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:35 crc kubenswrapper[4893]: I0121 06:55:35.093077 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:35Z","lastTransitionTime":"2026-01-21T06:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:35 crc kubenswrapper[4893]: I0121 06:55:35.196004 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:35 crc kubenswrapper[4893]: I0121 06:55:35.196046 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:35 crc kubenswrapper[4893]: I0121 06:55:35.196057 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:35 crc kubenswrapper[4893]: I0121 06:55:35.196072 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:35 crc kubenswrapper[4893]: I0121 06:55:35.196085 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:35Z","lastTransitionTime":"2026-01-21T06:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:35 crc kubenswrapper[4893]: I0121 06:55:35.299863 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:35 crc kubenswrapper[4893]: I0121 06:55:35.300300 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:35 crc kubenswrapper[4893]: I0121 06:55:35.300488 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:35 crc kubenswrapper[4893]: I0121 06:55:35.300713 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:35 crc kubenswrapper[4893]: I0121 06:55:35.300976 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:35Z","lastTransitionTime":"2026-01-21T06:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:35 crc kubenswrapper[4893]: I0121 06:55:35.403840 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:35 crc kubenswrapper[4893]: I0121 06:55:35.403901 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:35 crc kubenswrapper[4893]: I0121 06:55:35.403919 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:35 crc kubenswrapper[4893]: I0121 06:55:35.403945 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:35 crc kubenswrapper[4893]: I0121 06:55:35.403964 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:35Z","lastTransitionTime":"2026-01-21T06:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:35 crc kubenswrapper[4893]: I0121 06:55:35.506409 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:35 crc kubenswrapper[4893]: I0121 06:55:35.506730 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:35 crc kubenswrapper[4893]: I0121 06:55:35.506832 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:35 crc kubenswrapper[4893]: I0121 06:55:35.506930 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:35 crc kubenswrapper[4893]: I0121 06:55:35.507094 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:35Z","lastTransitionTime":"2026-01-21T06:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:35 crc kubenswrapper[4893]: I0121 06:55:35.580486 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:55:35 crc kubenswrapper[4893]: E0121 06:55:35.581132 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
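
[annotation] The kube-multus restart recorded earlier (exit code 1) shows the same dependency from multus's side: the daemon polls for the default network's readiness-indicator file at /host/run/multus/cni/net.d/10-ovn-kubernetes.conf and gives up when the poll times out, here roughly 45 seconds between 06:54:47 and 06:55:32. Below is a standard-library stand-in for that PollImmediate loop; the path comes from the container log, while the interval and timeout are assumptions for illustration.

    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // waitForFile polls for path until it exists or the timeout elapses,
    // mirroring the readiness-indicator wait in the multus log above.
    func waitForFile(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(time.Second)
        }
        return errors.New("timed out waiting for the condition")
    }

    func main() {
        err := waitForFile("/host/run/multus/cni/net.d/10-ovn-kubernetes.conf", 45*time.Second)
        fmt.Println(err)
    }

Once ovnkube-controller writes that file, the restarted kube-multus container reaches Ready, which is what the Ready=True transition at 06:55:33 in the patch above reflects. [end annotation]
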
pod="openshift-multus/network-metrics-daemon-rc5gb" podUID="e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8" Jan 21 06:55:35 crc kubenswrapper[4893]: I0121 06:55:35.610829 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:35 crc kubenswrapper[4893]: I0121 06:55:35.610952 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:35 crc kubenswrapper[4893]: I0121 06:55:35.610970 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:35 crc kubenswrapper[4893]: I0121 06:55:35.610997 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:35 crc kubenswrapper[4893]: I0121 06:55:35.611016 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:35Z","lastTransitionTime":"2026-01-21T06:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:35 crc kubenswrapper[4893]: I0121 06:55:35.714424 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:35 crc kubenswrapper[4893]: I0121 06:55:35.714483 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:35 crc kubenswrapper[4893]: I0121 06:55:35.714498 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:35 crc kubenswrapper[4893]: I0121 06:55:35.714517 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:35 crc kubenswrapper[4893]: I0121 06:55:35.714529 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:35Z","lastTransitionTime":"2026-01-21T06:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:35 crc kubenswrapper[4893]: I0121 06:55:35.817504 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:35 crc kubenswrapper[4893]: I0121 06:55:35.817555 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:35 crc kubenswrapper[4893]: I0121 06:55:35.817564 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:35 crc kubenswrapper[4893]: I0121 06:55:35.817579 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:35 crc kubenswrapper[4893]: I0121 06:55:35.817587 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:35Z","lastTransitionTime":"2026-01-21T06:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:35 crc kubenswrapper[4893]: I0121 06:55:35.831784 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 06:55:49.586501979 +0000 UTC Jan 21 06:55:35 crc kubenswrapper[4893]: I0121 06:55:35.920764 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:35 crc kubenswrapper[4893]: I0121 06:55:35.920845 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:35 crc kubenswrapper[4893]: I0121 06:55:35.920870 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:35 crc kubenswrapper[4893]: I0121 06:55:35.920900 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:35 crc kubenswrapper[4893]: I0121 06:55:35.920922 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:35Z","lastTransitionTime":"2026-01-21T06:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:36 crc kubenswrapper[4893]: I0121 06:55:36.036486 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:36 crc kubenswrapper[4893]: I0121 06:55:36.036519 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:36 crc kubenswrapper[4893]: I0121 06:55:36.036530 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:36 crc kubenswrapper[4893]: I0121 06:55:36.036571 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:36 crc kubenswrapper[4893]: I0121 06:55:36.036585 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:36Z","lastTransitionTime":"2026-01-21T06:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:36 crc kubenswrapper[4893]: I0121 06:55:36.139552 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:36 crc kubenswrapper[4893]: I0121 06:55:36.139926 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:36 crc kubenswrapper[4893]: I0121 06:55:36.140011 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:36 crc kubenswrapper[4893]: I0121 06:55:36.140114 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:36 crc kubenswrapper[4893]: I0121 06:55:36.140195 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:36Z","lastTransitionTime":"2026-01-21T06:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:36 crc kubenswrapper[4893]: I0121 06:55:36.243838 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:36 crc kubenswrapper[4893]: I0121 06:55:36.243928 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:36 crc kubenswrapper[4893]: I0121 06:55:36.243952 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:36 crc kubenswrapper[4893]: I0121 06:55:36.243982 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:36 crc kubenswrapper[4893]: I0121 06:55:36.244001 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:36Z","lastTransitionTime":"2026-01-21T06:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:36 crc kubenswrapper[4893]: I0121 06:55:36.346646 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:36 crc kubenswrapper[4893]: I0121 06:55:36.346767 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:36 crc kubenswrapper[4893]: I0121 06:55:36.346792 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:36 crc kubenswrapper[4893]: I0121 06:55:36.346833 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:36 crc kubenswrapper[4893]: I0121 06:55:36.346855 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:36Z","lastTransitionTime":"2026-01-21T06:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:36 crc kubenswrapper[4893]: I0121 06:55:36.449593 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:36 crc kubenswrapper[4893]: I0121 06:55:36.449633 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:36 crc kubenswrapper[4893]: I0121 06:55:36.449644 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:36 crc kubenswrapper[4893]: I0121 06:55:36.449659 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:36 crc kubenswrapper[4893]: I0121 06:55:36.449688 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:36Z","lastTransitionTime":"2026-01-21T06:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:36 crc kubenswrapper[4893]: I0121 06:55:36.552591 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:36 crc kubenswrapper[4893]: I0121 06:55:36.552652 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:36 crc kubenswrapper[4893]: I0121 06:55:36.552709 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:36 crc kubenswrapper[4893]: I0121 06:55:36.552734 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:36 crc kubenswrapper[4893]: I0121 06:55:36.552748 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:36Z","lastTransitionTime":"2026-01-21T06:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 21 06:55:36 crc kubenswrapper[4893]: I0121 06:55:36.580198 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 06:55:36 crc kubenswrapper[4893]: I0121 06:55:36.580251 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 06:55:36 crc kubenswrapper[4893]: I0121 06:55:36.580336 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 06:55:36 crc kubenswrapper[4893]: E0121 06:55:36.580389 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 06:55:36 crc kubenswrapper[4893]: E0121 06:55:36.580505 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 06:55:36 crc kubenswrapper[4893]: E0121 06:55:36.580621 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 06:55:36 crc kubenswrapper[4893]: I0121 06:55:36.832371 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 11:21:49.161803649 +0000 UTC
Jan 21 06:55:37 crc kubenswrapper[4893]: I0121 06:55:37.580602 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rc5gb"
Jan 21 06:55:37 crc kubenswrapper[4893]: E0121 06:55:37.580778 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rc5gb" podUID="e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8"
Jan 21 06:55:37 crc kubenswrapper[4893]: I0121 06:55:37.833020 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 01:43:00.834708169 +0000 UTC
Jan 21 06:55:38 crc kubenswrapper[4893]: I0121 06:55:38.580289 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 06:55:38 crc kubenswrapper[4893]: I0121 06:55:38.580358 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 06:55:38 crc kubenswrapper[4893]: E0121 06:55:38.580430 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 06:55:38 crc kubenswrapper[4893]: E0121 06:55:38.580501 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 06:55:38 crc kubenswrapper[4893]: I0121 06:55:38.581046 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 06:55:38 crc kubenswrapper[4893]: E0121 06:55:38.581365 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 06:55:38 crc kubenswrapper[4893]: I0121 06:55:38.834107 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 12:25:12.706772397 +0000 UTC
Jan 21 06:55:39 crc kubenswrapper[4893]: I0121 06:55:39.581003 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rc5gb"
Jan 21 06:55:39 crc kubenswrapper[4893]: E0121 06:55:39.581298 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-rc5gb" podUID="e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8" Jan 21 06:55:39 crc kubenswrapper[4893]: I0121 06:55:39.599535 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ee491ea29d016cb1b74fc66b386aa8056d1d8b3c7ad207cf329749db2b4d638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e705e9b341a3c711cf78ffd1fde692a9517b06fcdcfc2b96543d826c72c5484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:39Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:39 crc kubenswrapper[4893]: I0121 06:55:39.620558 4893 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1095483a1c6cc4597500607b4423c12c3fc03500c2f3b8f3fc5fc6eae8c34d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:39Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:39 crc kubenswrapper[4893]: I0121 06:55:39.641980 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlrc6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e26ce1b-e6f7-4612-aa11-69ad21c97870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64b144aa65cc6fbbe03fe4268155648a64e7360a0415e11a86fbc0373af5a4d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j65k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlrc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:39Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:39 crc kubenswrapper[4893]: I0121 06:55:39.653360 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:39 crc kubenswrapper[4893]: I0121 06:55:39.653399 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:39 crc kubenswrapper[4893]: I0121 06:55:39.653411 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:39 crc kubenswrapper[4893]: I0121 06:55:39.653435 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:39 crc kubenswrapper[4893]: I0121 06:55:39.653455 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:39Z","lastTransitionTime":"2026-01-21T06:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:39 crc kubenswrapper[4893]: I0121 06:55:39.664561 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-42mq5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc8e905-b368-49e8-adfa-31890665e5ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49cefc1948611ccad178b25e80e75e81bdf1b4b578d3fb58fa7c342d22debadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-42mq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:39Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:39 crc kubenswrapper[4893]: I0121 06:55:39.683240 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5dc99ccba68df748aa327298285fec6936c75a3327906d9c789bf75c04815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59520d6be8547ef44262866e4c11b1ae43ae8ef545545a93c291f5e238718a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hg78p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:39Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:39 crc kubenswrapper[4893]: I0121 06:55:39.719979 4893 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6719fb30-da06-4964-b730-09e444618d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14703fb64cc13f6d04b021d4d9de4505e58912bc747450c747dcb5cb53431ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14703fb64cc13f6d04b021d4d9de4505e58912bc747450c747dcb5cb53431ab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:55:15Z\\\",\\\"message\\\":\\\"b00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 06:55:14.977977 6525 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 06:55:14.978018 6525 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 06:55:14.975782 6525 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-wlrc6 after 0 failed attempt(s)\\\\nI0121 06:55:14.984317 6525 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-wlrc6\\\\nI0121 06:55:14.984373 6525 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-m8k4g\\\\nI0121 06:55:14.984399 6525 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-m8k4g\\\\nI0121 06:55:14.9757\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:55:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qzsg6_openshift-ovn-kubernetes(6719fb30-da06-4964-b730-09e444618d94)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzsg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:39Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:39 crc kubenswrapper[4893]: I0121 06:55:39.744797 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2101f59b-4610-4451-83eb-86fe80385cf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a82b561fe0d124a785d8417b0f810757464a5ccc70c032a46eb0a4ad932939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2a508699e746bc42337b9e10d1cb94b36eb53292a5ca91de2e8f03eb8f671c\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf06f9b5e844685f04ee12cbf239e285f1597f6a3c6444a4160596392905c4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2417cb0495ebd48a0bf9f8e46971fdbd70fd7e7c312741cead38fec69d1d972\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:54:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 06:54:40.367563 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 06:54:40.368234 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:54:40.369436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4080492758/tls.crt::/tmp/serving-cert-4080492758/tls.key\\\\\\\"\\\\nI0121 06:54:40.606405 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:54:40.609631 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:54:40.609649 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:54:40.609684 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:54:40.609691 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:54:40.617391 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 06:54:40.617410 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617413 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617418 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:54:40.617421 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:54:40.617423 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:54:40.617426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 06:54:40.617614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 06:54:40.618646 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf70c5621061fc94a32901eb6f15a0d15b2ceba333d27cf88624bf9aa4ebe82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:39Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:39 crc kubenswrapper[4893]: I0121 06:55:39.756297 4893 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:39 crc kubenswrapper[4893]: I0121 06:55:39.756337 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:39 crc kubenswrapper[4893]: I0121 06:55:39.756348 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:39 crc kubenswrapper[4893]: I0121 06:55:39.756365 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:39 crc kubenswrapper[4893]: I0121 06:55:39.756377 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:39Z","lastTransitionTime":"2026-01-21T06:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:39 crc kubenswrapper[4893]: I0121 06:55:39.768046 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"077e47b3-6224-4749-9710-d2b308b43208\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa06c3d835def34e52c4a9b4b87d9dc8998cdefbb5eaf7c8046bf263857ef8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e698ff120a5858fa787a65c1bdaa3966dcb8974df9cbca40470f6ec58bca5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://553f6c2b8ff41184065bcf707d657326891027d0c5b8390ce50f53cdfa654d2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30521319002f52220ec6c1e4c92862f5a81e1dcace01f4a4474e3a2441b955c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:39Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:39 crc kubenswrapper[4893]: I0121 06:55:39.783046 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50340650-5fb6-4aca-b931-b2ae5e1754b4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c820663d2de329853dbd3b67c91a5491f9000bc6f1f9cd5143be1c50d06279aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9235f82557cdaf86d385d1660b19da09a65edcdffa915b36f633d597599f05ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9235f82557cdaf86d385d1660b19da09a65edcdffa915b36f633d597599f05ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:39Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:39 crc kubenswrapper[4893]: I0121 06:55:39.801462 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:39Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:39 crc kubenswrapper[4893]: I0121 06:55:39.834696 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 22:53:22.909464781 +0000 UTC Jan 21 06:55:39 crc kubenswrapper[4893]: I0121 06:55:39.834767 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h28gn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef1a4b3d1dc6d23382f8cbbc07674981a9fb90c5068318d8f78e87b0af85b5ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc1e630d2e854e97d3e156ca2c28de365e095aaf1fe7b6779d2a6b938c51024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cc1e630d2e854e97d3e156ca2c28de365e095aaf1fe7b6779d2a6b938c51024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h28gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:39Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:39 crc kubenswrapper[4893]: I0121 06:55:39.850296 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p7vw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bace7a0-7349-45d1-a407-d64a31a0d41c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f00d5d862b54660a5df58a9c9df0b42a453f6990789e83d5e6f67aab68471665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v88cx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecad777c0a42352ca734f5f85952ab369e5cc132f06f748983d7c11949f0fe58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v88cx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p7vw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:39Z is after 2025-08-24T17:21:41Z" Jan 21 
06:55:39 crc kubenswrapper[4893]: I0121 06:55:39.859604 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:39 crc kubenswrapper[4893]: I0121 06:55:39.859656 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:39 crc kubenswrapper[4893]: I0121 06:55:39.859688 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:39 crc kubenswrapper[4893]: I0121 06:55:39.859710 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:39 crc kubenswrapper[4893]: I0121 06:55:39.859723 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:39Z","lastTransitionTime":"2026-01-21T06:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:39 crc kubenswrapper[4893]: I0121 06:55:39.863601 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:39Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:39 crc kubenswrapper[4893]: I0121 06:55:39.875913 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac506126-772e-4100-98f9-91c4b32882bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93c89f8db799b46df74cb753f3f21321f420d4fe1976b120ea4aa2853fbf7047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a132569453fd7635474ec4fcb0eab4aad6349e34c6d9e3bc92182433a587bfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07ce83655f22f9db0e6743147fdbde2adc1e02a0b8010cd04f6007f986cf63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29f211094b3236070df769a82ecc2ff2b03c7a44dc4af0484e4ca3b35037621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29f211094b3236070df769a82ecc2ff2b03c7a44dc4af0484e4ca3b35037621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:39Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:39 crc kubenswrapper[4893]: I0121 06:55:39.893181 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f8eaf9a35d64680bb488050b8821c821635ec7bc1f53bdcd5bb3f5f4bfead3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:39Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:39 crc kubenswrapper[4893]: I0121 06:55:39.906533 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:39Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:39 crc kubenswrapper[4893]: I0121 06:55:39.919565 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m8k4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d8bbd1c92382018299e790a7597f3f588b11c6465db90a876cc98e1d10d4a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f4a3074a4406cdbdf07c7289f9304d66e2b84b46bf0ac9c6aadf31817539dda\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:55:32Z\\\",\\\"message\\\":\\\"2026-01-21T06:54:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_533ecd9b-559b-4017-8fc4-d42565d39848\\\\n2026-01-21T06:54:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_533ecd9b-559b-4017-8fc4-d42565d39848 to /host/opt/cni/bin/\\\\n2026-01-21T06:54:47Z [verbose] multus-daemon started\\\\n2026-01-21T06:54:47Z [verbose] Readiness Indicator file check\\\\n2026-01-21T06:55:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2qn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m8k4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:39Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:39 crc kubenswrapper[4893]: I0121 06:55:39.931972 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rc5gb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jprb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jprb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rc5gb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:39Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:39 crc kubenswrapper[4893]: I0121 06:55:39.971727 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:39 crc kubenswrapper[4893]: I0121 06:55:39.971821 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:39 crc kubenswrapper[4893]: I0121 06:55:39.971840 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:39 crc kubenswrapper[4893]: I0121 06:55:39.971876 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:39 crc kubenswrapper[4893]: I0121 06:55:39.971896 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:39Z","lastTransitionTime":"2026-01-21T06:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:40 crc kubenswrapper[4893]: I0121 06:55:40.140824 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:40 crc kubenswrapper[4893]: I0121 06:55:40.140876 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:40 crc kubenswrapper[4893]: I0121 06:55:40.140885 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:40 crc kubenswrapper[4893]: I0121 06:55:40.140908 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:40 crc kubenswrapper[4893]: I0121 06:55:40.140918 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:40Z","lastTransitionTime":"2026-01-21T06:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:40 crc kubenswrapper[4893]: I0121 06:55:40.244607 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:40 crc kubenswrapper[4893]: I0121 06:55:40.244726 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:40 crc kubenswrapper[4893]: I0121 06:55:40.244746 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:40 crc kubenswrapper[4893]: I0121 06:55:40.244778 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:40 crc kubenswrapper[4893]: I0121 06:55:40.244795 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:40Z","lastTransitionTime":"2026-01-21T06:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:40 crc kubenswrapper[4893]: I0121 06:55:40.348163 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:40 crc kubenswrapper[4893]: I0121 06:55:40.348218 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:40 crc kubenswrapper[4893]: I0121 06:55:40.348232 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:40 crc kubenswrapper[4893]: I0121 06:55:40.348252 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:40 crc kubenswrapper[4893]: I0121 06:55:40.348264 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:40Z","lastTransitionTime":"2026-01-21T06:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:40 crc kubenswrapper[4893]: I0121 06:55:40.465951 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:40 crc kubenswrapper[4893]: I0121 06:55:40.466001 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:40 crc kubenswrapper[4893]: I0121 06:55:40.466010 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:40 crc kubenswrapper[4893]: I0121 06:55:40.466024 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:40 crc kubenswrapper[4893]: I0121 06:55:40.466033 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:40Z","lastTransitionTime":"2026-01-21T06:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:40 crc kubenswrapper[4893]: I0121 06:55:40.569640 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:40 crc kubenswrapper[4893]: I0121 06:55:40.569751 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:40 crc kubenswrapper[4893]: I0121 06:55:40.569772 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:40 crc kubenswrapper[4893]: I0121 06:55:40.569795 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:40 crc kubenswrapper[4893]: I0121 06:55:40.569812 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:40Z","lastTransitionTime":"2026-01-21T06:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:40 crc kubenswrapper[4893]: I0121 06:55:40.580171 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:55:40 crc kubenswrapper[4893]: E0121 06:55:40.580291 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:55:40 crc kubenswrapper[4893]: I0121 06:55:40.580178 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:55:40 crc kubenswrapper[4893]: E0121 06:55:40.580357 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:55:40 crc kubenswrapper[4893]: I0121 06:55:40.580171 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:55:40 crc kubenswrapper[4893]: E0121 06:55:40.580405 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:55:40 crc kubenswrapper[4893]: I0121 06:55:40.672433 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:40 crc kubenswrapper[4893]: I0121 06:55:40.672501 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:40 crc kubenswrapper[4893]: I0121 06:55:40.672516 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:40 crc kubenswrapper[4893]: I0121 06:55:40.672535 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:40 crc kubenswrapper[4893]: I0121 06:55:40.672547 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:40Z","lastTransitionTime":"2026-01-21T06:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:40 crc kubenswrapper[4893]: I0121 06:55:40.775539 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:40 crc kubenswrapper[4893]: I0121 06:55:40.775640 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:40 crc kubenswrapper[4893]: I0121 06:55:40.775656 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:40 crc kubenswrapper[4893]: I0121 06:55:40.775768 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:40 crc kubenswrapper[4893]: I0121 06:55:40.775787 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:40Z","lastTransitionTime":"2026-01-21T06:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:40 crc kubenswrapper[4893]: I0121 06:55:40.835667 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 23:11:16.151467421 +0000 UTC Jan 21 06:55:40 crc kubenswrapper[4893]: I0121 06:55:40.879394 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:40 crc kubenswrapper[4893]: I0121 06:55:40.879452 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:40 crc kubenswrapper[4893]: I0121 06:55:40.879469 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:40 crc kubenswrapper[4893]: I0121 06:55:40.879493 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:40 crc kubenswrapper[4893]: I0121 06:55:40.879509 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:40Z","lastTransitionTime":"2026-01-21T06:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:40 crc kubenswrapper[4893]: I0121 06:55:40.982129 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:40 crc kubenswrapper[4893]: I0121 06:55:40.982209 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:40 crc kubenswrapper[4893]: I0121 06:55:40.982233 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:40 crc kubenswrapper[4893]: I0121 06:55:40.982257 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:40 crc kubenswrapper[4893]: I0121 06:55:40.982275 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:40Z","lastTransitionTime":"2026-01-21T06:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:41 crc kubenswrapper[4893]: I0121 06:55:41.085045 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:41 crc kubenswrapper[4893]: I0121 06:55:41.085087 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:41 crc kubenswrapper[4893]: I0121 06:55:41.085097 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:41 crc kubenswrapper[4893]: I0121 06:55:41.085114 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:41 crc kubenswrapper[4893]: I0121 06:55:41.085126 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:41Z","lastTransitionTime":"2026-01-21T06:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:41 crc kubenswrapper[4893]: I0121 06:55:41.188239 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:41 crc kubenswrapper[4893]: I0121 06:55:41.188292 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:41 crc kubenswrapper[4893]: I0121 06:55:41.188303 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:41 crc kubenswrapper[4893]: I0121 06:55:41.188325 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:41 crc kubenswrapper[4893]: I0121 06:55:41.188337 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:41Z","lastTransitionTime":"2026-01-21T06:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:41 crc kubenswrapper[4893]: I0121 06:55:41.291168 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:41 crc kubenswrapper[4893]: I0121 06:55:41.291226 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:41 crc kubenswrapper[4893]: I0121 06:55:41.291247 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:41 crc kubenswrapper[4893]: I0121 06:55:41.291274 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:41 crc kubenswrapper[4893]: I0121 06:55:41.291296 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:41Z","lastTransitionTime":"2026-01-21T06:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:41 crc kubenswrapper[4893]: I0121 06:55:41.394933 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:41 crc kubenswrapper[4893]: I0121 06:55:41.394979 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:41 crc kubenswrapper[4893]: I0121 06:55:41.394995 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:41 crc kubenswrapper[4893]: I0121 06:55:41.395023 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:41 crc kubenswrapper[4893]: I0121 06:55:41.395054 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:41Z","lastTransitionTime":"2026-01-21T06:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:41 crc kubenswrapper[4893]: I0121 06:55:41.498235 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:41 crc kubenswrapper[4893]: I0121 06:55:41.498294 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:41 crc kubenswrapper[4893]: I0121 06:55:41.498318 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:41 crc kubenswrapper[4893]: I0121 06:55:41.498341 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:41 crc kubenswrapper[4893]: I0121 06:55:41.498358 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:41Z","lastTransitionTime":"2026-01-21T06:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:41 crc kubenswrapper[4893]: I0121 06:55:41.579980 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:55:41 crc kubenswrapper[4893]: E0121 06:55:41.580122 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rc5gb" podUID="e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8" Jan 21 06:55:41 crc kubenswrapper[4893]: I0121 06:55:41.600131 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:41 crc kubenswrapper[4893]: I0121 06:55:41.600185 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:41 crc kubenswrapper[4893]: I0121 06:55:41.600195 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:41 crc kubenswrapper[4893]: I0121 06:55:41.600214 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:41 crc kubenswrapper[4893]: I0121 06:55:41.600227 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:41Z","lastTransitionTime":"2026-01-21T06:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:41 crc kubenswrapper[4893]: I0121 06:55:41.703493 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:41 crc kubenswrapper[4893]: I0121 06:55:41.703554 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:41 crc kubenswrapper[4893]: I0121 06:55:41.703565 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:41 crc kubenswrapper[4893]: I0121 06:55:41.703583 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:41 crc kubenswrapper[4893]: I0121 06:55:41.703602 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:41Z","lastTransitionTime":"2026-01-21T06:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:41 crc kubenswrapper[4893]: I0121 06:55:41.806236 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:41 crc kubenswrapper[4893]: I0121 06:55:41.806272 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:41 crc kubenswrapper[4893]: I0121 06:55:41.806283 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:41 crc kubenswrapper[4893]: I0121 06:55:41.806297 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:41 crc kubenswrapper[4893]: I0121 06:55:41.806308 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:41Z","lastTransitionTime":"2026-01-21T06:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:41 crc kubenswrapper[4893]: I0121 06:55:41.836660 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 00:00:32.62700553 +0000 UTC Jan 21 06:55:41 crc kubenswrapper[4893]: I0121 06:55:41.909191 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:41 crc kubenswrapper[4893]: I0121 06:55:41.909223 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:41 crc kubenswrapper[4893]: I0121 06:55:41.909230 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:41 crc kubenswrapper[4893]: I0121 06:55:41.909263 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:41 crc kubenswrapper[4893]: I0121 06:55:41.909276 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:41Z","lastTransitionTime":"2026-01-21T06:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.012891 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.012946 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.012957 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.012974 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.012986 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:42Z","lastTransitionTime":"2026-01-21T06:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.115925 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.115974 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.115990 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.116008 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.116022 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:42Z","lastTransitionTime":"2026-01-21T06:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.219790 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.219836 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.219848 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.219865 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.219876 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:42Z","lastTransitionTime":"2026-01-21T06:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.322649 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.322735 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.322746 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.322771 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.322799 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:42Z","lastTransitionTime":"2026-01-21T06:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.420547 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.420590 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.420602 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.420620 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.420630 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:42Z","lastTransitionTime":"2026-01-21T06:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:42 crc kubenswrapper[4893]: E0121 06:55:42.432448 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"15608b71-024b-43f0-a54d-3ca7890a281b\\\",\\\"systemUUID\\\":\\\"d58a57b5-ddc5-4868-b863-d910bc33033d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.436020 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.436075 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.436092 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.436113 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.436129 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:42Z","lastTransitionTime":"2026-01-21T06:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:42 crc kubenswrapper[4893]: E0121 06:55:42.448210 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"15608b71-024b-43f0-a54d-3ca7890a281b\\\",\\\"systemUUID\\\":\\\"d58a57b5-ddc5-4868-b863-d910bc33033d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.452247 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.452277 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.452285 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.452298 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.452309 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:42Z","lastTransitionTime":"2026-01-21T06:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:42 crc kubenswrapper[4893]: E0121 06:55:42.468834 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"15608b71-024b-43f0-a54d-3ca7890a281b\\\",\\\"systemUUID\\\":\\\"d58a57b5-ddc5-4868-b863-d910bc33033d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.473104 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.473205 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.473230 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.473262 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.473288 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:42Z","lastTransitionTime":"2026-01-21T06:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:42 crc kubenswrapper[4893]: E0121 06:55:42.492922 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"15608b71-024b-43f0-a54d-3ca7890a281b\\\",\\\"systemUUID\\\":\\\"d58a57b5-ddc5-4868-b863-d910bc33033d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.497727 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.497785 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.497801 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.497822 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.497838 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:42Z","lastTransitionTime":"2026-01-21T06:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:42 crc kubenswrapper[4893]: E0121 06:55:42.511916 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"15608b71-024b-43f0-a54d-3ca7890a281b\\\",\\\"systemUUID\\\":\\\"d58a57b5-ddc5-4868-b863-d910bc33033d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:42 crc kubenswrapper[4893]: E0121 06:55:42.512107 4893 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.514074 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.514113 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.514125 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.514145 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.514158 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:42Z","lastTransitionTime":"2026-01-21T06:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.620296 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.620580 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.620364 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:55:42 crc kubenswrapper[4893]: E0121 06:55:42.620848 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:55:42 crc kubenswrapper[4893]: E0121 06:55:42.620937 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:55:42 crc kubenswrapper[4893]: E0121 06:55:42.620977 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.621176 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.621211 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.621232 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.621255 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.621268 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:42Z","lastTransitionTime":"2026-01-21T06:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.724205 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.724265 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.724277 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.724297 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.724310 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:42Z","lastTransitionTime":"2026-01-21T06:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.827387 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.827438 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.827449 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.827467 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.827481 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:42Z","lastTransitionTime":"2026-01-21T06:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.838046 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 03:46:26.375018787 +0000 UTC Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.931318 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.931378 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.931396 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.931421 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:42 crc kubenswrapper[4893]: I0121 06:55:42.931439 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:42Z","lastTransitionTime":"2026-01-21T06:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:43 crc kubenswrapper[4893]: I0121 06:55:43.035526 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:43 crc kubenswrapper[4893]: I0121 06:55:43.035932 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:43 crc kubenswrapper[4893]: I0121 06:55:43.036048 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:43 crc kubenswrapper[4893]: I0121 06:55:43.036158 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:43 crc kubenswrapper[4893]: I0121 06:55:43.036268 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:43Z","lastTransitionTime":"2026-01-21T06:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:43 crc kubenswrapper[4893]: I0121 06:55:43.140491 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:43 crc kubenswrapper[4893]: I0121 06:55:43.140543 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:43 crc kubenswrapper[4893]: I0121 06:55:43.140566 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:43 crc kubenswrapper[4893]: I0121 06:55:43.140587 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:43 crc kubenswrapper[4893]: I0121 06:55:43.140600 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:43Z","lastTransitionTime":"2026-01-21T06:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:43 crc kubenswrapper[4893]: I0121 06:55:43.243075 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:43 crc kubenswrapper[4893]: I0121 06:55:43.243120 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:43 crc kubenswrapper[4893]: I0121 06:55:43.243131 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:43 crc kubenswrapper[4893]: I0121 06:55:43.243147 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:43 crc kubenswrapper[4893]: I0121 06:55:43.243160 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:43Z","lastTransitionTime":"2026-01-21T06:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:43 crc kubenswrapper[4893]: I0121 06:55:43.346852 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:43 crc kubenswrapper[4893]: I0121 06:55:43.346924 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:43 crc kubenswrapper[4893]: I0121 06:55:43.346937 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:43 crc kubenswrapper[4893]: I0121 06:55:43.346963 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:43 crc kubenswrapper[4893]: I0121 06:55:43.346976 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:43Z","lastTransitionTime":"2026-01-21T06:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:43 crc kubenswrapper[4893]: I0121 06:55:43.449897 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:43 crc kubenswrapper[4893]: I0121 06:55:43.449965 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:43 crc kubenswrapper[4893]: I0121 06:55:43.449982 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:43 crc kubenswrapper[4893]: I0121 06:55:43.450005 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:43 crc kubenswrapper[4893]: I0121 06:55:43.450025 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:43Z","lastTransitionTime":"2026-01-21T06:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:43 crc kubenswrapper[4893]: I0121 06:55:43.553984 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:43 crc kubenswrapper[4893]: I0121 06:55:43.554071 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:43 crc kubenswrapper[4893]: I0121 06:55:43.554096 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:43 crc kubenswrapper[4893]: I0121 06:55:43.554129 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:43 crc kubenswrapper[4893]: I0121 06:55:43.554152 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:43Z","lastTransitionTime":"2026-01-21T06:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:43 crc kubenswrapper[4893]: I0121 06:55:43.583987 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:55:43 crc kubenswrapper[4893]: E0121 06:55:43.584292 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rc5gb" podUID="e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8" Jan 21 06:55:43 crc kubenswrapper[4893]: I0121 06:55:43.657429 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:43 crc kubenswrapper[4893]: I0121 06:55:43.657505 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:43 crc kubenswrapper[4893]: I0121 06:55:43.657524 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:43 crc kubenswrapper[4893]: I0121 06:55:43.657554 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:43 crc kubenswrapper[4893]: I0121 06:55:43.657574 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:43Z","lastTransitionTime":"2026-01-21T06:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:43 crc kubenswrapper[4893]: I0121 06:55:43.761893 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:43 crc kubenswrapper[4893]: I0121 06:55:43.761967 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:43 crc kubenswrapper[4893]: I0121 06:55:43.761986 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:43 crc kubenswrapper[4893]: I0121 06:55:43.762016 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:43 crc kubenswrapper[4893]: I0121 06:55:43.762041 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:43Z","lastTransitionTime":"2026-01-21T06:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:43 crc kubenswrapper[4893]: I0121 06:55:43.838609 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 13:14:21.401992441 +0000 UTC Jan 21 06:55:43 crc kubenswrapper[4893]: I0121 06:55:43.864899 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:43 crc kubenswrapper[4893]: I0121 06:55:43.865017 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:43 crc kubenswrapper[4893]: I0121 06:55:43.865028 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:43 crc kubenswrapper[4893]: I0121 06:55:43.865048 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:43 crc kubenswrapper[4893]: I0121 06:55:43.865059 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:43Z","lastTransitionTime":"2026-01-21T06:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:43 crc kubenswrapper[4893]: I0121 06:55:43.968781 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:43 crc kubenswrapper[4893]: I0121 06:55:43.968852 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:43 crc kubenswrapper[4893]: I0121 06:55:43.968871 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:43 crc kubenswrapper[4893]: I0121 06:55:43.968899 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:43 crc kubenswrapper[4893]: I0121 06:55:43.968921 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:43Z","lastTransitionTime":"2026-01-21T06:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:44 crc kubenswrapper[4893]: I0121 06:55:44.072843 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:44 crc kubenswrapper[4893]: I0121 06:55:44.072890 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:44 crc kubenswrapper[4893]: I0121 06:55:44.072903 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:44 crc kubenswrapper[4893]: I0121 06:55:44.072921 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:44 crc kubenswrapper[4893]: I0121 06:55:44.072932 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:44Z","lastTransitionTime":"2026-01-21T06:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:44 crc kubenswrapper[4893]: I0121 06:55:44.176873 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:44 crc kubenswrapper[4893]: I0121 06:55:44.177024 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:44 crc kubenswrapper[4893]: I0121 06:55:44.177082 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:44 crc kubenswrapper[4893]: I0121 06:55:44.177110 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:44 crc kubenswrapper[4893]: I0121 06:55:44.177179 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:44Z","lastTransitionTime":"2026-01-21T06:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:44 crc kubenswrapper[4893]: I0121 06:55:44.279928 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:44 crc kubenswrapper[4893]: I0121 06:55:44.279980 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:44 crc kubenswrapper[4893]: I0121 06:55:44.279995 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:44 crc kubenswrapper[4893]: I0121 06:55:44.280014 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:44 crc kubenswrapper[4893]: I0121 06:55:44.280027 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:44Z","lastTransitionTime":"2026-01-21T06:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:44 crc kubenswrapper[4893]: I0121 06:55:44.383785 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:44 crc kubenswrapper[4893]: I0121 06:55:44.383848 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:44 crc kubenswrapper[4893]: I0121 06:55:44.383860 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:44 crc kubenswrapper[4893]: I0121 06:55:44.383876 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:44 crc kubenswrapper[4893]: I0121 06:55:44.383886 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:44Z","lastTransitionTime":"2026-01-21T06:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:44 crc kubenswrapper[4893]: I0121 06:55:44.442142 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:55:44 crc kubenswrapper[4893]: I0121 06:55:44.442312 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:55:44 crc kubenswrapper[4893]: I0121 06:55:44.442351 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:55:44 crc kubenswrapper[4893]: E0121 06:55:44.442461 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:56:48.442420809 +0000 UTC m=+149.672766721 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:55:44 crc kubenswrapper[4893]: I0121 06:55:44.442572 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:55:44 crc kubenswrapper[4893]: I0121 06:55:44.442695 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:55:44 crc kubenswrapper[4893]: E0121 06:55:44.442601 4893 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 06:55:44 crc kubenswrapper[4893]: E0121 06:55:44.442609 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 06:55:44 crc kubenswrapper[4893]: E0121 06:55:44.442798 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 06:56:48.44278168 +0000 UTC m=+149.673127592 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 06:55:44 crc kubenswrapper[4893]: E0121 06:55:44.442872 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 06:55:44 crc kubenswrapper[4893]: E0121 06:55:44.442899 4893 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:55:44 crc kubenswrapper[4893]: E0121 06:55:44.442720 4893 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 06:55:44 crc kubenswrapper[4893]: E0121 06:55:44.442908 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 06:55:44 crc kubenswrapper[4893]: E0121 06:55:44.442957 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 06:56:48.442940415 +0000 UTC m=+149.673286327 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:55:44 crc kubenswrapper[4893]: E0121 06:55:44.442968 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 06:55:44 crc kubenswrapper[4893]: E0121 06:55:44.443012 4893 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:55:44 crc kubenswrapper[4893]: E0121 06:55:44.442989 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 06:56:48.442978586 +0000 UTC m=+149.673324498 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 06:55:44 crc kubenswrapper[4893]: E0121 06:55:44.443124 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 06:56:48.44310339 +0000 UTC m=+149.673449352 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:55:44 crc kubenswrapper[4893]: I0121 06:55:44.487724 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:44 crc kubenswrapper[4893]: I0121 06:55:44.487803 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:44 crc kubenswrapper[4893]: I0121 06:55:44.487825 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:44 crc kubenswrapper[4893]: I0121 06:55:44.487857 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:44 crc kubenswrapper[4893]: I0121 06:55:44.487878 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:44Z","lastTransitionTime":"2026-01-21T06:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:44 crc kubenswrapper[4893]: I0121 06:55:44.580191 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:55:44 crc kubenswrapper[4893]: I0121 06:55:44.580224 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:55:44 crc kubenswrapper[4893]: E0121 06:55:44.581342 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:55:44 crc kubenswrapper[4893]: I0121 06:55:44.582898 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:55:44 crc kubenswrapper[4893]: E0121 06:55:44.588608 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:55:44 crc kubenswrapper[4893]: E0121 06:55:44.589203 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:55:44 crc kubenswrapper[4893]: I0121 06:55:44.590944 4893 scope.go:117] "RemoveContainer" containerID="e14703fb64cc13f6d04b021d4d9de4505e58912bc747450c747dcb5cb53431ab" Jan 21 06:55:44 crc kubenswrapper[4893]: I0121 06:55:44.591311 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:44 crc kubenswrapper[4893]: I0121 06:55:44.591369 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:44 crc kubenswrapper[4893]: I0121 06:55:44.591388 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:44 crc kubenswrapper[4893]: I0121 06:55:44.591411 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:44 crc kubenswrapper[4893]: I0121 06:55:44.591428 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:44Z","lastTransitionTime":"2026-01-21T06:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:44 crc kubenswrapper[4893]: I0121 06:55:44.694268 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:44 crc kubenswrapper[4893]: I0121 06:55:44.694324 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:44 crc kubenswrapper[4893]: I0121 06:55:44.694339 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:44 crc kubenswrapper[4893]: I0121 06:55:44.694356 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:44 crc kubenswrapper[4893]: I0121 06:55:44.694368 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:44Z","lastTransitionTime":"2026-01-21T06:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:44 crc kubenswrapper[4893]: I0121 06:55:44.797877 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:44 crc kubenswrapper[4893]: I0121 06:55:44.798200 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:44 crc kubenswrapper[4893]: I0121 06:55:44.798216 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:44 crc kubenswrapper[4893]: I0121 06:55:44.798283 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:44 crc kubenswrapper[4893]: I0121 06:55:44.798302 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:44Z","lastTransitionTime":"2026-01-21T06:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:44 crc kubenswrapper[4893]: I0121 06:55:44.839158 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 21:48:52.493118919 +0000 UTC Jan 21 06:55:44 crc kubenswrapper[4893]: I0121 06:55:44.901770 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:44 crc kubenswrapper[4893]: I0121 06:55:44.901846 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:44 crc kubenswrapper[4893]: I0121 06:55:44.901873 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:44 crc kubenswrapper[4893]: I0121 06:55:44.901905 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:44 crc kubenswrapper[4893]: I0121 06:55:44.901928 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:44Z","lastTransitionTime":"2026-01-21T06:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:45 crc kubenswrapper[4893]: I0121 06:55:45.007157 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:45 crc kubenswrapper[4893]: I0121 06:55:45.007257 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:45 crc kubenswrapper[4893]: I0121 06:55:45.007280 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:45 crc kubenswrapper[4893]: I0121 06:55:45.007317 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:45 crc kubenswrapper[4893]: I0121 06:55:45.007346 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:45Z","lastTransitionTime":"2026-01-21T06:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:45 crc kubenswrapper[4893]: I0121 06:55:45.111524 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:45 crc kubenswrapper[4893]: I0121 06:55:45.111586 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:45 crc kubenswrapper[4893]: I0121 06:55:45.111604 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:45 crc kubenswrapper[4893]: I0121 06:55:45.111628 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:45 crc kubenswrapper[4893]: I0121 06:55:45.111644 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:45Z","lastTransitionTime":"2026-01-21T06:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:45 crc kubenswrapper[4893]: I0121 06:55:45.213790 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:45 crc kubenswrapper[4893]: I0121 06:55:45.213834 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:45 crc kubenswrapper[4893]: I0121 06:55:45.213845 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:45 crc kubenswrapper[4893]: I0121 06:55:45.213861 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:45 crc kubenswrapper[4893]: I0121 06:55:45.213875 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:45Z","lastTransitionTime":"2026-01-21T06:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:45 crc kubenswrapper[4893]: I0121 06:55:45.316483 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:45 crc kubenswrapper[4893]: I0121 06:55:45.316555 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:45 crc kubenswrapper[4893]: I0121 06:55:45.316576 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:45 crc kubenswrapper[4893]: I0121 06:55:45.316608 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:45 crc kubenswrapper[4893]: I0121 06:55:45.316631 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:45Z","lastTransitionTime":"2026-01-21T06:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:45 crc kubenswrapper[4893]: I0121 06:55:45.421191 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:45 crc kubenswrapper[4893]: I0121 06:55:45.421246 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:45 crc kubenswrapper[4893]: I0121 06:55:45.421263 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:45 crc kubenswrapper[4893]: I0121 06:55:45.421285 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:45 crc kubenswrapper[4893]: I0121 06:55:45.421300 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:45Z","lastTransitionTime":"2026-01-21T06:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:45 crc kubenswrapper[4893]: I0121 06:55:45.523725 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:45 crc kubenswrapper[4893]: I0121 06:55:45.523791 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:45 crc kubenswrapper[4893]: I0121 06:55:45.523809 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:45 crc kubenswrapper[4893]: I0121 06:55:45.523835 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:45 crc kubenswrapper[4893]: I0121 06:55:45.523855 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:45Z","lastTransitionTime":"2026-01-21T06:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:45 crc kubenswrapper[4893]: I0121 06:55:45.580274 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:55:45 crc kubenswrapper[4893]: E0121 06:55:45.580538 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rc5gb" podUID="e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8" Jan 21 06:55:45 crc kubenswrapper[4893]: I0121 06:55:45.626495 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:45 crc kubenswrapper[4893]: I0121 06:55:45.626587 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:45 crc kubenswrapper[4893]: I0121 06:55:45.626613 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:45 crc kubenswrapper[4893]: I0121 06:55:45.626637 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:45 crc kubenswrapper[4893]: I0121 06:55:45.626655 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:45Z","lastTransitionTime":"2026-01-21T06:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:45 crc kubenswrapper[4893]: I0121 06:55:45.730374 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:45 crc kubenswrapper[4893]: I0121 06:55:45.730489 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:45 crc kubenswrapper[4893]: I0121 06:55:45.730834 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:45 crc kubenswrapper[4893]: I0121 06:55:45.731153 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:45 crc kubenswrapper[4893]: I0121 06:55:45.731215 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:45Z","lastTransitionTime":"2026-01-21T06:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:45 crc kubenswrapper[4893]: I0121 06:55:45.834049 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:45 crc kubenswrapper[4893]: I0121 06:55:45.834081 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:45 crc kubenswrapper[4893]: I0121 06:55:45.834098 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:45 crc kubenswrapper[4893]: I0121 06:55:45.834116 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:45 crc kubenswrapper[4893]: I0121 06:55:45.834128 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:45Z","lastTransitionTime":"2026-01-21T06:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:45 crc kubenswrapper[4893]: I0121 06:55:45.857761 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 04:25:43.2886544 +0000 UTC Jan 21 06:55:45 crc kubenswrapper[4893]: I0121 06:55:45.936473 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:45 crc kubenswrapper[4893]: I0121 06:55:45.936503 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:45 crc kubenswrapper[4893]: I0121 06:55:45.936512 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:45 crc kubenswrapper[4893]: I0121 06:55:45.936525 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:45 crc kubenswrapper[4893]: I0121 06:55:45.936534 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:45Z","lastTransitionTime":"2026-01-21T06:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.142640 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.142695 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.142706 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.142723 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.142734 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:46Z","lastTransitionTime":"2026-01-21T06:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.147008 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qzsg6_6719fb30-da06-4964-b730-09e444618d94/ovnkube-controller/2.log" Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.150734 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" event={"ID":"6719fb30-da06-4964-b730-09e444618d94","Type":"ContainerStarted","Data":"70b2799a6ad8653010bec92688cf587a90a5a8bfa94c71d5151cf9ffe2ac65d7"} Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.151799 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.261204 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.261255 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.261267 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.261284 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.261299 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:46Z","lastTransitionTime":"2026-01-21T06:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.266282 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac506126-772e-4100-98f9-91c4b32882bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93c89f8db799b46df74cb753f3f21321f420d4fe1976b120ea4aa2853fbf7047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a132569453fd7635474ec4fcb0eab4aad6349e34c6d9e3bc92182433a587bfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07ce83655f22f9db0e6743147fdbde2adc1e02a0b8010cd04f6007f986cf63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29f211094b3236070df769a82ecc2ff2b03c7a44dc4af0484e4ca3b35037621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29f211094b3236070df769a82ecc2ff2b03c7a44dc4af0484e4ca3b35037621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:46Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.300050 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f8eaf9a35d64680bb488050b8821c821635ec7bc1f53bdcd5bb3f5f4bfead3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:46Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.318776 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:46Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.335793 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m8k4g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d8bbd1c92382018299e790a7597f3f588b11c6465db90a876cc98e1d10d4a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f4a3074a4406cdbdf07c7289f9304d66e2b84b46bf0ac9c6aadf31817539dda\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:55:32Z\\\",\\\"message\\\":\\\"2026-01-21T06:54:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_533ecd9b-559b-4017-8fc4-d42565d39848\\\\n2026-01-21T06:54:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_533ecd9b-559b-4017-8fc4-d42565d39848 to /host/opt/cni/bin/\\\\n2026-01-21T06:54:47Z [verbose] multus-daemon started\\\\n2026-01-21T06:54:47Z [verbose] Readiness Indicator file check\\\\n2026-01-21T06:55:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2qn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m8k4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:46Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.349460 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rc5gb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jprb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jprb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rc5gb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:46Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.361123 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ee491ea29d016cb1b74fc66b386aa8056d1d8b3c7ad207cf329749db2b4d638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e705e9b341a3c711cf78ffd1fde692a9517b06fcdcfc2b96543d826c72c5484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:46Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.364039 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.364085 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.364096 4893 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.364115 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.364130 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:46Z","lastTransitionTime":"2026-01-21T06:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.375193 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1095483a1c6cc4597500607b4423c12c3fc03500c2f3b8f3fc5fc6eae8c34d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:46Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.386253 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlrc6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e26ce1b-e6f7-4612-aa11-69ad21c97870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64b144aa65cc6fbbe03fe4268155648a64e7360a0415e11a86fbc0373af5a4d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j65k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlrc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:46Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.403115 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2101f59b-4610-4451-83eb-86fe80385cf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a82b561fe0d124a785d8417b0f810757464a5ccc70c032a46eb0a4ad932939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2a508699e746bc42337b9e10d1cb94b36eb53292a5ca91de2e8f03eb8f671c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf06f9b5e844685f04ee12cbf239e285f1597f6a3c6444a4160596392905c4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2417cb0495ebd48a0bf9f8e46971fdbd70fd7e7c312741cead38fec69d1d972\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:54:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 06:54:40.367563 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 06:54:40.368234 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:54:40.369436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4080492758/tls.crt::/tmp/serving-cert-4080492758/tls.key\\\\\\\"\\\\nI0121 06:54:40.606405 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:54:40.609631 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:54:40.609649 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:54:40.609684 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:54:40.609691 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:54:40.617391 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 06:54:40.617410 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617413 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617418 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:54:40.617421 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:54:40.617423 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:54:40.617426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 06:54:40.617614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 06:54:40.618646 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf70c5621061fc94a32901eb6f15a0d15b2ceba333d27cf88624bf9aa4ebe82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:46Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.419489 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077e47b3-6224-4749-9710-d2b308b43208\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa06c3d835def34e52c4a9b4b87d9dc8998cdefbb5eaf7c8046bf263857ef8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e698ff120a5858fa787a65c1bdaa3966dcb8974df9cbca40470f6ec58bca5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://553f6c2b8ff41184065bcf707d657326891027d0c5b8390ce50f53cdfa654d2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30521319002f52220ec6c1e4c92862f5a81e1dcace01f4a4474e3a2441b955c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:46Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.430088 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50340650-5fb6-4aca-b931-b2ae5e1754b4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c820663d2de329853dbd3b67c91a5491f9000bc6f1f9cd5143be1c50d06279aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9235f82557cdaf86d385d1660b19da09a65edcdffa915b36f633d597599f05ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318
bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9235f82557cdaf86d385d1660b19da09a65edcdffa915b36f633d597599f05ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:46Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.464932 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:46Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.466772 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.466823 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.466842 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.466860 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.466879 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:46Z","lastTransitionTime":"2026-01-21T06:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.479916 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h28gn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef1a4b3d1dc6d23382f8cbbc07674981a9fb90c5068318d8f78e87b0af85b5ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc1e630d2e854e97d3e156ca2c28de365e095aaf1fe7b6779d2a6b938c51024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cc1e630d2e854e97d3e156ca2c28de365e095aaf1fe7b6779d2a6b938c51024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h28gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:46Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.490014 4893 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-42mq5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc8e905-b368-49e8-adfa-31890665e5ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49cefc1948611ccad178b25e80e75e81bdf1b4b578d3fb58fa7c342d22debadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-42mq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:46Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.503530 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5dc99ccba68df748aa327298285fec6936c75a3327906d9c789bf75c04815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59520d6be8547ef44262866e4c11b1ae43ae8ef545545a93c291f5e238718a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hg78p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:46Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.521561 4893 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6719fb30-da06-4964-b730-09e444618d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70b2799a6ad8653010bec92688cf587a90a5a8bfa94c71d5151cf9ffe2ac65d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14703fb64cc13f6d04b021d4d9de4505e58912bc747450c747dcb5cb53431ab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:55:15Z\\\",\\\"message\\\":\\\"b00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 06:55:14.977977 6525 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 06:55:14.978018 6525 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 06:55:14.975782 6525 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-wlrc6 after 0 failed attempt(s)\\\\nI0121 06:55:14.984317 6525 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-wlrc6\\\\nI0121 06:55:14.984373 6525 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-m8k4g\\\\nI0121 06:55:14.984399 6525 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-m8k4g\\\\nI0121 
06:55:14.9757\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:55:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\
":[{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzsg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:46Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.531730 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p7vw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bace7a0-7349-45d1-a407-d64a31a0d41c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f00d5d862b54660a5df58a9c9df0b42a453f6990789e83d5e6f67aab68471665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v88cx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecad777c0a42352ca734f5f85952ab369e5cc132f06f748983d7c11949f0fe58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v88cx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p7vw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:46Z is after 2025-08-24T17:21:41Z" Jan 21 
06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.543747 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:46Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.568666 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.568715 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.568723 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.568738 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.568746 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:46Z","lastTransitionTime":"2026-01-21T06:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
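Note: the check-endpoints container above carries lastState.terminated with exitCode 137 and reason ContainerStatusUnknown, plus the message that the container could not be located when the pod was deleted. 137 follows the usual 128+signal convention, i.e. SIGKILL; when the runtime can no longer find a container, the kubelet appears to record this synthetic terminated status rather than a real exit code. A one-liner to decode the convention:

```go
// exitcode.go: decode the 128+N exit-status convention (137 => SIGKILL).
package main

import (
	"fmt"
	"syscall"
)

func main() {
	code := 137
	if code > 128 {
		sig := syscall.Signal(code - 128)
		fmt.Printf("exit code %d => terminated by signal %d (%v)\n", code, int(sig), sig)
	}
}
```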
Has your network provider started?"} Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.580132 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.580177 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:55:46 crc kubenswrapper[4893]: E0121 06:55:46.580239 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:55:46 crc kubenswrapper[4893]: E0121 06:55:46.580315 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.580509 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:55:46 crc kubenswrapper[4893]: E0121 06:55:46.580740 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.671663 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.671727 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.671740 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.671756 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.671765 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:46Z","lastTransitionTime":"2026-01-21T06:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
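Note: the repeating NodeNotReady condition ("no CNI configuration file in /etc/kubernetes/cni/net.d/") and the "No sandbox for pod can be found" / "Error syncing pod" entries are two sides of the same gate: the container runtime reports NetworkReady=false until a CNI config appears in its conf directory, and while that holds the kubelet both marks the node NotReady and refuses to create new pod sandboxes. On this node the config is normally written by ovnkube-controller, which is crash-looping (see the CrashLoopBackOff entry below). A rough approximation of the directory check, assuming the usual libcni convention of accepting .conf, .conflist and .json files:

```go
// cniready.go: approximate "is any CNI config present yet?" for a conf dir.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func cniConfigPresent(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		switch strings.ToLower(filepath.Ext(e.Name())) {
		case ".conf", ".conflist", ".json":
			return true, nil // a single config file flips NetworkReady
		}
	}
	return false, nil
}

func main() {
	ok, err := cniConfigPresent("/etc/kubernetes/cni/net.d")
	fmt.Println("network ready:", ok, "err:", err)
}
```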
Has your network provider started?"} Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.774633 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.774698 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.774711 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.774729 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.774741 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:46Z","lastTransitionTime":"2026-01-21T06:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.858833 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 02:27:58.531076636 +0000 UTC Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.878112 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.878186 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.878209 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.878239 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.878262 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:46Z","lastTransitionTime":"2026-01-21T06:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
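Note: the certificate_manager line above concerns the kubelet's own serving certificate and is unrelated to the webhook failure: expiration 2026-02-24, rotation deadline 2026-01-16, which is already in the past, so a rotation attempt is due immediately. client-go's certificate manager picks that deadline as a jittered point in the tail of the certificate's validity window (commonly described as between 70% and 90% of the lifetime). A sketch of that computation; since NotBefore is not in the log, a 90-day lifetime is assumed here for illustration:

```go
// rotation.go: jittered rotation deadline in the style of client-go's
// certificate manager (a random point in the 70-90% band of validity).
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jitter := 0.7 + 0.2*rand.Float64() // uniform in [0.7, 0.9)
	return notBefore.Add(time.Duration(float64(total) * jitter))
}

func main() {
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")
	notBefore := notAfter.Add(-90 * 24 * time.Hour) // assumed lifetime
	d := rotationDeadline(notBefore, notAfter)
	fmt.Println("rotate at:", d, "overdue:", time.Now().After(d))
}
```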
Has your network provider started?"} Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.981219 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.981287 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.981309 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.981339 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:46 crc kubenswrapper[4893]: I0121 06:55:46.981361 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:46Z","lastTransitionTime":"2026-01-21T06:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.084046 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.084083 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.084091 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.084106 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.084118 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:47Z","lastTransitionTime":"2026-01-21T06:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.159252 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qzsg6_6719fb30-da06-4964-b730-09e444618d94/ovnkube-controller/3.log" Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.160472 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qzsg6_6719fb30-da06-4964-b730-09e444618d94/ovnkube-controller/2.log" Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.163900 4893 generic.go:334] "Generic (PLEG): container finished" podID="6719fb30-da06-4964-b730-09e444618d94" containerID="70b2799a6ad8653010bec92688cf587a90a5a8bfa94c71d5151cf9ffe2ac65d7" exitCode=1 Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.163942 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" event={"ID":"6719fb30-da06-4964-b730-09e444618d94","Type":"ContainerDied","Data":"70b2799a6ad8653010bec92688cf587a90a5a8bfa94c71d5151cf9ffe2ac65d7"} Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.164017 4893 scope.go:117] "RemoveContainer" containerID="e14703fb64cc13f6d04b021d4d9de4505e58912bc747450c747dcb5cb53431ab" Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.164712 4893 scope.go:117] "RemoveContainer" containerID="70b2799a6ad8653010bec92688cf587a90a5a8bfa94c71d5151cf9ffe2ac65d7" Jan 21 06:55:47 crc kubenswrapper[4893]: E0121 06:55:47.164868 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qzsg6_openshift-ovn-kubernetes(6719fb30-da06-4964-b730-09e444618d94)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" podUID="6719fb30-da06-4964-b730-09e444618d94" Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.186555 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.186598 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.186616 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.186526 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
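Note: this is where the cascade begins. ovnkube-controller, the component that writes the OVN CNI config, exits 1 (its truncated termination message full of libovsdb mutate operations appears in the large patch earlier), the kubelet records ContainerDied, and the restart is deferred with CrashLoopBackOff "back-off 40s". The 40s figure is consistent with the kubelet's default crash-loop schedule of a 10s base doubling per restart up to a 5m cap, given restartCount 3. A sketch of that schedule:

```go
// backoff.go: kubelet-style crash-loop backoff, 10s base doubling per
// restart, capped at 5 minutes (restart 3 => 40s, as in the log).
package main

import (
	"fmt"
	"time"
)

func crashLoopBackOff(restarts int) time.Duration {
	const base = 10 * time.Second
	const cap = 5 * time.Minute
	d := base
	for i := 1; i < restarts; i++ {
		d *= 2
		if d >= cap {
			return cap
		}
	}
	return d
}

func main() {
	for r := 1; r <= 6; r++ {
		fmt.Printf("restart %d -> back-off %v\n", r, crashLoopBackOff(r))
	}
}
```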
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac506126-772e-4100-98f9-91c4b32882bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93c89f8db799b46df74cb753f3f21321f420d4fe1976b120ea4aa2853fbf7047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a132569453fd7635474ec4fcb0eab4aad6349e34c6d9e3bc92182433a587bfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07ce83655f22f9db0e6743147fdbde2adc1e02a0b8010cd04f6007f986cf63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29f211094b3236070df769a82ecc2ff2b03c7a44dc4af0484e4ca3b35037621\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29f211094b3236070df769a82ecc2ff2b03c7a44dc4af0484e4ca3b35037621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.186640 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.186771 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:47Z","lastTransitionTime":"2026-01-21T06:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.210135 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f8eaf9a35d64680bb488050b8821c821635ec7bc1f53bdcd5bb3f5f4bfead3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.225113 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.307569 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m8k4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d8bbd1c92382018299e790a7597f3f588b11c6465db90a876cc98e1d10d4a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f4a3074a4406cdbdf07c7289f9304d66e2b84b46bf0ac9c6aadf31817539dda\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:55:32Z\\\",\\\"message\\\":\\\"2026-01-21T06:54:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_533ecd9b-559b-4017-8fc4-d42565d39848\\\\n2026-01-21T06:54:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_533ecd9b-559b-4017-8fc4-d42565d39848 to 
/host/opt/cni/bin/\\\\n2026-01-21T06:54:47Z [verbose] multus-daemon started\\\\n2026-01-21T06:54:47Z [verbose] Readiness Indicator file check\\\\n2026-01-21T06:55:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2qn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m8k4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.309245 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.309307 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.309318 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.309334 4893 kubelet_node_status.go:724] "Recording event message for node" 
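Note: the multus termination message embedded in the patch above records the same dependency from the consumer's side: kube-multus copied its CNI binaries, then polled for the readiness indicator file /host/run/multus/cni/net.d/10-ovn-kubernetes.conf from 06:54:47 to 06:55:32 and gave up, because the crash-looping ovnkube-controller never wrote it. The "pollimmediate error: timed out waiting for the condition" wording matches the k8s.io/apimachinery wait helpers. A sketch of that wait; the 1s interval and 45s timeout are chosen for illustration, not taken from multus's actual settings:

```go
// readiness.go: poll for a readiness-indicator file, in the style the
// multus error text points to (wait.PollImmediate).
package main

import (
	"fmt"
	"os"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	path := "/host/run/multus/cni/net.d/10-ovn-kubernetes.conf"
	err := wait.PollImmediate(1*time.Second, 45*time.Second, func() (bool, error) {
		_, statErr := os.Stat(path)
		if statErr == nil {
			return true, nil // indicator exists: default network is ready
		}
		if os.IsNotExist(statErr) {
			return false, nil // keep polling
		}
		return false, statErr // real error: stop early
	})
	if err != nil {
		// On timeout this prints "timed out waiting for the condition",
		// the exact text embedded in the multus log above.
		fmt.Println("readiness indicator:", err)
		os.Exit(1)
	}
	fmt.Println("default network ready")
}
```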
node="crc" event="NodeNotReady" Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.309345 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:47Z","lastTransitionTime":"2026-01-21T06:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.322736 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rc5gb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jprb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jprb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rc5gb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.334829 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ee491ea29d016cb1b74fc66b386aa8056d1d8b3c7ad207cf329749db2b4d638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e705e9b341a3c711cf78ffd1fde692a9517b06fcdcfc2b96543d826c72c5484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:47Z is after 
2025-08-24T17:21:41Z" Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.346334 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1095483a1c6cc4597500607b4423c12c3fc03500c2f3b8f3fc5fc6eae8c34d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.354817 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlrc6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e26ce1b-e6f7-4612-aa11-69ad21c97870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64b144aa65cc6fbbe03fe4268155648a64e7360a0415e11a86fbc0373af5a4d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j65k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlrc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.364796 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5dc99ccba68df748aa327298285fec6936c75a3327906d9c789bf75c04815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59520d6be8547ef44262866e4c11b1ae43ae8ef545545a93c291f5e238718a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hg78p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.379981 4893 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6719fb30-da06-4964-b730-09e444618d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70b2799a6ad8653010bec92688cf587a90a5a8bfa94c71d5151cf9ffe2ac65d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e14703fb64cc13f6d04b021d4d9de4505e58912bc747450c747dcb5cb53431ab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:55:15Z\\\",\\\"message\\\":\\\"b00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 06:55:14.977977 6525 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 06:55:14.978018 6525 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 06:55:14.975782 6525 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-wlrc6 after 0 failed attempt(s)\\\\nI0121 06:55:14.984317 6525 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-wlrc6\\\\nI0121 06:55:14.984373 6525 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-m8k4g\\\\nI0121 06:55:14.984399 6525 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-m8k4g\\\\nI0121 
06:55:14.9757\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:55:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70b2799a6ad8653010bec92688cf587a90a5a8bfa94c71d5151cf9ffe2ac65d7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:55:46Z\\\",\\\"message\\\":\\\"06:55:46.717707 6910 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 06:55:46.717731 6910 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-m8k4g\\\\nI0121 06:55:46.717738 6910 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-m8k4g\\\\nI0121 06:55:46.717744 6910 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-m8k4g in node crc\\\\nI0121 06:55:46.717746 6910 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0121 06:55:46.717750 6910 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-m8k4g after 0 failed attempt(s)\\\\nI0121 06:55:46.717758 6910 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-m8k4g\\\\nF0121 06:55:46.717805 6910 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller 
initializa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47
ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzsg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.390927 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2101f59b-4610-4451-83eb-86fe80385cf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a82b561fe0d124a785d8417b0f810757464a5ccc70c032a46eb0a4ad932939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2a508699e746bc42337b9e10d1cb94b36eb53292a5ca91de2e8f03eb8f671c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-c
rc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf06f9b5e844685f04ee12cbf239e285f1597f6a3c6444a4160596392905c4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2417cb0495ebd48a0bf9f8e46971fdbd70fd7e7c312741cead38fec69d1d972\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:54:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 06:54:40.367563 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 06:54:40.368234 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:54:40.369436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4080492758/tls.crt::/tmp/serving-cert-4080492758/tls.key\\\\\\\"\\\\nI0121 06:54:40.606405 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:54:40.609631 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:54:40.609649 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:54:40.609684 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:54:40.609691 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:54:40.617391 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 06:54:40.617410 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617413 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 
06:54:40.617418 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:54:40.617421 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:54:40.617423 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:54:40.617426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 06:54:40.617614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 06:54:40.618646 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf70c5621061fc94a32901eb6f15a0d15b2ceba333d27cf88624bf9aa4ebe82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.401024 4893 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"077e47b3-6224-4749-9710-d2b308b43208\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa06c3d835def34e52c4a9b4b87d9dc8998cdefbb5eaf7c8046bf263857ef8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e698ff120a5858fa787a65c1bdaa3966dcb8974df9cbca40470f6ec58bca5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://553f6c2b8ff41184065bcf707d657326891027d0c5b8390ce50f53cdfa654d2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30521319002f52220ec6c1e4c92862f5a81e1dcace01f4a44
74e3a2441b955c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.410897 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50340650-5fb6-4aca-b931-b2ae5e1754b4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c820663d2de329853dbd3b67c91a5491f9000bc6f1f9cd5143be1c50d06279aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9235f82557cdaf86d385d1660b19da09a65edcdffa915b36f633d597599f05ba\\\",
\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9235f82557cdaf86d385d1660b19da09a65edcdffa915b36f633d597599f05ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.411943 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.411998 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.412010 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.412026 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.412039 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:47Z","lastTransitionTime":"2026-01-21T06:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.422703 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.434741 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h28gn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef1a4b3d1dc6d23382f8cbbc07674981a9fb90c5068318d8f78e87b0af85b5ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc1e630d2e854e97d3e156ca2c28de365e095aaf1fe7b6779d2a6b938c51024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cc1e630d2e854e97d3e156ca2c28de365e095aaf1fe7b6779d2a6b938c51024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h28gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.444749 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-42mq5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc8e905-b368-49e8-adfa-31890665e5ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49cefc1948611ccad178b25e80e75e81bdf1b4b578d3fb58fa7c342d22debadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-42mq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.454012 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p7vw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bace7a0-7349-45d1-a407-d64a31a0d41c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f00d5d862b54660a5df58a9c9df0b42a453f6990789e83d5e6f67aab68471665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v88cx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecad777c0a42352ca734f5f85952ab369e5cc132f06f748983d7c11949f0fe58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v88cx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p7vw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:47Z is after 2025-08-24T17:21:41Z" Jan 21 
06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.465469 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.515100 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.515139 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.515147 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.515161 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.515170 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:47Z","lastTransitionTime":"2026-01-21T06:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.580266 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:55:47 crc kubenswrapper[4893]: E0121 06:55:47.580409 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rc5gb" podUID="e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8" Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.617357 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.617394 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.617403 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.617418 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.617426 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:47Z","lastTransitionTime":"2026-01-21T06:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.720943 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.721027 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.721052 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.721082 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.721106 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:47Z","lastTransitionTime":"2026-01-21T06:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.824307 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.824445 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.824460 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.824479 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.824492 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:47Z","lastTransitionTime":"2026-01-21T06:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.859847 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 15:55:30.851521851 +0000 UTC Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.927092 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.927169 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.927192 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.927228 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:47 crc kubenswrapper[4893]: I0121 06:55:47.927254 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:47Z","lastTransitionTime":"2026-01-21T06:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.030908 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.030974 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.030994 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.031019 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.031035 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:48Z","lastTransitionTime":"2026-01-21T06:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.133956 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.134017 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.134032 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.134053 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.134078 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:48Z","lastTransitionTime":"2026-01-21T06:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.171168 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qzsg6_6719fb30-da06-4964-b730-09e444618d94/ovnkube-controller/3.log" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.176325 4893 scope.go:117] "RemoveContainer" containerID="70b2799a6ad8653010bec92688cf587a90a5a8bfa94c71d5151cf9ffe2ac65d7" Jan 21 06:55:48 crc kubenswrapper[4893]: E0121 06:55:48.176515 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qzsg6_openshift-ovn-kubernetes(6719fb30-da06-4964-b730-09e444618d94)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" podUID="6719fb30-da06-4964-b730-09e444618d94" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.202216 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ee491ea29d016cb1b74fc66b386aa8056d1d8b3c7ad207cf329749db2b4d638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e705e9b341a3c711cf78ffd1fde692a9517b06fcdcfc2b96543d826c72c5484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identi
ty-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:48Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.222582 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1095483a1c6cc4597500607b4423c12c3fc03500c2f3b8f3fc5fc6eae8c34d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:48Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.237024 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.237062 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.237074 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.237089 4893 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.237102 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:48Z","lastTransitionTime":"2026-01-21T06:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.237199 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlrc6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e26ce1b-e6f7-4612-aa11-69ad21c97870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64b144aa65cc6fbbe03fe4268155648a64e7360a0415e11a86fbc0373af5a4d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j65k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlrc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:48Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.251267 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-42mq5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc8e905-b368-49e8-adfa-31890665e5ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49cefc1948611ccad178b25e80e75e81bdf1b4b578d3fb58fa7c342d22debadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-42mq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:48Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.268261 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5dc99ccba68df748aa327298285fec6936c75a3327906d9c789bf75c04815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59520d6be8547ef44262866e4c11b1ae43ae8ef545545a93c291f5e238718a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hg78p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:48Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.294233 4893 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6719fb30-da06-4964-b730-09e444618d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70b2799a6ad8653010bec92688cf587a90a5a8bfa94c71d5151cf9ffe2ac65d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70b2799a6ad8653010bec92688cf587a90a5a8bfa94c71d5151cf9ffe2ac65d7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:55:46Z\\\",\\\"message\\\":\\\"06:55:46.717707 6910 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 06:55:46.717731 6910 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-m8k4g\\\\nI0121 06:55:46.717738 6910 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-m8k4g\\\\nI0121 06:55:46.717744 6910 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-m8k4g in node crc\\\\nI0121 06:55:46.717746 6910 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0121 06:55:46.717750 6910 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-m8k4g after 0 failed attempt(s)\\\\nI0121 06:55:46.717758 6910 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-m8k4g\\\\nF0121 06:55:46.717805 6910 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initializa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:55:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qzsg6_openshift-ovn-kubernetes(6719fb30-da06-4964-b730-09e444618d94)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzsg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:48Z is after 2025-08-24T17:21:41Z"
Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.319309 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2101f59b-4610-4451-83eb-86fe80385cf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a82b561fe0d124a785d8417b0f810757464a5ccc70c032a46eb0a4ad932939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2a508699e746bc42337b9e10d1cb94b36eb53292a5ca91de2e8f03eb8f671c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf06f9b5e844685f04ee12cbf239e285f1597f6a3c6444a4160596392905c4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2417cb0495ebd48a0bf9f8e46971fdbd70fd7e7c312741cead38fec69d1d972\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:54:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 06:54:40.367563 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 06:54:40.368234 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:54:40.369436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4080492758/tls.crt::/tmp/serving-cert-4080492758/tls.key\\\\\\\"\\\\nI0121 06:54:40.606405 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:54:40.609631 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:54:40.609649 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:54:40.609684 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:54:40.609691 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:54:40.617391 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 06:54:40.617410 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617413 1 secure_serving.go:69] Use of insecure cipher
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617418 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:54:40.617421 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:54:40.617423 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:54:40.617426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 06:54:40.617614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 06:54:40.618646 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf70c5621061fc94a32901eb6f15a0d15b2ceba333d27cf88624bf9aa4ebe82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:48Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.338815 4893 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"077e47b3-6224-4749-9710-d2b308b43208\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa06c3d835def34e52c4a9b4b87d9dc8998cdefbb5eaf7c8046bf263857ef8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e698ff120a5858fa787a65c1bdaa3966dcb8974df9cbca40470f6ec58bca5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://553f6c2b8ff41184065bcf707d657326891027d0c5b8390ce50f53cdfa654d2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30521319002f52220ec6c1e4c92862f5a81e1dca
ce01f4a4474e3a2441b955c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:48Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.340169 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.340202 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.340214 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.340230 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.340243 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:48Z","lastTransitionTime":"2026-01-21T06:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.350656 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50340650-5fb6-4aca-b931-b2ae5e1754b4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c820663d2de329853dbd3b67c91a5491f9000bc6f1f9cd5143be1c50d06279aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9235f82557cdaf86d385d1660b19da09a65edcdffa915b36f633d597599f05ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9235f82557cdaf86d385d1660b19da09a65edcdffa915b36f633d597599f05ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:48Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.385187 4893 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:48Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.399116 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h28gn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef1a4b3d1dc6d23382f8cbbc07674981a9fb90c5068318d8f78e87b0af85b5ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc1e630d2e854e97d3e156ca2c28de365e095aaf1fe7b6779d2a6b938c51024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cc1e630d2e854e97d3e156ca2c28de365e095aaf1fe7b6779d2a6b938c51024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h28gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:48Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.412600 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p7vw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bace7a0-7349-45d1-a407-d64a31a0d41c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f00d5d862b54660a5df58a9c9df0b42a453f6990789e83d5e6f67aab68471665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v88cx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecad777c0a42352ca734f5f85952ab369e5cc132f06f748983d7c11949f0fe58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v88cx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p7vw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:48Z is after 2025-08-24T17:21:41Z" Jan 21 
06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.426665 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:48Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.439149 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac506126-772e-4100-98f9-91c4b32882bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93c89f8db799b46df74cb753f3f21321f420d4fe1976b120ea4aa2853fbf7047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a132569453fd7635474ec4fcb0eab4aad6349e34c6d9e3bc92182433a587bfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07ce83655f22f9db0e6743147fdbde2adc1e02a0b8010cd04f6007f986cf63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29f211094b3236070df769a82ecc2ff2b03c7a44dc4af0484e4ca3b35037621\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29f211094b3236070df769a82ecc2ff2b03c7a44dc4af0484e4ca3b35037621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:48Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.442864 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.442906 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.442914 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.442928 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.442940 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:48Z","lastTransitionTime":"2026-01-21T06:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.455055 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f8eaf9a35d64680bb488050b8821c821635ec7bc1f53bdcd5bb3f5f4bfead3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:48Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.468640 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:48Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.484517 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m8k4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d8bbd1c92382018299e790a7597f3f588b11c6465db90a876cc98e1d10d4a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f4a3074a4406cdbdf07c7289f9304d66e2b84b46bf0ac9c6aadf31817539dda\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:55:32Z\\\",\\\"message\\\":\\\"2026-01-21T06:54:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_533ecd9b-559b-4017-8fc4-d42565d39848\\\\n2026-01-21T06:54:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_533ecd9b-559b-4017-8fc4-d42565d39848 to 
/host/opt/cni/bin/\\\\n2026-01-21T06:54:47Z [verbose] multus-daemon started\\\\n2026-01-21T06:54:47Z [verbose] Readiness Indicator file check\\\\n2026-01-21T06:55:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2qn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m8k4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:48Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.495345 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rc5gb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jprb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jprb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rc5gb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:48Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.545073 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.545109 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.545118 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.545133 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.545143 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:48Z","lastTransitionTime":"2026-01-21T06:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.580635 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.580712 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.581102 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:55:48 crc kubenswrapper[4893]: E0121 06:55:48.581260 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:55:48 crc kubenswrapper[4893]: E0121 06:55:48.581349 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:55:48 crc kubenswrapper[4893]: E0121 06:55:48.581437 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.647494 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.647535 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.647548 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.647565 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.647576 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:48Z","lastTransitionTime":"2026-01-21T06:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.751201 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.751261 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.751279 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.751303 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.751321 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:48Z","lastTransitionTime":"2026-01-21T06:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.854903 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.854979 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.855006 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.855043 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.855066 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:48Z","lastTransitionTime":"2026-01-21T06:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.860066 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 21:04:29.524517583 +0000 UTC Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.958100 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.958135 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.958148 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.958166 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:48 crc kubenswrapper[4893]: I0121 06:55:48.958176 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:48Z","lastTransitionTime":"2026-01-21T06:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.060444 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.060485 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.060493 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.060507 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.060516 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:49Z","lastTransitionTime":"2026-01-21T06:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.164380 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.164423 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.164434 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.164455 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.164470 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:49Z","lastTransitionTime":"2026-01-21T06:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.268655 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.268743 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.268756 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.268777 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.268791 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:49Z","lastTransitionTime":"2026-01-21T06:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.372628 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.372715 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.372735 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.372759 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.372778 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:49Z","lastTransitionTime":"2026-01-21T06:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.476238 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.476282 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.476293 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.476311 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.476326 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:49Z","lastTransitionTime":"2026-01-21T06:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.579267 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.579311 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.579322 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.579338 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.579350 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:49Z","lastTransitionTime":"2026-01-21T06:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.581119 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:55:49 crc kubenswrapper[4893]: E0121 06:55:49.581305 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rc5gb" podUID="e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8" Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.602266 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb5dc99ccba68df748aa327298285fec6936c75a3327906d9c789bf75c04815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59520d6be8547ef44262866e4c11b1ae43ae8ef545545a93c291f5e238718a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwcm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hg78p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:49Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.627726 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6719fb30-da06-4964-b730-09e444618d94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70b2799a6ad8653010bec92688cf587a90a5a8bf
a94c71d5151cf9ffe2ac65d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70b2799a6ad8653010bec92688cf587a90a5a8bfa94c71d5151cf9ffe2ac65d7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:55:46Z\\\",\\\"message\\\":\\\"06:55:46.717707 6910 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 06:55:46.717731 6910 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-m8k4g\\\\nI0121 06:55:46.717738 6910 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-m8k4g\\\\nI0121 06:55:46.717744 6910 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-m8k4g in node crc\\\\nI0121 06:55:46.717746 6910 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0121 06:55:46.717750 6910 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-m8k4g after 0 failed attempt(s)\\\\nI0121 06:55:46.717758 6910 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-m8k4g\\\\nF0121 06:55:46.717805 6910 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initializa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:55:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qzsg6_openshift-ovn-kubernetes(6719fb30-da06-4964-b730-09e444618d94)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxcrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzsg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:49Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.644186 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2101f59b-4610-4451-83eb-86fe80385cf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a82b561fe0d124a785d8417b0f810757464a5ccc70c032a46eb0a4ad932939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2a508699e746bc42337b9e10d1cb94b36eb53292a5ca91de2e8f03eb8f671c\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf06f9b5e844685f04ee12cbf239e285f1597f6a3c6444a4160596392905c4a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2417cb0495ebd48a0bf9f8e46971fdbd70fd7e7c312741cead38fec69d1d972\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:54:40Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 06:54:40.367563 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 06:54:40.368234 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:54:40.369436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4080492758/tls.crt::/tmp/serving-cert-4080492758/tls.key\\\\\\\"\\\\nI0121 06:54:40.606405 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:54:40.609631 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:54:40.609649 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:54:40.609684 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:54:40.609691 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:54:40.617391 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 06:54:40.617410 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617413 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:54:40.617418 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:54:40.617421 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:54:40.617423 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:54:40.617426 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 06:54:40.617614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 06:54:40.618646 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf70c5621061fc94a32901eb6f15a0d15b2ceba333d27cf88624bf9aa4ebe82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:49Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.695235 4893 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.695281 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.695292 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.695309 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.695320 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:49Z","lastTransitionTime":"2026-01-21T06:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.705239 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"077e47b3-6224-4749-9710-d2b308b43208\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa06c3d835def34e52c4a9b4b87d9dc8998cdefbb5eaf7c8046bf263857ef8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e698ff120a5858fa787a65c1bdaa3966dcb8974df9cbca40470f6ec58bca5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://553f6c2b8ff41184065bcf707d657326891027d0c5b8390ce50f53cdfa654d2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30521319002f52220ec6c1e4c92862f5a81e1dcace01f4a4474e3a2441b955c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:49Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.723037 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50340650-5fb6-4aca-b931-b2ae5e1754b4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c820663d2de329853dbd3b67c91a5491f9000bc6f1f9cd5143be1c50d06279aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9235f82557cdaf86d385d1660b19da09a65edcdffa915b36f633d597599f05ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9235f82557cdaf86d385d1660b19da09a65edcdffa915b36f633d597599f05ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:49Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.741291 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:49Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.757148 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h28gn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c6ae7-fdf7-44d1-ae88-f6abbb247f93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef1a4b3d1dc6d23382f8cbbc07674981a9fb90c5068318d8f78e87b0af85b5ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://485ce084cc65618894b72b880fc32f4c1e308b0f619743b5bb3f92ab5d1ad6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06db8cad138692765ed52fcd212df45e9957386a245b2c85542f68f9179c8214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8da16fd036f77c0a28f53fb7a400466d95b9a9c7b1e7ff06017a8b241a13043e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b7e47c3566636426786d60340c6e933ba06611f5ac454597886ba400d93f22d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08b6cfa150ff457d94bc529c31f9b0dbb8dfd7a3b7388b95ff9479b115795736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc1e630d2e854e97d3e156ca2c28de365e095aaf1fe7b6779d2a6b938c51024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cc1e630d2e854e97d3e156ca2c28de365e095aaf1fe7b6779d2a6b938c51024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cshbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h28gn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:49Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.768958 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-42mq5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc8e905-b368-49e8-adfa-31890665e5ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49cefc1948611ccad178b25e80e75e81bdf1b4b578d3fb58fa7c342d22debadd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-42mq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:49Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.781071 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p7vw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2bace7a0-7349-45d1-a407-d64a31a0d41c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f00d5d862b54660a5df58a9c9df0b42a453f6990789e83d5e6f67aab68471665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v88cx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecad777c0a42352ca734f5f85952ab369e5cc132f06f748983d7c11949f0fe58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v88cx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p7vw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:49Z is after 2025-08-24T17:21:41Z" Jan 21 
06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.795446 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:49Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.797714 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.797746 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.797757 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.797773 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.797784 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:49Z","lastTransitionTime":"2026-01-21T06:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
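
The long quoted bodies in these "failed to patch status" records are strategic-merge patches built by the kubelet's status manager. The "$setElementOrder/conditions" key is a merge directive rather than status data: pod conditions are merged by their "type" key, and the directive tells the API server which order to keep the merged list in. Below is a stdlib-only sketch of the same patch shape, with the UID and one condition copied from the first record above and everything else trimmed.

    // statuspatch.go - sketch of the strategic-merge-patch shape seen in the
    // "failed to patch status" records; values are copied from the log above.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        patch := map[string]any{
            "metadata": map[string]any{"uid": "5cc8e905-b368-49e8-adfa-31890665e5ae"},
            "status": map[string]any{
                // Merge directive: preserve this ordering of the conditions
                // list, whose elements are matched by their "type" key.
                "$setElementOrder/conditions": []map[string]string{
                    {"type": "PodReadyToStartContainers"},
                    {"type": "Initialized"},
                    {"type": "Ready"},
                    {"type": "ContainersReady"},
                    {"type": "PodScheduled"},
                },
                // The actual updates to merge in (trimmed to one condition).
                "conditions": []map[string]any{
                    {"type": "Ready", "status": "True", "lastTransitionTime": "2026-01-21T06:54:43Z"},
                },
            },
        }
        out, _ := json.MarshalIndent(patch, "", "  ")
        fmt.Println(string(out))
    }
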
Has your network provider started?"} Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.806547 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac506126-772e-4100-98f9-91c4b32882bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93c89f8db799b46df74cb753f3f21321f420d4fe1976b120ea4aa2853fbf7047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a132569453fd7635474ec4fcb0eab4aad6349e34c6d9e3bc92182433a587bfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07ce83655f22f9db0e6743147fdbde2adc1e02a0b8010cd04f6007f986cf63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29f211094b3236070df769a82ecc2ff2b03c7a44dc4af0484e4ca3b35037621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29f211094b3236070df769a82ecc2ff2b03c7a44dc4af0484e4ca3b35037621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:20Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:49Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.818936 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f8eaf9a35d64680bb488050b8821c821635ec7bc1f53bdcd5bb3f5f4bfead3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:49Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.834982 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:49Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.851155 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m8k4g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb64775-90e7-43a2-a5a8-4d73e348dcc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d8bbd1c92382018299e790a7597f3f588b11c6465db90a876cc98e1d10d4a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f4a3074a4406cdbdf07c7289f9304d66e2b84b46bf0ac9c6aadf31817539dda\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:55:32Z\\\",\\\"message\\\":\\\"2026-01-21T06:54:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_533ecd9b-559b-4017-8fc4-d42565d39848\\\\n2026-01-21T06:54:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_533ecd9b-559b-4017-8fc4-d42565d39848 to /host/opt/cni/bin/\\\\n2026-01-21T06:54:47Z [verbose] multus-daemon started\\\\n2026-01-21T06:54:47Z [verbose] Readiness Indicator file check\\\\n2026-01-21T06:55:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2qn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m8k4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:49Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.860195 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 15:32:05.267265985 +0000 UTC Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.861790 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rc5gb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jprb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jprb6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rc5gb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:49Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.877909 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ee491ea29d016cb1b74fc66b386aa8056d1d8b3c7ad207cf329749db2b4d638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e705e9b341a3c711cf78ffd1fde692a9517b06fcdcfc2b96543d826c72c5484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:49Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.889454 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1095483a1c6cc4597500607b4423c12c3fc03500c2f3b8f3fc5fc6eae8c34d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:49Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.899875 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.899925 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.899940 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.899959 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.899971 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:49Z","lastTransitionTime":"2026-01-21T06:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:49 crc kubenswrapper[4893]: I0121 06:55:49.901405 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wlrc6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e26ce1b-e6f7-4612-aa11-69ad21c97870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64b144aa65cc6fbbe03fe4268155648a64e7360a0415e11a86fbc0373af5a4d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j65k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:54:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wlrc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:49Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:50 crc kubenswrapper[4893]: I0121 06:55:50.002544 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:50 crc kubenswrapper[4893]: I0121 06:55:50.002617 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:50 crc kubenswrapper[4893]: I0121 06:55:50.002630 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:50 crc kubenswrapper[4893]: I0121 06:55:50.002650 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:50 crc kubenswrapper[4893]: I0121 06:55:50.002716 4893 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:50Z","lastTransitionTime":"2026-01-21T06:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:50 crc kubenswrapper[4893]: I0121 06:55:50.107094 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:50 crc kubenswrapper[4893]: I0121 06:55:50.107152 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:50 crc kubenswrapper[4893]: I0121 06:55:50.107164 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:50 crc kubenswrapper[4893]: I0121 06:55:50.107184 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:50 crc kubenswrapper[4893]: I0121 06:55:50.107198 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:50Z","lastTransitionTime":"2026-01-21T06:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:50 crc kubenswrapper[4893]: I0121 06:55:50.210121 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:50 crc kubenswrapper[4893]: I0121 06:55:50.210165 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:50 crc kubenswrapper[4893]: I0121 06:55:50.210180 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:50 crc kubenswrapper[4893]: I0121 06:55:50.210200 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:50 crc kubenswrapper[4893]: I0121 06:55:50.210214 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:50Z","lastTransitionTime":"2026-01-21T06:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:50 crc kubenswrapper[4893]: I0121 06:55:50.368982 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:50 crc kubenswrapper[4893]: I0121 06:55:50.369020 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:50 crc kubenswrapper[4893]: I0121 06:55:50.369031 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:50 crc kubenswrapper[4893]: I0121 06:55:50.369048 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:50 crc kubenswrapper[4893]: I0121 06:55:50.369060 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:50Z","lastTransitionTime":"2026-01-21T06:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:50 crc kubenswrapper[4893]: I0121 06:55:50.471950 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:50 crc kubenswrapper[4893]: I0121 06:55:50.472006 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:50 crc kubenswrapper[4893]: I0121 06:55:50.472016 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:50 crc kubenswrapper[4893]: I0121 06:55:50.472033 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:50 crc kubenswrapper[4893]: I0121 06:55:50.472047 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:50Z","lastTransitionTime":"2026-01-21T06:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:50 crc kubenswrapper[4893]: I0121 06:55:50.575521 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:50 crc kubenswrapper[4893]: I0121 06:55:50.575574 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:50 crc kubenswrapper[4893]: I0121 06:55:50.575585 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:50 crc kubenswrapper[4893]: I0121 06:55:50.575604 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:50 crc kubenswrapper[4893]: I0121 06:55:50.575616 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:50Z","lastTransitionTime":"2026-01-21T06:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
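
Every "Failed to update status for pod" record in this stretch is rejected for the same reason: the API server cannot call the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 because the webhook's serving certificate expired at 2025-08-24T17:21:41Z while the node clock reads 2026-01-21. The "certificate has expired or is not yet valid" text is Go's crypto/x509 validity check, which simply compares the current time against the certificate's NotBefore/NotAfter window; the stdlib sketch below reproduces that comparison (the PEM file name is a hypothetical stand-in, not a path taken from this node).

    // certcheck.go - a minimal sketch of the validity-window test behind the
    // "x509: certificate has expired or is not yet valid" failures above.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Hypothetical input: any PEM-encoded certificate, e.g. a copy of the
        // webhook serving cert this node's API server fails to verify.
        data, err := os.ReadFile("webhook-cert.pem")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Fprintln(os.Stderr, "no PEM block found")
            os.Exit(1)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        now := time.Now()
        switch {
        case now.After(cert.NotAfter):
            // The case this node is in: current time is after NotAfter.
            fmt.Printf("certificate has expired: current time %s is after %s\n",
                now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
        case now.Before(cert.NotBefore):
            fmt.Printf("certificate is not yet valid: current time %s is before %s\n",
                now.UTC().Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
        default:
            fmt.Printf("certificate is valid until %s\n", cert.NotAfter.UTC().Format(time.RFC3339))
        }
    }
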
Has your network provider started?"} Jan 21 06:55:50 crc kubenswrapper[4893]: I0121 06:55:50.580897 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:55:50 crc kubenswrapper[4893]: I0121 06:55:50.580895 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:55:50 crc kubenswrapper[4893]: E0121 06:55:50.581023 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:55:50 crc kubenswrapper[4893]: I0121 06:55:50.581096 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:55:50 crc kubenswrapper[4893]: E0121 06:55:50.581217 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:55:50 crc kubenswrapper[4893]: E0121 06:55:50.581340 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:55:50 crc kubenswrapper[4893]: I0121 06:55:50.678890 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:50 crc kubenswrapper[4893]: I0121 06:55:50.678977 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:50 crc kubenswrapper[4893]: I0121 06:55:50.678996 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:50 crc kubenswrapper[4893]: I0121 06:55:50.679094 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:50 crc kubenswrapper[4893]: I0121 06:55:50.679133 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:50Z","lastTransitionTime":"2026-01-21T06:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
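
The node keeps being set NotReady for exactly the reason the condition message states: the container runtime reports NetworkReady=false until a CNI configuration file appears in /etc/kubernetes/cni/net.d/, and on this node that file is presumably written by OVN-Kubernetes once its daemons come up (the multus record earlier shows the same gate from the other side, giving up after roughly 45 seconds of waiting for /host/run/multus/cni/net.d/10-ovn-kubernetes.conf). The "No sandbox for pod can be found" / "Error syncing pod, skipping" records just above are the downstream effect: sandboxes cannot be created while the network plugin is not ready. Below is a minimal sketch of this kind of readiness gate, assuming an illustrative 45-second timeout.

    // cnipoll.go - sketch of a poll-until-CNI-config-exists readiness gate
    // like the ones the kubelet and multus are applying above. The timeout
    // is an assumption, not a flag read from either component.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "time"
    )

    // hasCNIConfig reports whether dir contains any plausible CNI config file.
    func hasCNIConfig(dir string) bool {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return false
        }
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                return true
            }
        }
        return false
    }

    func main() {
        dir := "/etc/kubernetes/cni/net.d" // directory named in the log above
        deadline := time.Now().Add(45 * time.Second)
        for !hasCNIConfig(dir) {
            if time.Now().After(deadline) {
                fmt.Fprintf(os.Stderr, "timed out waiting for a CNI config in %s\n", dir)
                os.Exit(1)
            }
            time.Sleep(time.Second)
        }
        fmt.Println("network plugin configuration present")
    }
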
Has your network provider started?"} Jan 21 06:55:50 crc kubenswrapper[4893]: I0121 06:55:50.782196 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:50 crc kubenswrapper[4893]: I0121 06:55:50.782334 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:50 crc kubenswrapper[4893]: I0121 06:55:50.782358 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:50 crc kubenswrapper[4893]: I0121 06:55:50.782385 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:50 crc kubenswrapper[4893]: I0121 06:55:50.782405 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:50Z","lastTransitionTime":"2026-01-21T06:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:50 crc kubenswrapper[4893]: I0121 06:55:50.861283 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 13:27:09.339109135 +0000 UTC Jan 21 06:55:50 crc kubenswrapper[4893]: I0121 06:55:50.885904 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:50 crc kubenswrapper[4893]: I0121 06:55:50.886026 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:50 crc kubenswrapper[4893]: I0121 06:55:50.886049 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:50 crc kubenswrapper[4893]: I0121 06:55:50.886075 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:50 crc kubenswrapper[4893]: I0121 06:55:50.886092 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:50Z","lastTransitionTime":"2026-01-21T06:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:50 crc kubenswrapper[4893]: I0121 06:55:50.988245 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:50 crc kubenswrapper[4893]: I0121 06:55:50.988319 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:50 crc kubenswrapper[4893]: I0121 06:55:50.988342 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:50 crc kubenswrapper[4893]: I0121 06:55:50.988369 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:50 crc kubenswrapper[4893]: I0121 06:55:50.988387 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:50Z","lastTransitionTime":"2026-01-21T06:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:51 crc kubenswrapper[4893]: I0121 06:55:51.091360 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:51 crc kubenswrapper[4893]: I0121 06:55:51.091404 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:51 crc kubenswrapper[4893]: I0121 06:55:51.091414 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:51 crc kubenswrapper[4893]: I0121 06:55:51.091430 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:51 crc kubenswrapper[4893]: I0121 06:55:51.091442 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:51Z","lastTransitionTime":"2026-01-21T06:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:51 crc kubenswrapper[4893]: I0121 06:55:51.196159 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:51 crc kubenswrapper[4893]: I0121 06:55:51.196206 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:51 crc kubenswrapper[4893]: I0121 06:55:51.196217 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:51 crc kubenswrapper[4893]: I0121 06:55:51.196234 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:51 crc kubenswrapper[4893]: I0121 06:55:51.196244 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:51Z","lastTransitionTime":"2026-01-21T06:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:51 crc kubenswrapper[4893]: I0121 06:55:51.299299 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:51 crc kubenswrapper[4893]: I0121 06:55:51.299341 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:51 crc kubenswrapper[4893]: I0121 06:55:51.299349 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:51 crc kubenswrapper[4893]: I0121 06:55:51.299365 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:51 crc kubenswrapper[4893]: I0121 06:55:51.299376 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:51Z","lastTransitionTime":"2026-01-21T06:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:51 crc kubenswrapper[4893]: I0121 06:55:51.432398 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:51 crc kubenswrapper[4893]: I0121 06:55:51.432459 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:51 crc kubenswrapper[4893]: I0121 06:55:51.432479 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:51 crc kubenswrapper[4893]: I0121 06:55:51.432500 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:51 crc kubenswrapper[4893]: I0121 06:55:51.432517 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:51Z","lastTransitionTime":"2026-01-21T06:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:51 crc kubenswrapper[4893]: I0121 06:55:51.535264 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:51 crc kubenswrapper[4893]: I0121 06:55:51.535365 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:51 crc kubenswrapper[4893]: I0121 06:55:51.535409 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:51 crc kubenswrapper[4893]: I0121 06:55:51.535442 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:51 crc kubenswrapper[4893]: I0121 06:55:51.535466 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:51Z","lastTransitionTime":"2026-01-21T06:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:51 crc kubenswrapper[4893]: I0121 06:55:51.580715 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:55:51 crc kubenswrapper[4893]: E0121 06:55:51.580907 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rc5gb" podUID="e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8" Jan 21 06:55:51 crc kubenswrapper[4893]: I0121 06:55:51.637382 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:51 crc kubenswrapper[4893]: I0121 06:55:51.637430 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:51 crc kubenswrapper[4893]: I0121 06:55:51.637444 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:51 crc kubenswrapper[4893]: I0121 06:55:51.637462 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:51 crc kubenswrapper[4893]: I0121 06:55:51.637474 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:51Z","lastTransitionTime":"2026-01-21T06:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:51 crc kubenswrapper[4893]: I0121 06:55:51.740188 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:51 crc kubenswrapper[4893]: I0121 06:55:51.740238 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:51 crc kubenswrapper[4893]: I0121 06:55:51.740248 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:51 crc kubenswrapper[4893]: I0121 06:55:51.740262 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:51 crc kubenswrapper[4893]: I0121 06:55:51.740272 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:51Z","lastTransitionTime":"2026-01-21T06:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:51 crc kubenswrapper[4893]: I0121 06:55:51.842651 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:51 crc kubenswrapper[4893]: I0121 06:55:51.842741 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:51 crc kubenswrapper[4893]: I0121 06:55:51.842758 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:51 crc kubenswrapper[4893]: I0121 06:55:51.842782 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:51 crc kubenswrapper[4893]: I0121 06:55:51.842798 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:51Z","lastTransitionTime":"2026-01-21T06:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:51 crc kubenswrapper[4893]: I0121 06:55:51.862332 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 08:13:02.243682623 +0000 UTC Jan 21 06:55:51 crc kubenswrapper[4893]: I0121 06:55:51.945912 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:51 crc kubenswrapper[4893]: I0121 06:55:51.945949 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:51 crc kubenswrapper[4893]: I0121 06:55:51.945960 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:51 crc kubenswrapper[4893]: I0121 06:55:51.945976 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:51 crc kubenswrapper[4893]: I0121 06:55:51.945987 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:51Z","lastTransitionTime":"2026-01-21T06:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
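
The certificate_manager.go:356 lines scattered through this stretch report the same expiration (2026-02-24 05:53:03 UTC) with a different rotation deadline each time (2025-12-14, 2026-01-07, 2025-11-17): the kubelet-serving certificate manager re-draws a jittered deadline, and since every value drawn lies before the node's current time of 2026-01-21, rotation is due immediately and the line keeps reappearing. As I recall, client-go picks the deadline at a random point roughly 70-90% of the way through the certificate's validity window; the sketch below treats those fractions, and a one-year certificate lifetime, as assumptions (both happen to be consistent with the deadlines logged here).

    // rotationdeadline.go - sketch of the jittered rotation deadline the
    // certificate_manager lines above are printing. The 70-90% window and
    // the one-year certificate lifetime are assumptions.
    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // nextRotationDeadline picks a random point 70-90% of the way through
    // the certificate's validity window.
    func nextRotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := notAfter.Sub(notBefore)
        jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
        return notBefore.Add(jittered)
    }

    func main() {
        // Expiration taken from the log; NotBefore assumes a one-year lifetime.
        notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)
        notBefore := notAfter.AddDate(-1, 0, 0)
        deadline := nextRotationDeadline(notBefore, notAfter)
        fmt.Printf("expiration %s, rotation deadline %s\n", notAfter, deadline)
        // A deadline already in the past (as on this node, where the clock
        // reads 2026-01-21) means rotation should be attempted immediately.
    }
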
Has your network provider started?"} Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.048579 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.048624 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.048635 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.048655 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.048665 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:52Z","lastTransitionTime":"2026-01-21T06:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.151666 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.151782 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.151808 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.151870 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.151896 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:52Z","lastTransitionTime":"2026-01-21T06:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.255113 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.255172 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.255184 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.255206 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.255218 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:52Z","lastTransitionTime":"2026-01-21T06:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.357983 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.358054 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.358076 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.358108 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.358133 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:52Z","lastTransitionTime":"2026-01-21T06:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.462762 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.462806 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.462823 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.462845 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.462858 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:52Z","lastTransitionTime":"2026-01-21T06:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.565824 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.565868 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.565879 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.565901 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.565912 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:52Z","lastTransitionTime":"2026-01-21T06:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.580160 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.580236 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.580173 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:55:52 crc kubenswrapper[4893]: E0121 06:55:52.580311 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:55:52 crc kubenswrapper[4893]: E0121 06:55:52.580444 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:55:52 crc kubenswrapper[4893]: E0121 06:55:52.580523 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.668333 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.668369 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.668380 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.668396 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.668409 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:52Z","lastTransitionTime":"2026-01-21T06:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.774726 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.774784 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.774812 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.774837 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.774861 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:52Z","lastTransitionTime":"2026-01-21T06:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.840604 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.840646 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.840654 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.840696 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.840706 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:52Z","lastTransitionTime":"2026-01-21T06:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:52 crc kubenswrapper[4893]: E0121 06:55:52.857833 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"15608b71-024b-43f0-a54d-3ca7890a281b\\\",\\\"systemUUID\\\":\\\"d58a57b5-ddc5-4868-b863-d910bc33033d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:52Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.862483 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 20:47:06.413895963 +0000 UTC Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.862725 4893 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.862808 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.862841 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.862873 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.862897 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:52Z","lastTransitionTime":"2026-01-21T06:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:52 crc kubenswrapper[4893]: E0121 06:55:52.882097 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"15608b71-024b-43f0-a54d-3ca7890a281b\\\",\\\"systemUUID\\\":\\\"d58a57b5-ddc5-4868-b863-d910bc33033d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:52Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.887011 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.887074 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.887095 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.887120 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.887138 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:52Z","lastTransitionTime":"2026-01-21T06:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:52 crc kubenswrapper[4893]: E0121 06:55:52.904052 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"15608b71-024b-43f0-a54d-3ca7890a281b\\\",\\\"systemUUID\\\":\\\"d58a57b5-ddc5-4868-b863-d910bc33033d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:52Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.909132 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.909296 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.909337 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.909375 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.909393 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:52Z","lastTransitionTime":"2026-01-21T06:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:52 crc kubenswrapper[4893]: E0121 06:55:52.928535 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"15608b71-024b-43f0-a54d-3ca7890a281b\\\",\\\"systemUUID\\\":\\\"d58a57b5-ddc5-4868-b863-d910bc33033d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:52Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.931962 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.932008 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.932022 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.932046 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.932064 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:52Z","lastTransitionTime":"2026-01-21T06:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:52 crc kubenswrapper[4893]: E0121 06:55:52.951558 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:55:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:55:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"15608b71-024b-43f0-a54d-3ca7890a281b\\\",\\\"systemUUID\\\":\\\"d58a57b5-ddc5-4868-b863-d910bc33033d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:55:52Z is after 2025-08-24T17:21:41Z" Jan 21 06:55:52 crc kubenswrapper[4893]: E0121 06:55:52.951893 4893 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.954257 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.954318 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.954339 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.954363 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:52 crc kubenswrapper[4893]: I0121 06:55:52.954383 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:52Z","lastTransitionTime":"2026-01-21T06:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:53 crc kubenswrapper[4893]: I0121 06:55:53.057273 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:53 crc kubenswrapper[4893]: I0121 06:55:53.057358 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:53 crc kubenswrapper[4893]: I0121 06:55:53.057379 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:53 crc kubenswrapper[4893]: I0121 06:55:53.057432 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:53 crc kubenswrapper[4893]: I0121 06:55:53.057452 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:53Z","lastTransitionTime":"2026-01-21T06:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:53 crc kubenswrapper[4893]: I0121 06:55:53.161000 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:53 crc kubenswrapper[4893]: I0121 06:55:53.161100 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:53 crc kubenswrapper[4893]: I0121 06:55:53.161128 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:53 crc kubenswrapper[4893]: I0121 06:55:53.161164 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:53 crc kubenswrapper[4893]: I0121 06:55:53.161188 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:53Z","lastTransitionTime":"2026-01-21T06:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:53 crc kubenswrapper[4893]: I0121 06:55:53.263700 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:53 crc kubenswrapper[4893]: I0121 06:55:53.263749 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:53 crc kubenswrapper[4893]: I0121 06:55:53.263762 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:53 crc kubenswrapper[4893]: I0121 06:55:53.263782 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:53 crc kubenswrapper[4893]: I0121 06:55:53.263796 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:53Z","lastTransitionTime":"2026-01-21T06:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:53 crc kubenswrapper[4893]: I0121 06:55:53.367238 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:53 crc kubenswrapper[4893]: I0121 06:55:53.367336 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:53 crc kubenswrapper[4893]: I0121 06:55:53.367377 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:53 crc kubenswrapper[4893]: I0121 06:55:53.367408 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:53 crc kubenswrapper[4893]: I0121 06:55:53.367431 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:53Z","lastTransitionTime":"2026-01-21T06:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:53 crc kubenswrapper[4893]: I0121 06:55:53.470568 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:53 crc kubenswrapper[4893]: I0121 06:55:53.470638 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:53 crc kubenswrapper[4893]: I0121 06:55:53.470662 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:53 crc kubenswrapper[4893]: I0121 06:55:53.470829 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:53 crc kubenswrapper[4893]: I0121 06:55:53.470881 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:53Z","lastTransitionTime":"2026-01-21T06:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:53 crc kubenswrapper[4893]: I0121 06:55:53.574060 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:53 crc kubenswrapper[4893]: I0121 06:55:53.574095 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:53 crc kubenswrapper[4893]: I0121 06:55:53.574103 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:53 crc kubenswrapper[4893]: I0121 06:55:53.574117 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:53 crc kubenswrapper[4893]: I0121 06:55:53.574125 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:53Z","lastTransitionTime":"2026-01-21T06:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:53 crc kubenswrapper[4893]: I0121 06:55:53.583371 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:55:53 crc kubenswrapper[4893]: E0121 06:55:53.583502 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rc5gb" podUID="e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8" Jan 21 06:55:53 crc kubenswrapper[4893]: I0121 06:55:53.677239 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:53 crc kubenswrapper[4893]: I0121 06:55:53.677286 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:53 crc kubenswrapper[4893]: I0121 06:55:53.677303 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:53 crc kubenswrapper[4893]: I0121 06:55:53.677320 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:53 crc kubenswrapper[4893]: I0121 06:55:53.677334 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:53Z","lastTransitionTime":"2026-01-21T06:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:53 crc kubenswrapper[4893]: I0121 06:55:53.780790 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:53 crc kubenswrapper[4893]: I0121 06:55:53.780844 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:53 crc kubenswrapper[4893]: I0121 06:55:53.780855 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:53 crc kubenswrapper[4893]: I0121 06:55:53.780876 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:53 crc kubenswrapper[4893]: I0121 06:55:53.780893 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:53Z","lastTransitionTime":"2026-01-21T06:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:53 crc kubenswrapper[4893]: I0121 06:55:53.862827 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 08:30:15.811871042 +0000 UTC Jan 21 06:55:53 crc kubenswrapper[4893]: I0121 06:55:53.883943 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:53 crc kubenswrapper[4893]: I0121 06:55:53.884007 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:53 crc kubenswrapper[4893]: I0121 06:55:53.884034 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:53 crc kubenswrapper[4893]: I0121 06:55:53.884065 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:53 crc kubenswrapper[4893]: I0121 06:55:53.884091 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:53Z","lastTransitionTime":"2026-01-21T06:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:53 crc kubenswrapper[4893]: I0121 06:55:53.988413 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:53 crc kubenswrapper[4893]: I0121 06:55:53.988484 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:53 crc kubenswrapper[4893]: I0121 06:55:53.988508 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:53 crc kubenswrapper[4893]: I0121 06:55:53.988537 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:53 crc kubenswrapper[4893]: I0121 06:55:53.988560 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:53Z","lastTransitionTime":"2026-01-21T06:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:54 crc kubenswrapper[4893]: I0121 06:55:54.093374 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:54 crc kubenswrapper[4893]: I0121 06:55:54.093514 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:54 crc kubenswrapper[4893]: I0121 06:55:54.093538 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:54 crc kubenswrapper[4893]: I0121 06:55:54.093572 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:54 crc kubenswrapper[4893]: I0121 06:55:54.093608 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:54Z","lastTransitionTime":"2026-01-21T06:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:54 crc kubenswrapper[4893]: I0121 06:55:54.197027 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:54 crc kubenswrapper[4893]: I0121 06:55:54.197084 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:54 crc kubenswrapper[4893]: I0121 06:55:54.197100 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:54 crc kubenswrapper[4893]: I0121 06:55:54.197125 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:54 crc kubenswrapper[4893]: I0121 06:55:54.197142 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:54Z","lastTransitionTime":"2026-01-21T06:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:54 crc kubenswrapper[4893]: I0121 06:55:54.301008 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:54 crc kubenswrapper[4893]: I0121 06:55:54.301089 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:54 crc kubenswrapper[4893]: I0121 06:55:54.301114 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:54 crc kubenswrapper[4893]: I0121 06:55:54.301144 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:54 crc kubenswrapper[4893]: I0121 06:55:54.301167 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:54Z","lastTransitionTime":"2026-01-21T06:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:54 crc kubenswrapper[4893]: I0121 06:55:54.404019 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:54 crc kubenswrapper[4893]: I0121 06:55:54.404081 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:54 crc kubenswrapper[4893]: I0121 06:55:54.404105 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:54 crc kubenswrapper[4893]: I0121 06:55:54.404133 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:54 crc kubenswrapper[4893]: I0121 06:55:54.404155 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:54Z","lastTransitionTime":"2026-01-21T06:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:54 crc kubenswrapper[4893]: I0121 06:55:54.509547 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:54 crc kubenswrapper[4893]: I0121 06:55:54.509597 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:54 crc kubenswrapper[4893]: I0121 06:55:54.509611 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:54 crc kubenswrapper[4893]: I0121 06:55:54.509654 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:54 crc kubenswrapper[4893]: I0121 06:55:54.509697 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:54Z","lastTransitionTime":"2026-01-21T06:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:54 crc kubenswrapper[4893]: I0121 06:55:54.580846 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:55:54 crc kubenswrapper[4893]: I0121 06:55:54.580937 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:55:54 crc kubenswrapper[4893]: E0121 06:55:54.581128 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:55:54 crc kubenswrapper[4893]: E0121 06:55:54.581460 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:55:54 crc kubenswrapper[4893]: I0121 06:55:54.581500 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:55:54 crc kubenswrapper[4893]: E0121 06:55:54.581621 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:55:54 crc kubenswrapper[4893]: I0121 06:55:54.597905 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 21 06:55:54 crc kubenswrapper[4893]: I0121 06:55:54.612128 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:54 crc kubenswrapper[4893]: I0121 06:55:54.612171 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:54 crc kubenswrapper[4893]: I0121 06:55:54.612181 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:54 crc kubenswrapper[4893]: I0121 06:55:54.612197 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:54 crc kubenswrapper[4893]: I0121 06:55:54.612206 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:54Z","lastTransitionTime":"2026-01-21T06:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:54 crc kubenswrapper[4893]: I0121 06:55:54.714152 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:54 crc kubenswrapper[4893]: I0121 06:55:54.714192 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:54 crc kubenswrapper[4893]: I0121 06:55:54.714199 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:54 crc kubenswrapper[4893]: I0121 06:55:54.714214 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:54 crc kubenswrapper[4893]: I0121 06:55:54.714224 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:54Z","lastTransitionTime":"2026-01-21T06:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:54 crc kubenswrapper[4893]: I0121 06:55:54.817809 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:54 crc kubenswrapper[4893]: I0121 06:55:54.817863 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:54 crc kubenswrapper[4893]: I0121 06:55:54.817875 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:54 crc kubenswrapper[4893]: I0121 06:55:54.817891 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:54 crc kubenswrapper[4893]: I0121 06:55:54.817906 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:54Z","lastTransitionTime":"2026-01-21T06:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:54 crc kubenswrapper[4893]: I0121 06:55:54.863526 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 12:59:09.288785752 +0000 UTC Jan 21 06:55:54 crc kubenswrapper[4893]: I0121 06:55:54.920081 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:54 crc kubenswrapper[4893]: I0121 06:55:54.920120 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:54 crc kubenswrapper[4893]: I0121 06:55:54.920129 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:54 crc kubenswrapper[4893]: I0121 06:55:54.920145 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:54 crc kubenswrapper[4893]: I0121 06:55:54.920155 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:54Z","lastTransitionTime":"2026-01-21T06:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:55 crc kubenswrapper[4893]: I0121 06:55:55.022150 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:55 crc kubenswrapper[4893]: I0121 06:55:55.022186 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:55 crc kubenswrapper[4893]: I0121 06:55:55.022195 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:55 crc kubenswrapper[4893]: I0121 06:55:55.022208 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:55 crc kubenswrapper[4893]: I0121 06:55:55.022216 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:55Z","lastTransitionTime":"2026-01-21T06:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:55 crc kubenswrapper[4893]: I0121 06:55:55.124702 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:55 crc kubenswrapper[4893]: I0121 06:55:55.124749 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:55 crc kubenswrapper[4893]: I0121 06:55:55.124767 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:55 crc kubenswrapper[4893]: I0121 06:55:55.124785 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:55 crc kubenswrapper[4893]: I0121 06:55:55.124800 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:55Z","lastTransitionTime":"2026-01-21T06:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:55 crc kubenswrapper[4893]: I0121 06:55:55.227889 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:55 crc kubenswrapper[4893]: I0121 06:55:55.227935 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:55 crc kubenswrapper[4893]: I0121 06:55:55.227951 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:55 crc kubenswrapper[4893]: I0121 06:55:55.227974 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:55 crc kubenswrapper[4893]: I0121 06:55:55.227991 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:55Z","lastTransitionTime":"2026-01-21T06:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:55 crc kubenswrapper[4893]: I0121 06:55:55.332171 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:55 crc kubenswrapper[4893]: I0121 06:55:55.332277 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:55 crc kubenswrapper[4893]: I0121 06:55:55.332310 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:55 crc kubenswrapper[4893]: I0121 06:55:55.332341 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:55 crc kubenswrapper[4893]: I0121 06:55:55.332363 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:55Z","lastTransitionTime":"2026-01-21T06:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:55 crc kubenswrapper[4893]: I0121 06:55:55.436931 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:55 crc kubenswrapper[4893]: I0121 06:55:55.436980 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:55 crc kubenswrapper[4893]: I0121 06:55:55.436992 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:55 crc kubenswrapper[4893]: I0121 06:55:55.437008 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:55 crc kubenswrapper[4893]: I0121 06:55:55.437021 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:55Z","lastTransitionTime":"2026-01-21T06:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:55 crc kubenswrapper[4893]: I0121 06:55:55.539344 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:55 crc kubenswrapper[4893]: I0121 06:55:55.539603 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:55 crc kubenswrapper[4893]: I0121 06:55:55.539784 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:55 crc kubenswrapper[4893]: I0121 06:55:55.539914 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:55 crc kubenswrapper[4893]: I0121 06:55:55.540005 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:55Z","lastTransitionTime":"2026-01-21T06:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:55 crc kubenswrapper[4893]: I0121 06:55:55.581091 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:55:55 crc kubenswrapper[4893]: E0121 06:55:55.581574 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rc5gb" podUID="e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8" Jan 21 06:55:55 crc kubenswrapper[4893]: I0121 06:55:55.642876 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:55 crc kubenswrapper[4893]: I0121 06:55:55.642936 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:55 crc kubenswrapper[4893]: I0121 06:55:55.642957 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:55 crc kubenswrapper[4893]: I0121 06:55:55.642977 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:55 crc kubenswrapper[4893]: I0121 06:55:55.642987 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:55Z","lastTransitionTime":"2026-01-21T06:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:55 crc kubenswrapper[4893]: I0121 06:55:55.745709 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:55 crc kubenswrapper[4893]: I0121 06:55:55.745747 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:55 crc kubenswrapper[4893]: I0121 06:55:55.745761 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:55 crc kubenswrapper[4893]: I0121 06:55:55.745779 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:55 crc kubenswrapper[4893]: I0121 06:55:55.745791 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:55Z","lastTransitionTime":"2026-01-21T06:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:55 crc kubenswrapper[4893]: I0121 06:55:55.848907 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:55 crc kubenswrapper[4893]: I0121 06:55:55.848961 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:55 crc kubenswrapper[4893]: I0121 06:55:55.848972 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:55 crc kubenswrapper[4893]: I0121 06:55:55.848992 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:55 crc kubenswrapper[4893]: I0121 06:55:55.849003 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:55Z","lastTransitionTime":"2026-01-21T06:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:55 crc kubenswrapper[4893]: I0121 06:55:55.864621 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 03:46:04.573305925 +0000 UTC Jan 21 06:55:55 crc kubenswrapper[4893]: I0121 06:55:55.952120 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:55 crc kubenswrapper[4893]: I0121 06:55:55.952164 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:55 crc kubenswrapper[4893]: I0121 06:55:55.952173 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:55 crc kubenswrapper[4893]: I0121 06:55:55.952187 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:55 crc kubenswrapper[4893]: I0121 06:55:55.952195 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:55Z","lastTransitionTime":"2026-01-21T06:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:56 crc kubenswrapper[4893]: I0121 06:55:56.055041 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:56 crc kubenswrapper[4893]: I0121 06:55:56.055084 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:56 crc kubenswrapper[4893]: I0121 06:55:56.055093 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:56 crc kubenswrapper[4893]: I0121 06:55:56.055107 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:56 crc kubenswrapper[4893]: I0121 06:55:56.055118 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:56Z","lastTransitionTime":"2026-01-21T06:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:56 crc kubenswrapper[4893]: I0121 06:55:56.157965 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:56 crc kubenswrapper[4893]: I0121 06:55:56.158025 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:56 crc kubenswrapper[4893]: I0121 06:55:56.158044 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:56 crc kubenswrapper[4893]: I0121 06:55:56.158071 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:56 crc kubenswrapper[4893]: I0121 06:55:56.158085 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:56Z","lastTransitionTime":"2026-01-21T06:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:56 crc kubenswrapper[4893]: I0121 06:55:56.265971 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:56 crc kubenswrapper[4893]: I0121 06:55:56.266011 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:56 crc kubenswrapper[4893]: I0121 06:55:56.266024 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:56 crc kubenswrapper[4893]: I0121 06:55:56.266042 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:56 crc kubenswrapper[4893]: I0121 06:55:56.266055 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:56Z","lastTransitionTime":"2026-01-21T06:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:56 crc kubenswrapper[4893]: I0121 06:55:56.368830 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:56 crc kubenswrapper[4893]: I0121 06:55:56.368874 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:56 crc kubenswrapper[4893]: I0121 06:55:56.368887 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:56 crc kubenswrapper[4893]: I0121 06:55:56.368937 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:56 crc kubenswrapper[4893]: I0121 06:55:56.368948 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:56Z","lastTransitionTime":"2026-01-21T06:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:56 crc kubenswrapper[4893]: I0121 06:55:56.471606 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:56 crc kubenswrapper[4893]: I0121 06:55:56.471656 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:56 crc kubenswrapper[4893]: I0121 06:55:56.471696 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:56 crc kubenswrapper[4893]: I0121 06:55:56.471714 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:56 crc kubenswrapper[4893]: I0121 06:55:56.471726 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:56Z","lastTransitionTime":"2026-01-21T06:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:56 crc kubenswrapper[4893]: I0121 06:55:56.574099 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:56 crc kubenswrapper[4893]: I0121 06:55:56.574390 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:56 crc kubenswrapper[4893]: I0121 06:55:56.574472 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:56 crc kubenswrapper[4893]: I0121 06:55:56.574575 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:56 crc kubenswrapper[4893]: I0121 06:55:56.574665 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:56Z","lastTransitionTime":"2026-01-21T06:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:56 crc kubenswrapper[4893]: I0121 06:55:56.580451 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:55:56 crc kubenswrapper[4893]: I0121 06:55:56.580554 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:55:56 crc kubenswrapper[4893]: I0121 06:55:56.580540 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:55:56 crc kubenswrapper[4893]: E0121 06:55:56.580951 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:55:56 crc kubenswrapper[4893]: E0121 06:55:56.581013 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:55:56 crc kubenswrapper[4893]: E0121 06:55:56.581064 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:55:56 crc kubenswrapper[4893]: I0121 06:55:56.677606 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:56 crc kubenswrapper[4893]: I0121 06:55:56.677735 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:56 crc kubenswrapper[4893]: I0121 06:55:56.677771 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:56 crc kubenswrapper[4893]: I0121 06:55:56.677817 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:56 crc kubenswrapper[4893]: I0121 06:55:56.677846 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:56Z","lastTransitionTime":"2026-01-21T06:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:56 crc kubenswrapper[4893]: I0121 06:55:56.780883 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:56 crc kubenswrapper[4893]: I0121 06:55:56.780953 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:56 crc kubenswrapper[4893]: I0121 06:55:56.780976 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:56 crc kubenswrapper[4893]: I0121 06:55:56.781005 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:56 crc kubenswrapper[4893]: I0121 06:55:56.781027 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:56Z","lastTransitionTime":"2026-01-21T06:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:56 crc kubenswrapper[4893]: I0121 06:55:56.864898 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 00:00:08.33024534 +0000 UTC Jan 21 06:55:56 crc kubenswrapper[4893]: I0121 06:55:56.884868 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:56 crc kubenswrapper[4893]: I0121 06:55:56.884931 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:56 crc kubenswrapper[4893]: I0121 06:55:56.884952 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:56 crc kubenswrapper[4893]: I0121 06:55:56.885034 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:56 crc kubenswrapper[4893]: I0121 06:55:56.885056 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:56Z","lastTransitionTime":"2026-01-21T06:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:56 crc kubenswrapper[4893]: I0121 06:55:56.986980 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:56 crc kubenswrapper[4893]: I0121 06:55:56.987015 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:56 crc kubenswrapper[4893]: I0121 06:55:56.987025 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:56 crc kubenswrapper[4893]: I0121 06:55:56.987041 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:56 crc kubenswrapper[4893]: I0121 06:55:56.987052 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:56Z","lastTransitionTime":"2026-01-21T06:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:57 crc kubenswrapper[4893]: I0121 06:55:57.089731 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:57 crc kubenswrapper[4893]: I0121 06:55:57.089799 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:57 crc kubenswrapper[4893]: I0121 06:55:57.089823 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:57 crc kubenswrapper[4893]: I0121 06:55:57.089851 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:57 crc kubenswrapper[4893]: I0121 06:55:57.089871 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:57Z","lastTransitionTime":"2026-01-21T06:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:57 crc kubenswrapper[4893]: I0121 06:55:57.193189 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:57 crc kubenswrapper[4893]: I0121 06:55:57.193255 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:57 crc kubenswrapper[4893]: I0121 06:55:57.193277 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:57 crc kubenswrapper[4893]: I0121 06:55:57.193308 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:57 crc kubenswrapper[4893]: I0121 06:55:57.193330 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:57Z","lastTransitionTime":"2026-01-21T06:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:57 crc kubenswrapper[4893]: I0121 06:55:57.295551 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:57 crc kubenswrapper[4893]: I0121 06:55:57.295598 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:57 crc kubenswrapper[4893]: I0121 06:55:57.295607 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:57 crc kubenswrapper[4893]: I0121 06:55:57.295623 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:57 crc kubenswrapper[4893]: I0121 06:55:57.295635 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:57Z","lastTransitionTime":"2026-01-21T06:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:57 crc kubenswrapper[4893]: I0121 06:55:57.398491 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:57 crc kubenswrapper[4893]: I0121 06:55:57.398548 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:57 crc kubenswrapper[4893]: I0121 06:55:57.398562 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:57 crc kubenswrapper[4893]: I0121 06:55:57.398582 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:57 crc kubenswrapper[4893]: I0121 06:55:57.398598 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:57Z","lastTransitionTime":"2026-01-21T06:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:57 crc kubenswrapper[4893]: I0121 06:55:57.501179 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:57 crc kubenswrapper[4893]: I0121 06:55:57.501225 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:57 crc kubenswrapper[4893]: I0121 06:55:57.501236 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:57 crc kubenswrapper[4893]: I0121 06:55:57.501250 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:57 crc kubenswrapper[4893]: I0121 06:55:57.501259 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:57Z","lastTransitionTime":"2026-01-21T06:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:57 crc kubenswrapper[4893]: I0121 06:55:57.580076 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:55:57 crc kubenswrapper[4893]: E0121 06:55:57.580252 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rc5gb" podUID="e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8" Jan 21 06:55:57 crc kubenswrapper[4893]: I0121 06:55:57.603415 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:57 crc kubenswrapper[4893]: I0121 06:55:57.603487 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:57 crc kubenswrapper[4893]: I0121 06:55:57.603506 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:57 crc kubenswrapper[4893]: I0121 06:55:57.603530 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:57 crc kubenswrapper[4893]: I0121 06:55:57.603549 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:57Z","lastTransitionTime":"2026-01-21T06:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:57 crc kubenswrapper[4893]: I0121 06:55:57.706292 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:57 crc kubenswrapper[4893]: I0121 06:55:57.706343 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:57 crc kubenswrapper[4893]: I0121 06:55:57.706353 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:57 crc kubenswrapper[4893]: I0121 06:55:57.706371 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:57 crc kubenswrapper[4893]: I0121 06:55:57.706386 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:57Z","lastTransitionTime":"2026-01-21T06:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:57 crc kubenswrapper[4893]: I0121 06:55:57.809658 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:57 crc kubenswrapper[4893]: I0121 06:55:57.809745 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:57 crc kubenswrapper[4893]: I0121 06:55:57.809758 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:57 crc kubenswrapper[4893]: I0121 06:55:57.809776 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:57 crc kubenswrapper[4893]: I0121 06:55:57.809789 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:57Z","lastTransitionTime":"2026-01-21T06:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:57 crc kubenswrapper[4893]: I0121 06:55:57.865773 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 10:12:37.045959836 +0000 UTC Jan 21 06:55:57 crc kubenswrapper[4893]: I0121 06:55:57.912758 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:57 crc kubenswrapper[4893]: I0121 06:55:57.912817 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:57 crc kubenswrapper[4893]: I0121 06:55:57.912835 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:57 crc kubenswrapper[4893]: I0121 06:55:57.912857 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:57 crc kubenswrapper[4893]: I0121 06:55:57.912872 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:57Z","lastTransitionTime":"2026-01-21T06:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:58 crc kubenswrapper[4893]: I0121 06:55:58.015621 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:58 crc kubenswrapper[4893]: I0121 06:55:58.015706 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:58 crc kubenswrapper[4893]: I0121 06:55:58.015730 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:58 crc kubenswrapper[4893]: I0121 06:55:58.015753 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:58 crc kubenswrapper[4893]: I0121 06:55:58.015767 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:58Z","lastTransitionTime":"2026-01-21T06:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:58 crc kubenswrapper[4893]: I0121 06:55:58.118564 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:58 crc kubenswrapper[4893]: I0121 06:55:58.118615 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:58 crc kubenswrapper[4893]: I0121 06:55:58.118629 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:58 crc kubenswrapper[4893]: I0121 06:55:58.118646 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:58 crc kubenswrapper[4893]: I0121 06:55:58.118659 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:58Z","lastTransitionTime":"2026-01-21T06:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:58 crc kubenswrapper[4893]: I0121 06:55:58.222186 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:58 crc kubenswrapper[4893]: I0121 06:55:58.222247 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:58 crc kubenswrapper[4893]: I0121 06:55:58.222265 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:58 crc kubenswrapper[4893]: I0121 06:55:58.222284 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:58 crc kubenswrapper[4893]: I0121 06:55:58.222298 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:58Z","lastTransitionTime":"2026-01-21T06:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:58 crc kubenswrapper[4893]: I0121 06:55:58.325092 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:58 crc kubenswrapper[4893]: I0121 06:55:58.325183 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:58 crc kubenswrapper[4893]: I0121 06:55:58.325200 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:58 crc kubenswrapper[4893]: I0121 06:55:58.325219 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:58 crc kubenswrapper[4893]: I0121 06:55:58.325232 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:58Z","lastTransitionTime":"2026-01-21T06:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:58 crc kubenswrapper[4893]: I0121 06:55:58.429215 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:58 crc kubenswrapper[4893]: I0121 06:55:58.429294 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:58 crc kubenswrapper[4893]: I0121 06:55:58.429307 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:58 crc kubenswrapper[4893]: I0121 06:55:58.429328 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:58 crc kubenswrapper[4893]: I0121 06:55:58.429343 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:58Z","lastTransitionTime":"2026-01-21T06:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:58 crc kubenswrapper[4893]: I0121 06:55:58.531741 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:58 crc kubenswrapper[4893]: I0121 06:55:58.531795 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:58 crc kubenswrapper[4893]: I0121 06:55:58.531807 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:58 crc kubenswrapper[4893]: I0121 06:55:58.531827 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:58 crc kubenswrapper[4893]: I0121 06:55:58.531838 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:58Z","lastTransitionTime":"2026-01-21T06:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:58 crc kubenswrapper[4893]: I0121 06:55:58.580116 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:55:58 crc kubenswrapper[4893]: I0121 06:55:58.580177 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:55:58 crc kubenswrapper[4893]: I0121 06:55:58.580137 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:55:58 crc kubenswrapper[4893]: E0121 06:55:58.580398 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:55:58 crc kubenswrapper[4893]: E0121 06:55:58.580744 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:55:58 crc kubenswrapper[4893]: E0121 06:55:58.580767 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:55:58 crc kubenswrapper[4893]: I0121 06:55:58.634501 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:58 crc kubenswrapper[4893]: I0121 06:55:58.634547 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:58 crc kubenswrapper[4893]: I0121 06:55:58.634557 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:58 crc kubenswrapper[4893]: I0121 06:55:58.634572 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:58 crc kubenswrapper[4893]: I0121 06:55:58.634581 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:58Z","lastTransitionTime":"2026-01-21T06:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:58 crc kubenswrapper[4893]: I0121 06:55:58.737653 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:58 crc kubenswrapper[4893]: I0121 06:55:58.737715 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:58 crc kubenswrapper[4893]: I0121 06:55:58.737725 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:58 crc kubenswrapper[4893]: I0121 06:55:58.737740 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:58 crc kubenswrapper[4893]: I0121 06:55:58.737750 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:58Z","lastTransitionTime":"2026-01-21T06:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:58 crc kubenswrapper[4893]: I0121 06:55:58.841460 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:58 crc kubenswrapper[4893]: I0121 06:55:58.841506 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:58 crc kubenswrapper[4893]: I0121 06:55:58.841516 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:58 crc kubenswrapper[4893]: I0121 06:55:58.841540 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:58 crc kubenswrapper[4893]: I0121 06:55:58.841550 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:58Z","lastTransitionTime":"2026-01-21T06:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:58 crc kubenswrapper[4893]: I0121 06:55:58.865928 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 05:07:52.482415511 +0000 UTC Jan 21 06:55:58 crc kubenswrapper[4893]: I0121 06:55:58.943968 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:58 crc kubenswrapper[4893]: I0121 06:55:58.943999 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:58 crc kubenswrapper[4893]: I0121 06:55:58.944009 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:58 crc kubenswrapper[4893]: I0121 06:55:58.944024 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:58 crc kubenswrapper[4893]: I0121 06:55:58.944033 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:58Z","lastTransitionTime":"2026-01-21T06:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.047058 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.047133 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.047142 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.047158 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.047169 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:59Z","lastTransitionTime":"2026-01-21T06:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.150493 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.150566 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.150586 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.150613 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.150631 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:59Z","lastTransitionTime":"2026-01-21T06:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.254117 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.254183 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.254206 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.254235 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.254258 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:59Z","lastTransitionTime":"2026-01-21T06:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.357290 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.357336 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.357349 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.357365 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.357377 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:59Z","lastTransitionTime":"2026-01-21T06:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.459552 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.459620 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.459638 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.459697 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.459717 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:59Z","lastTransitionTime":"2026-01-21T06:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.562607 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.562650 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.562660 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.562692 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.562704 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:59Z","lastTransitionTime":"2026-01-21T06:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.580127 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:55:59 crc kubenswrapper[4893]: E0121 06:55:59.580253 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rc5gb" podUID="e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8" Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.609702 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=5.60962637 podStartE2EDuration="5.60962637s" podCreationTimestamp="2026-01-21 06:55:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:55:59.609332731 +0000 UTC m=+100.839678633" watchObservedRunningTime="2026-01-21 06:55:59.60962637 +0000 UTC m=+100.839972272" Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.665801 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.665837 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.665869 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.665885 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.665895 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:59Z","lastTransitionTime":"2026-01-21T06:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.683011 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=46.682988689 podStartE2EDuration="46.682988689s" podCreationTimestamp="2026-01-21 06:55:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:55:59.661362425 +0000 UTC m=+100.891708327" watchObservedRunningTime="2026-01-21 06:55:59.682988689 +0000 UTC m=+100.913334591" Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.722090 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-m8k4g" podStartSLOduration=78.722074661 podStartE2EDuration="1m18.722074661s" podCreationTimestamp="2026-01-21 06:54:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:55:59.721724071 +0000 UTC m=+100.952069973" watchObservedRunningTime="2026-01-21 06:55:59.722074661 +0000 UTC m=+100.952420563" Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.760633 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-wlrc6" podStartSLOduration=78.760600627 podStartE2EDuration="1m18.760600627s" podCreationTimestamp="2026-01-21 06:54:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:55:59.759516254 +0000 UTC m=+100.989862176" watchObservedRunningTime="2026-01-21 06:55:59.760600627 +0000 UTC m=+100.990946569" Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.768810 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.768883 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.768910 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.768941 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.768962 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:59Z","lastTransitionTime":"2026-01-21T06:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.777415 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-h28gn" podStartSLOduration=78.777384744 podStartE2EDuration="1m18.777384744s" podCreationTimestamp="2026-01-21 06:54:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:55:59.777334913 +0000 UTC m=+101.007680815" watchObservedRunningTime="2026-01-21 06:55:59.777384744 +0000 UTC m=+101.007730686" Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.888528 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 13:06:53.693435416 +0000 UTC Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.890511 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.890555 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.890564 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.890579 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.890589 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:59Z","lastTransitionTime":"2026-01-21T06:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.896774 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podStartSLOduration=78.896763546 podStartE2EDuration="1m18.896763546s" podCreationTimestamp="2026-01-21 06:54:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:55:59.895951381 +0000 UTC m=+101.126297283" watchObservedRunningTime="2026-01-21 06:55:59.896763546 +0000 UTC m=+101.127109448" Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.897259 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-42mq5" podStartSLOduration=78.897254671 podStartE2EDuration="1m18.897254671s" podCreationTimestamp="2026-01-21 06:54:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:55:59.795136101 +0000 UTC m=+101.025482003" watchObservedRunningTime="2026-01-21 06:55:59.897254671 +0000 UTC m=+101.127600573" Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.932021 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=79.932001902 podStartE2EDuration="1m19.932001902s" podCreationTimestamp="2026-01-21 06:54:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:55:59.931306551 +0000 UTC m=+101.161652453" watchObservedRunningTime="2026-01-21 06:55:59.932001902 +0000 UTC m=+101.162347804" Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.948028 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=78.948002686 podStartE2EDuration="1m18.948002686s" podCreationTimestamp="2026-01-21 06:54:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:55:59.947232702 +0000 UTC m=+101.177578604" watchObservedRunningTime="2026-01-21 06:55:59.948002686 +0000 UTC m=+101.178348598" Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.961183 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=31.961161964 podStartE2EDuration="31.961161964s" podCreationTimestamp="2026-01-21 06:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:55:59.959298308 +0000 UTC m=+101.189644210" watchObservedRunningTime="2026-01-21 06:55:59.961161964 +0000 UTC m=+101.191507866" Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.993189 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.993221 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.993231 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.993245 4893 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:55:59 crc kubenswrapper[4893]: I0121 06:55:59.993255 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:55:59Z","lastTransitionTime":"2026-01-21T06:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:56:00 crc kubenswrapper[4893]: I0121 06:56:00.096177 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:56:00 crc kubenswrapper[4893]: I0121 06:56:00.096215 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:56:00 crc kubenswrapper[4893]: I0121 06:56:00.096227 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:56:00 crc kubenswrapper[4893]: I0121 06:56:00.096244 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:56:00 crc kubenswrapper[4893]: I0121 06:56:00.096257 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:56:00Z","lastTransitionTime":"2026-01-21T06:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:56:00 crc kubenswrapper[4893]: I0121 06:56:00.198861 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:56:00 crc kubenswrapper[4893]: I0121 06:56:00.198922 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:56:00 crc kubenswrapper[4893]: I0121 06:56:00.198955 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:56:00 crc kubenswrapper[4893]: I0121 06:56:00.198977 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:56:00 crc kubenswrapper[4893]: I0121 06:56:00.199012 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:56:00Z","lastTransitionTime":"2026-01-21T06:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:56:00 crc kubenswrapper[4893]: I0121 06:56:00.301223 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:56:00 crc kubenswrapper[4893]: I0121 06:56:00.301286 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:56:00 crc kubenswrapper[4893]: I0121 06:56:00.301297 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:56:00 crc kubenswrapper[4893]: I0121 06:56:00.301317 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:56:00 crc kubenswrapper[4893]: I0121 06:56:00.301332 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:56:00Z","lastTransitionTime":"2026-01-21T06:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:56:00 crc kubenswrapper[4893]: I0121 06:56:00.404430 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:56:00 crc kubenswrapper[4893]: I0121 06:56:00.404477 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:56:00 crc kubenswrapper[4893]: I0121 06:56:00.404486 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:56:00 crc kubenswrapper[4893]: I0121 06:56:00.404503 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:56:00 crc kubenswrapper[4893]: I0121 06:56:00.404514 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:56:00Z","lastTransitionTime":"2026-01-21T06:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:56:00 crc kubenswrapper[4893]: I0121 06:56:00.507482 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:56:00 crc kubenswrapper[4893]: I0121 06:56:00.507529 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:56:00 crc kubenswrapper[4893]: I0121 06:56:00.507542 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:56:00 crc kubenswrapper[4893]: I0121 06:56:00.507559 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:56:00 crc kubenswrapper[4893]: I0121 06:56:00.507570 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:56:00Z","lastTransitionTime":"2026-01-21T06:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:56:00 crc kubenswrapper[4893]: I0121 06:56:00.580796 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:56:00 crc kubenswrapper[4893]: I0121 06:56:00.580796 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:56:00 crc kubenswrapper[4893]: I0121 06:56:00.580808 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:56:00 crc kubenswrapper[4893]: E0121 06:56:00.581081 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:56:00 crc kubenswrapper[4893]: E0121 06:56:00.581283 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:56:00 crc kubenswrapper[4893]: E0121 06:56:00.581351 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:56:00 crc kubenswrapper[4893]: I0121 06:56:00.611174 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:56:00 crc kubenswrapper[4893]: I0121 06:56:00.611227 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:56:00 crc kubenswrapper[4893]: I0121 06:56:00.611244 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:56:00 crc kubenswrapper[4893]: I0121 06:56:00.611263 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:56:00 crc kubenswrapper[4893]: I0121 06:56:00.611277 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:56:00Z","lastTransitionTime":"2026-01-21T06:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:56:00 crc kubenswrapper[4893]: I0121 06:56:00.714214 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:56:00 crc kubenswrapper[4893]: I0121 06:56:00.714283 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:56:00 crc kubenswrapper[4893]: I0121 06:56:00.714344 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:56:00 crc kubenswrapper[4893]: I0121 06:56:00.714393 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:56:00 crc kubenswrapper[4893]: I0121 06:56:00.714410 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:56:00Z","lastTransitionTime":"2026-01-21T06:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:56:00 crc kubenswrapper[4893]: I0121 06:56:00.817619 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:56:00 crc kubenswrapper[4893]: I0121 06:56:00.817734 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:56:00 crc kubenswrapper[4893]: I0121 06:56:00.817758 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:56:00 crc kubenswrapper[4893]: I0121 06:56:00.817788 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:56:00 crc kubenswrapper[4893]: I0121 06:56:00.817809 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:56:00Z","lastTransitionTime":"2026-01-21T06:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:56:00 crc kubenswrapper[4893]: I0121 06:56:00.888919 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 07:52:08.179675877 +0000 UTC Jan 21 06:56:00 crc kubenswrapper[4893]: I0121 06:56:00.921100 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:56:00 crc kubenswrapper[4893]: I0121 06:56:00.921171 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:56:00 crc kubenswrapper[4893]: I0121 06:56:00.921196 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:56:00 crc kubenswrapper[4893]: I0121 06:56:00.921225 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:56:00 crc kubenswrapper[4893]: I0121 06:56:00.921247 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:56:00Z","lastTransitionTime":"2026-01-21T06:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:56:01 crc kubenswrapper[4893]: I0121 06:56:01.029523 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:56:01 crc kubenswrapper[4893]: I0121 06:56:01.029582 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:56:01 crc kubenswrapper[4893]: I0121 06:56:01.029591 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:56:01 crc kubenswrapper[4893]: I0121 06:56:01.029607 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:56:01 crc kubenswrapper[4893]: I0121 06:56:01.029616 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:56:01Z","lastTransitionTime":"2026-01-21T06:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:56:01 crc kubenswrapper[4893]: I0121 06:56:01.132986 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:56:01 crc kubenswrapper[4893]: I0121 06:56:01.133056 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:56:01 crc kubenswrapper[4893]: I0121 06:56:01.133079 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:56:01 crc kubenswrapper[4893]: I0121 06:56:01.133133 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:56:01 crc kubenswrapper[4893]: I0121 06:56:01.133156 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:56:01Z","lastTransitionTime":"2026-01-21T06:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:56:01 crc kubenswrapper[4893]: I0121 06:56:01.236270 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:56:01 crc kubenswrapper[4893]: I0121 06:56:01.236353 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:56:01 crc kubenswrapper[4893]: I0121 06:56:01.236379 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:56:01 crc kubenswrapper[4893]: I0121 06:56:01.236412 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:56:01 crc kubenswrapper[4893]: I0121 06:56:01.236438 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:56:01Z","lastTransitionTime":"2026-01-21T06:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:56:01 crc kubenswrapper[4893]: I0121 06:56:01.339877 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:56:01 crc kubenswrapper[4893]: I0121 06:56:01.339935 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:56:01 crc kubenswrapper[4893]: I0121 06:56:01.339947 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:56:01 crc kubenswrapper[4893]: I0121 06:56:01.339966 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:56:01 crc kubenswrapper[4893]: I0121 06:56:01.339979 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:56:01Z","lastTransitionTime":"2026-01-21T06:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:56:01 crc kubenswrapper[4893]: I0121 06:56:01.442488 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:56:01 crc kubenswrapper[4893]: I0121 06:56:01.442581 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:56:01 crc kubenswrapper[4893]: I0121 06:56:01.442595 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:56:01 crc kubenswrapper[4893]: I0121 06:56:01.442615 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:56:01 crc kubenswrapper[4893]: I0121 06:56:01.442626 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:56:01Z","lastTransitionTime":"2026-01-21T06:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:56:01 crc kubenswrapper[4893]: I0121 06:56:01.549225 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:56:01 crc kubenswrapper[4893]: I0121 06:56:01.549271 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:56:01 crc kubenswrapper[4893]: I0121 06:56:01.549282 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:56:01 crc kubenswrapper[4893]: I0121 06:56:01.549299 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:56:01 crc kubenswrapper[4893]: I0121 06:56:01.549310 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:56:01Z","lastTransitionTime":"2026-01-21T06:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:56:01 crc kubenswrapper[4893]: I0121 06:56:01.581019 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:56:01 crc kubenswrapper[4893]: E0121 06:56:01.581413 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rc5gb" podUID="e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8" Jan 21 06:56:01 crc kubenswrapper[4893]: I0121 06:56:01.581598 4893 scope.go:117] "RemoveContainer" containerID="70b2799a6ad8653010bec92688cf587a90a5a8bfa94c71d5151cf9ffe2ac65d7" Jan 21 06:56:01 crc kubenswrapper[4893]: E0121 06:56:01.581767 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qzsg6_openshift-ovn-kubernetes(6719fb30-da06-4964-b730-09e444618d94)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" podUID="6719fb30-da06-4964-b730-09e444618d94" Jan 21 06:56:01 crc kubenswrapper[4893]: I0121 06:56:01.652611 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:56:01 crc kubenswrapper[4893]: I0121 06:56:01.652717 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:56:01 crc kubenswrapper[4893]: I0121 06:56:01.652754 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:56:01 crc kubenswrapper[4893]: I0121 06:56:01.652777 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:56:01 crc kubenswrapper[4893]: I0121 06:56:01.652798 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:56:01Z","lastTransitionTime":"2026-01-21T06:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:56:01 crc kubenswrapper[4893]: I0121 06:56:01.755256 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:56:01 crc kubenswrapper[4893]: I0121 06:56:01.755329 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:56:01 crc kubenswrapper[4893]: I0121 06:56:01.755348 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:56:01 crc kubenswrapper[4893]: I0121 06:56:01.755374 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:56:01 crc kubenswrapper[4893]: I0121 06:56:01.755394 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:56:01Z","lastTransitionTime":"2026-01-21T06:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:56:01 crc kubenswrapper[4893]: I0121 06:56:01.858633 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:56:01 crc kubenswrapper[4893]: I0121 06:56:01.859092 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:56:01 crc kubenswrapper[4893]: I0121 06:56:01.859296 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:56:01 crc kubenswrapper[4893]: I0121 06:56:01.859507 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:56:01 crc kubenswrapper[4893]: I0121 06:56:01.859800 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:56:01Z","lastTransitionTime":"2026-01-21T06:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:56:01 crc kubenswrapper[4893]: I0121 06:56:01.889051 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 01:06:35.210902821 +0000 UTC Jan 21 06:56:01 crc kubenswrapper[4893]: I0121 06:56:01.908836 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8-metrics-certs\") pod \"network-metrics-daemon-rc5gb\" (UID: \"e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8\") " pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:56:01 crc kubenswrapper[4893]: E0121 06:56:01.909066 4893 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 06:56:01 crc kubenswrapper[4893]: E0121 06:56:01.909174 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8-metrics-certs podName:e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8 nodeName:}" failed. No retries permitted until 2026-01-21 06:57:05.90914525 +0000 UTC m=+167.139491192 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8-metrics-certs") pod "network-metrics-daemon-rc5gb" (UID: "e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 06:56:01 crc kubenswrapper[4893]: I0121 06:56:01.963059 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:56:01 crc kubenswrapper[4893]: I0121 06:56:01.963108 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:56:01 crc kubenswrapper[4893]: I0121 06:56:01.963122 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:56:01 crc kubenswrapper[4893]: I0121 06:56:01.963144 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:56:01 crc kubenswrapper[4893]: I0121 06:56:01.963161 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:56:01Z","lastTransitionTime":"2026-01-21T06:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:56:02 crc kubenswrapper[4893]: I0121 06:56:02.066002 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:56:02 crc kubenswrapper[4893]: I0121 06:56:02.066070 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:56:02 crc kubenswrapper[4893]: I0121 06:56:02.066092 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:56:02 crc kubenswrapper[4893]: I0121 06:56:02.066120 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:56:02 crc kubenswrapper[4893]: I0121 06:56:02.066142 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:56:02Z","lastTransitionTime":"2026-01-21T06:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
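The secret itself is the failure here, not the network: "object \"openshift-multus\"/\"metrics-daemon-secret\" not registered" means the kubelet's watch-based secret manager has no registration for that object yet (compare the "Caches populated" reflector lines that do appear for openshift-cluster-version at 06:56:03), so MountVolume.SetUp fails and the operation is requeued with the 1m4s durationBeforeRetry above. That 1m4s is consistent with a doubling backoff from a 500ms base, which is assumed here rather than taken from the log:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Volume-mount retry backoff assumed here: 500ms base, doubling per
    	// failure (a separate schedule from the container restart backoff).
    	// The eighth failure yields the 1m4s seen in the entry above.
    	d := 500 * time.Millisecond
    	for i := 1; i <= 8; i++ {
    		fmt.Printf("retry %d: next attempt in %v\n", i, d)
    		d *= 2
    	}
    }
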
Has your network provider started?"} Jan 21 06:56:02 crc kubenswrapper[4893]: I0121 06:56:02.169339 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:56:02 crc kubenswrapper[4893]: I0121 06:56:02.169406 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:56:02 crc kubenswrapper[4893]: I0121 06:56:02.169423 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:56:02 crc kubenswrapper[4893]: I0121 06:56:02.169447 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:56:02 crc kubenswrapper[4893]: I0121 06:56:02.169463 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:56:02Z","lastTransitionTime":"2026-01-21T06:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:56:02 crc kubenswrapper[4893]: I0121 06:56:02.272416 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:56:02 crc kubenswrapper[4893]: I0121 06:56:02.272458 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:56:02 crc kubenswrapper[4893]: I0121 06:56:02.272469 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:56:02 crc kubenswrapper[4893]: I0121 06:56:02.272484 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:56:02 crc kubenswrapper[4893]: I0121 06:56:02.272496 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:56:02Z","lastTransitionTime":"2026-01-21T06:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:56:02 crc kubenswrapper[4893]: I0121 06:56:02.375373 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:56:02 crc kubenswrapper[4893]: I0121 06:56:02.375418 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:56:02 crc kubenswrapper[4893]: I0121 06:56:02.375430 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:56:02 crc kubenswrapper[4893]: I0121 06:56:02.375447 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:56:02 crc kubenswrapper[4893]: I0121 06:56:02.375456 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:56:02Z","lastTransitionTime":"2026-01-21T06:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:56:02 crc kubenswrapper[4893]: I0121 06:56:02.477427 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:56:02 crc kubenswrapper[4893]: I0121 06:56:02.477498 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:56:02 crc kubenswrapper[4893]: I0121 06:56:02.477510 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:56:02 crc kubenswrapper[4893]: I0121 06:56:02.477526 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:56:02 crc kubenswrapper[4893]: I0121 06:56:02.477536 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:56:02Z","lastTransitionTime":"2026-01-21T06:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:56:02 crc kubenswrapper[4893]: I0121 06:56:02.579876 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:56:02 crc kubenswrapper[4893]: I0121 06:56:02.579905 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:56:02 crc kubenswrapper[4893]: I0121 06:56:02.579875 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:56:02 crc kubenswrapper[4893]: E0121 06:56:02.580025 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:56:02 crc kubenswrapper[4893]: E0121 06:56:02.580110 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:56:02 crc kubenswrapper[4893]: E0121 06:56:02.580187 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:56:02 crc kubenswrapper[4893]: I0121 06:56:02.580443 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:56:02 crc kubenswrapper[4893]: I0121 06:56:02.580499 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:56:02 crc kubenswrapper[4893]: I0121 06:56:02.580516 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:56:02 crc kubenswrapper[4893]: I0121 06:56:02.580538 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:56:02 crc kubenswrapper[4893]: I0121 06:56:02.580555 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:56:02Z","lastTransitionTime":"2026-01-21T06:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:56:02 crc kubenswrapper[4893]: I0121 06:56:02.682522 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:56:02 crc kubenswrapper[4893]: I0121 06:56:02.682575 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:56:02 crc kubenswrapper[4893]: I0121 06:56:02.682590 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:56:02 crc kubenswrapper[4893]: I0121 06:56:02.682610 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:56:02 crc kubenswrapper[4893]: I0121 06:56:02.682625 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:56:02Z","lastTransitionTime":"2026-01-21T06:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:56:02 crc kubenswrapper[4893]: I0121 06:56:02.785804 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:56:02 crc kubenswrapper[4893]: I0121 06:56:02.785879 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:56:02 crc kubenswrapper[4893]: I0121 06:56:02.785904 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:56:02 crc kubenswrapper[4893]: I0121 06:56:02.785935 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:56:02 crc kubenswrapper[4893]: I0121 06:56:02.785956 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:56:02Z","lastTransitionTime":"2026-01-21T06:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:56:02 crc kubenswrapper[4893]: I0121 06:56:02.888466 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:56:02 crc kubenswrapper[4893]: I0121 06:56:02.888533 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:56:02 crc kubenswrapper[4893]: I0121 06:56:02.888546 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:56:02 crc kubenswrapper[4893]: I0121 06:56:02.888564 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:56:02 crc kubenswrapper[4893]: I0121 06:56:02.888580 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:56:02Z","lastTransitionTime":"2026-01-21T06:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:56:02 crc kubenswrapper[4893]: I0121 06:56:02.889622 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 20:52:39.937424665 +0000 UTC Jan 21 06:56:02 crc kubenswrapper[4893]: I0121 06:56:02.992073 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:56:02 crc kubenswrapper[4893]: I0121 06:56:02.992119 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:56:02 crc kubenswrapper[4893]: I0121 06:56:02.992128 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:56:02 crc kubenswrapper[4893]: I0121 06:56:02.992147 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:56:02 crc kubenswrapper[4893]: I0121 06:56:02.992160 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:56:02Z","lastTransitionTime":"2026-01-21T06:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:56:03 crc kubenswrapper[4893]: I0121 06:56:03.094872 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:56:03 crc kubenswrapper[4893]: I0121 06:56:03.094932 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:56:03 crc kubenswrapper[4893]: I0121 06:56:03.094949 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:56:03 crc kubenswrapper[4893]: I0121 06:56:03.094972 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:56:03 crc kubenswrapper[4893]: I0121 06:56:03.094990 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:56:03Z","lastTransitionTime":"2026-01-21T06:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:56:03 crc kubenswrapper[4893]: I0121 06:56:03.198660 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:56:03 crc kubenswrapper[4893]: I0121 06:56:03.198785 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:56:03 crc kubenswrapper[4893]: I0121 06:56:03.198805 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:56:03 crc kubenswrapper[4893]: I0121 06:56:03.198836 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:56:03 crc kubenswrapper[4893]: I0121 06:56:03.198860 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:56:03Z","lastTransitionTime":"2026-01-21T06:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:56:03 crc kubenswrapper[4893]: I0121 06:56:03.211534 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:56:03 crc kubenswrapper[4893]: I0121 06:56:03.211603 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:56:03 crc kubenswrapper[4893]: I0121 06:56:03.211629 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:56:03 crc kubenswrapper[4893]: I0121 06:56:03.211724 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:56:03 crc kubenswrapper[4893]: I0121 06:56:03.211753 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:56:03Z","lastTransitionTime":"2026-01-21T06:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:56:03 crc kubenswrapper[4893]: I0121 06:56:03.275794 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p7vw6" podStartSLOduration=81.275773841 podStartE2EDuration="1m21.275773841s" podCreationTimestamp="2026-01-21 06:54:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:55:59.999080671 +0000 UTC m=+101.229426573" watchObservedRunningTime="2026-01-21 06:56:03.275773841 +0000 UTC m=+104.506119753" Jan 21 06:56:03 crc kubenswrapper[4893]: I0121 06:56:03.276067 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-qh48j"] Jan 21 06:56:03 crc kubenswrapper[4893]: I0121 06:56:03.276659 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qh48j" Jan 21 06:56:03 crc kubenswrapper[4893]: I0121 06:56:03.281638 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 21 06:56:03 crc kubenswrapper[4893]: I0121 06:56:03.281944 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 21 06:56:03 crc kubenswrapper[4893]: I0121 06:56:03.282311 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 21 06:56:03 crc kubenswrapper[4893]: I0121 06:56:03.282423 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 21 06:56:03 crc kubenswrapper[4893]: I0121 06:56:03.323193 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ab1699d9-b8ec-4aa7-83fc-64434ad5873e-service-ca\") pod \"cluster-version-operator-5c965bbfc6-qh48j\" (UID: \"ab1699d9-b8ec-4aa7-83fc-64434ad5873e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qh48j" Jan 21 06:56:03 crc kubenswrapper[4893]: I0121 06:56:03.323292 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab1699d9-b8ec-4aa7-83fc-64434ad5873e-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-qh48j\" (UID: \"ab1699d9-b8ec-4aa7-83fc-64434ad5873e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qh48j" Jan 21 06:56:03 crc kubenswrapper[4893]: I0121 06:56:03.323350 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ab1699d9-b8ec-4aa7-83fc-64434ad5873e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-qh48j\" (UID: \"ab1699d9-b8ec-4aa7-83fc-64434ad5873e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qh48j" Jan 21 06:56:03 crc kubenswrapper[4893]: I0121 06:56:03.323403 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ab1699d9-b8ec-4aa7-83fc-64434ad5873e-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-qh48j\" (UID: \"ab1699d9-b8ec-4aa7-83fc-64434ad5873e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qh48j" Jan 21 06:56:03 crc kubenswrapper[4893]: I0121 06:56:03.323451 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ab1699d9-b8ec-4aa7-83fc-64434ad5873e-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-qh48j\" (UID: \"ab1699d9-b8ec-4aa7-83fc-64434ad5873e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qh48j" Jan 21 06:56:03 crc kubenswrapper[4893]: I0121 06:56:03.424334 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ab1699d9-b8ec-4aa7-83fc-64434ad5873e-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-qh48j\" (UID: \"ab1699d9-b8ec-4aa7-83fc-64434ad5873e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qh48j" Jan 21 06:56:03 crc 
kubenswrapper[4893]: I0121 06:56:03.424425 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ab1699d9-b8ec-4aa7-83fc-64434ad5873e-service-ca\") pod \"cluster-version-operator-5c965bbfc6-qh48j\" (UID: \"ab1699d9-b8ec-4aa7-83fc-64434ad5873e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qh48j" Jan 21 06:56:03 crc kubenswrapper[4893]: I0121 06:56:03.424471 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ab1699d9-b8ec-4aa7-83fc-64434ad5873e-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-qh48j\" (UID: \"ab1699d9-b8ec-4aa7-83fc-64434ad5873e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qh48j" Jan 21 06:56:03 crc kubenswrapper[4893]: I0121 06:56:03.424482 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab1699d9-b8ec-4aa7-83fc-64434ad5873e-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-qh48j\" (UID: \"ab1699d9-b8ec-4aa7-83fc-64434ad5873e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qh48j" Jan 21 06:56:03 crc kubenswrapper[4893]: I0121 06:56:03.424581 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ab1699d9-b8ec-4aa7-83fc-64434ad5873e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-qh48j\" (UID: \"ab1699d9-b8ec-4aa7-83fc-64434ad5873e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qh48j" Jan 21 06:56:03 crc kubenswrapper[4893]: I0121 06:56:03.424616 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ab1699d9-b8ec-4aa7-83fc-64434ad5873e-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-qh48j\" (UID: \"ab1699d9-b8ec-4aa7-83fc-64434ad5873e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qh48j" Jan 21 06:56:03 crc kubenswrapper[4893]: I0121 06:56:03.424808 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ab1699d9-b8ec-4aa7-83fc-64434ad5873e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-qh48j\" (UID: \"ab1699d9-b8ec-4aa7-83fc-64434ad5873e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qh48j" Jan 21 06:56:03 crc kubenswrapper[4893]: I0121 06:56:03.425755 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ab1699d9-b8ec-4aa7-83fc-64434ad5873e-service-ca\") pod \"cluster-version-operator-5c965bbfc6-qh48j\" (UID: \"ab1699d9-b8ec-4aa7-83fc-64434ad5873e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qh48j" Jan 21 06:56:03 crc kubenswrapper[4893]: I0121 06:56:03.431349 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab1699d9-b8ec-4aa7-83fc-64434ad5873e-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-qh48j\" (UID: \"ab1699d9-b8ec-4aa7-83fc-64434ad5873e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qh48j" Jan 21 06:56:03 crc kubenswrapper[4893]: I0121 06:56:03.448463 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/ab1699d9-b8ec-4aa7-83fc-64434ad5873e-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-qh48j\" (UID: \"ab1699d9-b8ec-4aa7-83fc-64434ad5873e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qh48j" Jan 21 06:56:03 crc kubenswrapper[4893]: I0121 06:56:03.580454 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:56:03 crc kubenswrapper[4893]: E0121 06:56:03.581169 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rc5gb" podUID="e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8" Jan 21 06:56:03 crc kubenswrapper[4893]: I0121 06:56:03.599527 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qh48j" Jan 21 06:56:03 crc kubenswrapper[4893]: I0121 06:56:03.890583 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 23:33:15.342145931 +0000 UTC Jan 21 06:56:03 crc kubenswrapper[4893]: I0121 06:56:03.890652 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 21 06:56:03 crc kubenswrapper[4893]: I0121 06:56:03.900777 4893 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 21 06:56:04 crc kubenswrapper[4893]: I0121 06:56:04.238702 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qh48j" event={"ID":"ab1699d9-b8ec-4aa7-83fc-64434ad5873e","Type":"ContainerStarted","Data":"4eaa8ba99eb2859ec112fd0888ee1f3b220cc70c293648a546336b546ad2fec7"} Jan 21 06:56:04 crc kubenswrapper[4893]: I0121 06:56:04.238756 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qh48j" event={"ID":"ab1699d9-b8ec-4aa7-83fc-64434ad5873e","Type":"ContainerStarted","Data":"edc0bc0bdd150c960a918009972dbb04dd69175ddbf77195931ba261b6a13070"} Jan 21 06:56:04 crc kubenswrapper[4893]: I0121 06:56:04.257115 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qh48j" podStartSLOduration=83.257091997 podStartE2EDuration="1m23.257091997s" podCreationTimestamp="2026-01-21 06:54:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:56:04.256495928 +0000 UTC m=+105.486841840" watchObservedRunningTime="2026-01-21 06:56:04.257091997 +0000 UTC m=+105.487437909" Jan 21 06:56:04 crc kubenswrapper[4893]: I0121 06:56:04.580082 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:56:04 crc kubenswrapper[4893]: I0121 06:56:04.580112 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:56:04 crc kubenswrapper[4893]: E0121 06:56:04.580308 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:56:04 crc kubenswrapper[4893]: E0121 06:56:04.580413 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:56:04 crc kubenswrapper[4893]: I0121 06:56:04.580863 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:56:04 crc kubenswrapper[4893]: E0121 06:56:04.581032 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:56:05 crc kubenswrapper[4893]: I0121 06:56:05.580877 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:56:05 crc kubenswrapper[4893]: E0121 06:56:05.581171 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rc5gb" podUID="e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8" Jan 21 06:56:06 crc kubenswrapper[4893]: I0121 06:56:06.580210 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:56:06 crc kubenswrapper[4893]: I0121 06:56:06.580301 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:56:06 crc kubenswrapper[4893]: E0121 06:56:06.580384 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:56:06 crc kubenswrapper[4893]: I0121 06:56:06.580430 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:56:06 crc kubenswrapper[4893]: E0121 06:56:06.580440 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:56:06 crc kubenswrapper[4893]: E0121 06:56:06.580717 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:56:07 crc kubenswrapper[4893]: I0121 06:56:07.580118 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:56:07 crc kubenswrapper[4893]: E0121 06:56:07.580271 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rc5gb" podUID="e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8" Jan 21 06:56:08 crc kubenswrapper[4893]: I0121 06:56:08.580454 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:56:08 crc kubenswrapper[4893]: E0121 06:56:08.581323 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:56:08 crc kubenswrapper[4893]: I0121 06:56:08.580495 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:56:08 crc kubenswrapper[4893]: I0121 06:56:08.580472 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:56:08 crc kubenswrapper[4893]: E0121 06:56:08.581739 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:56:08 crc kubenswrapper[4893]: E0121 06:56:08.581636 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:56:09 crc kubenswrapper[4893]: I0121 06:56:09.580298 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:56:09 crc kubenswrapper[4893]: E0121 06:56:09.582255 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rc5gb" podUID="e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8" Jan 21 06:56:10 crc kubenswrapper[4893]: I0121 06:56:10.580825 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:56:10 crc kubenswrapper[4893]: I0121 06:56:10.580904 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:56:10 crc kubenswrapper[4893]: E0121 06:56:10.580990 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:56:10 crc kubenswrapper[4893]: I0121 06:56:10.581209 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:56:10 crc kubenswrapper[4893]: E0121 06:56:10.581322 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:56:10 crc kubenswrapper[4893]: E0121 06:56:10.581194 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:56:11 crc kubenswrapper[4893]: I0121 06:56:11.579928 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:56:11 crc kubenswrapper[4893]: E0121 06:56:11.580087 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rc5gb" podUID="e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8" Jan 21 06:56:12 crc kubenswrapper[4893]: I0121 06:56:12.580205 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:56:12 crc kubenswrapper[4893]: I0121 06:56:12.580206 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:56:12 crc kubenswrapper[4893]: I0121 06:56:12.580320 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:56:12 crc kubenswrapper[4893]: E0121 06:56:12.580444 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:56:12 crc kubenswrapper[4893]: E0121 06:56:12.580533 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:56:12 crc kubenswrapper[4893]: E0121 06:56:12.580577 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:56:13 crc kubenswrapper[4893]: I0121 06:56:13.580974 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:56:13 crc kubenswrapper[4893]: E0121 06:56:13.581557 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rc5gb" podUID="e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8" Jan 21 06:56:13 crc kubenswrapper[4893]: I0121 06:56:13.582117 4893 scope.go:117] "RemoveContainer" containerID="70b2799a6ad8653010bec92688cf587a90a5a8bfa94c71d5151cf9ffe2ac65d7" Jan 21 06:56:13 crc kubenswrapper[4893]: E0121 06:56:13.582405 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qzsg6_openshift-ovn-kubernetes(6719fb30-da06-4964-b730-09e444618d94)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" podUID="6719fb30-da06-4964-b730-09e444618d94" Jan 21 06:56:14 crc kubenswrapper[4893]: I0121 06:56:14.580781 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:56:14 crc kubenswrapper[4893]: I0121 06:56:14.580820 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:56:14 crc kubenswrapper[4893]: I0121 06:56:14.580852 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:56:14 crc kubenswrapper[4893]: E0121 06:56:14.581901 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:56:14 crc kubenswrapper[4893]: E0121 06:56:14.581973 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:56:14 crc kubenswrapper[4893]: E0121 06:56:14.582096 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:56:15 crc kubenswrapper[4893]: I0121 06:56:15.627484 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:56:15 crc kubenswrapper[4893]: E0121 06:56:15.627663 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rc5gb" podUID="e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8" Jan 21 06:56:16 crc kubenswrapper[4893]: I0121 06:56:16.580276 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:56:16 crc kubenswrapper[4893]: I0121 06:56:16.580303 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:56:16 crc kubenswrapper[4893]: I0121 06:56:16.580369 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:56:16 crc kubenswrapper[4893]: E0121 06:56:16.581080 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:56:16 crc kubenswrapper[4893]: E0121 06:56:16.581274 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:56:16 crc kubenswrapper[4893]: E0121 06:56:16.581519 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:56:17 crc kubenswrapper[4893]: I0121 06:56:17.580479 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:56:17 crc kubenswrapper[4893]: E0121 06:56:17.580713 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rc5gb" podUID="e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8" Jan 21 06:56:18 crc kubenswrapper[4893]: I0121 06:56:18.580768 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:56:18 crc kubenswrapper[4893]: I0121 06:56:18.580888 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:56:18 crc kubenswrapper[4893]: E0121 06:56:18.580914 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:56:18 crc kubenswrapper[4893]: I0121 06:56:18.580791 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:56:18 crc kubenswrapper[4893]: E0121 06:56:18.581091 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:56:18 crc kubenswrapper[4893]: E0121 06:56:18.581190 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:56:19 crc kubenswrapper[4893]: I0121 06:56:19.580056 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:56:19 crc kubenswrapper[4893]: E0121 06:56:19.581396 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rc5gb" podUID="e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8" Jan 21 06:56:19 crc kubenswrapper[4893]: E0121 06:56:19.605268 4893 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 21 06:56:19 crc kubenswrapper[4893]: E0121 06:56:19.691357 4893 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
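
The sync failures above repeat every few seconds for the same four pods (network-metrics-daemon-rc5gb, network-check-source, network-check-target, and networking-console-plugin), all with the identical NetworkPluginNotReady reason. A quick way to confirm that this is the only thing blocking those pods is to tally failures per pod straight from this journal text. The following is a minimal stand-alone sketch, assuming the log is piped on stdin (for example from journalctl); it relies only on the pod="..." field visible in the entries above.

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strings"
)

func main() {
	// Matches the pod="..." field on the "Error syncing pod" entries above.
	podField := regexp.MustCompile(`pod="([^"]+)"`)
	counts := map[string]int{}

	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		line := sc.Text()
		// Only count the NetworkPluginNotReady sync errors, not other failures.
		if !strings.Contains(line, "NetworkPluginNotReady") {
			continue
		}
		if m := podField.FindStringSubmatch(line); m != nil {
			counts[m[1]]++
		}
	}
	for pod, n := range counts {
		fmt.Printf("%d\t%s\n", n, pod)
	}
}

Run against this excerpt it reports exactly those four pods and nothing else, which localizes the problem to the pod network rather than to the individual workloads.
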
Jan 21 06:56:20 crc kubenswrapper[4893]: I0121 06:56:20.300019 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-m8k4g_ecb64775-90e7-43a2-a5a8-4d73e348dcc4/kube-multus/1.log" Jan 21 06:56:20 crc kubenswrapper[4893]: I0121 06:56:20.300539 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-m8k4g_ecb64775-90e7-43a2-a5a8-4d73e348dcc4/kube-multus/0.log" Jan 21 06:56:20 crc kubenswrapper[4893]: I0121 06:56:20.300581 4893 generic.go:334] "Generic (PLEG): container finished" podID="ecb64775-90e7-43a2-a5a8-4d73e348dcc4" containerID="11d8bbd1c92382018299e790a7597f3f588b11c6465db90a876cc98e1d10d4a9" exitCode=1 Jan 21 06:56:20 crc kubenswrapper[4893]: I0121 06:56:20.300611 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-m8k4g" event={"ID":"ecb64775-90e7-43a2-a5a8-4d73e348dcc4","Type":"ContainerDied","Data":"11d8bbd1c92382018299e790a7597f3f588b11c6465db90a876cc98e1d10d4a9"} Jan 21 06:56:20 crc kubenswrapper[4893]: I0121 06:56:20.300646 4893 scope.go:117] "RemoveContainer" containerID="1f4a3074a4406cdbdf07c7289f9304d66e2b84b46bf0ac9c6aadf31817539dda" Jan 21 06:56:20 crc kubenswrapper[4893]: I0121 06:56:20.301018 4893 scope.go:117] "RemoveContainer" containerID="11d8bbd1c92382018299e790a7597f3f588b11c6465db90a876cc98e1d10d4a9" Jan 21 06:56:20 crc kubenswrapper[4893]: E0121 06:56:20.301182 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-m8k4g_openshift-multus(ecb64775-90e7-43a2-a5a8-4d73e348dcc4)\"" pod="openshift-multus/multus-m8k4g" podUID="ecb64775-90e7-43a2-a5a8-4d73e348dcc4" Jan 21 06:56:20 crc kubenswrapper[4893]: I0121 06:56:20.580111 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:56:20 crc kubenswrapper[4893]: I0121 06:56:20.580167 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:56:20 crc kubenswrapper[4893]: I0121 06:56:20.580197 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:56:20 crc kubenswrapper[4893]: E0121 06:56:20.580891 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:56:20 crc kubenswrapper[4893]: E0121 06:56:20.581050 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:56:20 crc kubenswrapper[4893]: E0121 06:56:20.581356 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:56:21 crc kubenswrapper[4893]: I0121 06:56:21.305916 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-m8k4g_ecb64775-90e7-43a2-a5a8-4d73e348dcc4/kube-multus/1.log" Jan 21 06:56:21 crc kubenswrapper[4893]: I0121 06:56:21.580566 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:56:21 crc kubenswrapper[4893]: E0121 06:56:21.580764 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rc5gb" podUID="e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8" Jan 21 06:56:22 crc kubenswrapper[4893]: I0121 06:56:22.579894 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:56:22 crc kubenswrapper[4893]: I0121 06:56:22.579983 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:56:22 crc kubenswrapper[4893]: E0121 06:56:22.580322 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:56:22 crc kubenswrapper[4893]: E0121 06:56:22.580715 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:56:22 crc kubenswrapper[4893]: I0121 06:56:22.595306 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:56:22 crc kubenswrapper[4893]: E0121 06:56:22.595501 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:56:23 crc kubenswrapper[4893]: I0121 06:56:23.584201 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:56:23 crc kubenswrapper[4893]: E0121 06:56:23.585179 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rc5gb" podUID="e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8" Jan 21 06:56:24 crc kubenswrapper[4893]: I0121 06:56:24.581025 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:56:24 crc kubenswrapper[4893]: E0121 06:56:24.581859 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:56:24 crc kubenswrapper[4893]: I0121 06:56:24.581086 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:56:24 crc kubenswrapper[4893]: I0121 06:56:24.581015 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:56:24 crc kubenswrapper[4893]: E0121 06:56:24.582458 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:56:24 crc kubenswrapper[4893]: E0121 06:56:24.582719 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:56:24 crc kubenswrapper[4893]: E0121 06:56:24.692476 4893 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 21 06:56:25 crc kubenswrapper[4893]: I0121 06:56:25.580894 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:56:25 crc kubenswrapper[4893]: E0121 06:56:25.581764 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rc5gb" podUID="e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8" Jan 21 06:56:25 crc kubenswrapper[4893]: I0121 06:56:25.582186 4893 scope.go:117] "RemoveContainer" containerID="70b2799a6ad8653010bec92688cf587a90a5a8bfa94c71d5151cf9ffe2ac65d7" Jan 21 06:56:25 crc kubenswrapper[4893]: E0121 06:56:25.582466 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qzsg6_openshift-ovn-kubernetes(6719fb30-da06-4964-b730-09e444618d94)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" podUID="6719fb30-da06-4964-b730-09e444618d94" Jan 21 06:56:26 crc kubenswrapper[4893]: I0121 06:56:26.580132 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:56:26 crc kubenswrapper[4893]: I0121 06:56:26.580193 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:56:26 crc kubenswrapper[4893]: I0121 06:56:26.580247 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:56:26 crc kubenswrapper[4893]: E0121 06:56:26.580321 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:56:26 crc kubenswrapper[4893]: E0121 06:56:26.580445 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:56:26 crc kubenswrapper[4893]: E0121 06:56:26.580600 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:56:27 crc kubenswrapper[4893]: I0121 06:56:27.580530 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:56:27 crc kubenswrapper[4893]: E0121 06:56:27.581406 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rc5gb" podUID="e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8" Jan 21 06:56:28 crc kubenswrapper[4893]: I0121 06:56:28.580202 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:56:28 crc kubenswrapper[4893]: E0121 06:56:28.580319 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:56:28 crc kubenswrapper[4893]: I0121 06:56:28.581011 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:56:28 crc kubenswrapper[4893]: E0121 06:56:28.581248 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:56:28 crc kubenswrapper[4893]: I0121 06:56:28.581013 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:56:28 crc kubenswrapper[4893]: E0121 06:56:28.581406 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:56:29 crc kubenswrapper[4893]: I0121 06:56:29.580969 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:56:29 crc kubenswrapper[4893]: E0121 06:56:29.582845 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rc5gb" podUID="e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8" Jan 21 06:56:29 crc kubenswrapper[4893]: E0121 06:56:29.693756 4893 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Jan 21 06:56:30 crc kubenswrapper[4893]: I0121 06:56:30.580754 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:56:30 crc kubenswrapper[4893]: E0121 06:56:30.580911 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:56:30 crc kubenswrapper[4893]: I0121 06:56:30.581099 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:56:30 crc kubenswrapper[4893]: I0121 06:56:30.581142 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:56:30 crc kubenswrapper[4893]: E0121 06:56:30.582081 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:56:30 crc kubenswrapper[4893]: E0121 06:56:30.582358 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:56:31 crc kubenswrapper[4893]: I0121 06:56:31.580624 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:56:31 crc kubenswrapper[4893]: E0121 06:56:31.580913 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rc5gb" podUID="e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8" Jan 21 06:56:32 crc kubenswrapper[4893]: I0121 06:56:32.580914 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:56:32 crc kubenswrapper[4893]: I0121 06:56:32.580963 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:56:32 crc kubenswrapper[4893]: I0121 06:56:32.580925 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:56:32 crc kubenswrapper[4893]: E0121 06:56:32.581151 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:56:32 crc kubenswrapper[4893]: E0121 06:56:32.581316 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:56:32 crc kubenswrapper[4893]: E0121 06:56:32.581409 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:56:33 crc kubenswrapper[4893]: I0121 06:56:33.580186 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:56:33 crc kubenswrapper[4893]: E0121 06:56:33.580586 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rc5gb" podUID="e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8" Jan 21 06:56:33 crc kubenswrapper[4893]: I0121 06:56:33.581959 4893 scope.go:117] "RemoveContainer" containerID="11d8bbd1c92382018299e790a7597f3f588b11c6465db90a876cc98e1d10d4a9" Jan 21 06:56:34 crc kubenswrapper[4893]: I0121 06:56:34.371420 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-m8k4g_ecb64775-90e7-43a2-a5a8-4d73e348dcc4/kube-multus/1.log" Jan 21 06:56:34 crc kubenswrapper[4893]: I0121 06:56:34.371820 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-m8k4g" event={"ID":"ecb64775-90e7-43a2-a5a8-4d73e348dcc4","Type":"ContainerStarted","Data":"195c12ac6c297634c8ec3caa12286ce86474bd4ffa41f09ca2b9933123488f7c"} Jan 21 06:56:34 crc kubenswrapper[4893]: I0121 06:56:34.580762 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:56:34 crc kubenswrapper[4893]: I0121 06:56:34.580814 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:56:34 crc kubenswrapper[4893]: I0121 06:56:34.580765 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:56:34 crc kubenswrapper[4893]: E0121 06:56:34.580917 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:56:34 crc kubenswrapper[4893]: E0121 06:56:34.581029 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:56:34 crc kubenswrapper[4893]: E0121 06:56:34.581129 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:56:34 crc kubenswrapper[4893]: E0121 06:56:34.695755 4893 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 21 06:56:35 crc kubenswrapper[4893]: I0121 06:56:35.580755 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:56:35 crc kubenswrapper[4893]: E0121 06:56:35.580981 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rc5gb" podUID="e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8" Jan 21 06:56:36 crc kubenswrapper[4893]: I0121 06:56:36.671259 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:56:36 crc kubenswrapper[4893]: I0121 06:56:36.671350 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:56:36 crc kubenswrapper[4893]: I0121 06:56:36.671366 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:56:36 crc kubenswrapper[4893]: E0121 06:56:36.671507 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rc5gb" podUID="e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8" Jan 21 06:56:36 crc kubenswrapper[4893]: I0121 06:56:36.671718 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:56:36 crc kubenswrapper[4893]: E0121 06:56:36.671802 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:56:36 crc kubenswrapper[4893]: E0121 06:56:36.671886 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:56:36 crc kubenswrapper[4893]: E0121 06:56:36.671965 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:56:37 crc kubenswrapper[4893]: I0121 06:56:37.581849 4893 scope.go:117] "RemoveContainer" containerID="70b2799a6ad8653010bec92688cf587a90a5a8bfa94c71d5151cf9ffe2ac65d7" Jan 21 06:56:38 crc kubenswrapper[4893]: I0121 06:56:38.386423 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qzsg6_6719fb30-da06-4964-b730-09e444618d94/ovnkube-controller/3.log" Jan 21 06:56:38 crc kubenswrapper[4893]: I0121 06:56:38.390017 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" event={"ID":"6719fb30-da06-4964-b730-09e444618d94","Type":"ContainerStarted","Data":"7119163b616f0932c423835ba174ca55866abf7abad503517ec73241844c5f85"} Jan 21 06:56:38 crc kubenswrapper[4893]: I0121 06:56:38.390514 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" Jan 21 06:56:38 crc kubenswrapper[4893]: I0121 06:56:38.427709 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" podStartSLOduration=117.427665278 podStartE2EDuration="1m57.427665278s" podCreationTimestamp="2026-01-21 06:54:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:56:38.427187793 +0000 UTC m=+139.657533715" watchObservedRunningTime="2026-01-21 06:56:38.427665278 +0000 UTC m=+139.658011180" Jan 21 06:56:38 crc kubenswrapper[4893]: I0121 06:56:38.580733 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:56:38 crc kubenswrapper[4893]: I0121 06:56:38.580775 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:56:38 crc kubenswrapper[4893]: I0121 06:56:38.580797 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:56:38 crc kubenswrapper[4893]: I0121 06:56:38.580742 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:56:38 crc kubenswrapper[4893]: E0121 06:56:38.580875 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rc5gb" podUID="e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8" Jan 21 06:56:38 crc kubenswrapper[4893]: E0121 06:56:38.580944 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:56:38 crc kubenswrapper[4893]: E0121 06:56:38.581087 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:56:38 crc kubenswrapper[4893]: E0121 06:56:38.581206 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:56:38 crc kubenswrapper[4893]: I0121 06:56:38.586965 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-rc5gb"] Jan 21 06:56:39 crc kubenswrapper[4893]: I0121 06:56:39.394834 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:56:39 crc kubenswrapper[4893]: E0121 06:56:39.395782 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rc5gb" podUID="e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8" Jan 21 06:56:39 crc kubenswrapper[4893]: E0121 06:56:39.696896 4893 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
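
Note the two different CrashLoopBackOff delays quoted above: "back-off 10s" for kube-multus and "back-off 40s" for ovnkube-controller. That is consistent with the usual description of kubelet's container restart back-off as a delay that starts at 10s and doubles per failed restart up to a 5m cap, so 10s corresponds to a first restart and 40s to a third. The exact policy lives in kubelet internals; the sketch below is illustrative under that assumption, not a transcription of kubelet code.

package main

import (
	"fmt"
	"time"
)

// backOff returns the assumed delay before restart attempt n (1-based):
// 10s, doubling per failed restart, capped at 5m.
func backOff(n int) time.Duration {
	d := 10 * time.Second
	for i := 1; i < n; i++ {
		d *= 2
	}
	if d > 5*time.Minute {
		d = 5 * time.Minute
	}
	return d
}

func main() {
	for n := 1; n <= 6; n++ {
		fmt.Printf("restart %d: back-off %v\n", n, backOff(n))
	}
}

This prints 10s, 20s, 40s, 1m20s, 2m40s, 5m0s; the 40s step is the window the ovnkube-controller restarts were waiting out before the successful start logged at 06:56:38.
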
Jan 21 06:56:40 crc kubenswrapper[4893]: I0121 06:56:40.580403 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:56:40 crc kubenswrapper[4893]: I0121 06:56:40.580464 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:56:40 crc kubenswrapper[4893]: E0121 06:56:40.580520 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:56:40 crc kubenswrapper[4893]: E0121 06:56:40.580586 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:56:40 crc kubenswrapper[4893]: I0121 06:56:40.580727 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:56:40 crc kubenswrapper[4893]: E0121 06:56:40.580799 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:56:41 crc kubenswrapper[4893]: I0121 06:56:41.579967 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:56:41 crc kubenswrapper[4893]: E0121 06:56:41.580175 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rc5gb" podUID="e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8" Jan 21 06:56:42 crc kubenswrapper[4893]: I0121 06:56:42.580655 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:56:42 crc kubenswrapper[4893]: I0121 06:56:42.580654 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:56:42 crc kubenswrapper[4893]: E0121 06:56:42.580887 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:56:42 crc kubenswrapper[4893]: I0121 06:56:42.580660 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:56:42 crc kubenswrapper[4893]: E0121 06:56:42.581091 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:56:42 crc kubenswrapper[4893]: E0121 06:56:42.581226 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:56:43 crc kubenswrapper[4893]: I0121 06:56:43.582918 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:56:43 crc kubenswrapper[4893]: E0121 06:56:43.583061 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rc5gb" podUID="e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8" Jan 21 06:56:44 crc kubenswrapper[4893]: I0121 06:56:44.580593 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:56:44 crc kubenswrapper[4893]: I0121 06:56:44.580773 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:56:44 crc kubenswrapper[4893]: E0121 06:56:44.580787 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:56:44 crc kubenswrapper[4893]: I0121 06:56:44.580876 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:56:44 crc kubenswrapper[4893]: E0121 06:56:44.580914 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:56:44 crc kubenswrapper[4893]: E0121 06:56:44.581007 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:56:45 crc kubenswrapper[4893]: I0121 06:56:45.580431 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rc5gb" Jan 21 06:56:45 crc kubenswrapper[4893]: I0121 06:56:45.583781 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 21 06:56:45 crc kubenswrapper[4893]: I0121 06:56:45.584787 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 21 06:56:46 crc kubenswrapper[4893]: I0121 06:56:46.580856 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:56:46 crc kubenswrapper[4893]: I0121 06:56:46.580909 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:56:46 crc kubenswrapper[4893]: I0121 06:56:46.580952 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:56:46 crc kubenswrapper[4893]: I0121 06:56:46.583660 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 21 06:56:46 crc kubenswrapper[4893]: I0121 06:56:46.584068 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 21 06:56:46 crc kubenswrapper[4893]: I0121 06:56:46.584892 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 21 06:56:46 crc kubenswrapper[4893]: I0121 06:56:46.586319 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 21 06:56:48 crc kubenswrapper[4893]: I0121 06:56:48.528739 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:56:48 crc kubenswrapper[4893]: I0121 06:56:48.528936 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:56:48 crc kubenswrapper[4893]: I0121 06:56:48.529005 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:56:48 crc kubenswrapper[4893]: I0121 06:56:48.529045 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:56:48 crc kubenswrapper[4893]: E0121 06:56:48.529122 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:58:50.529065997 +0000 UTC m=+271.759411939 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:56:48 crc kubenswrapper[4893]: I0121 06:56:48.529218 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:56:48 crc kubenswrapper[4893]: I0121 06:56:48.530020 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:56:48 crc kubenswrapper[4893]: I0121 06:56:48.535990 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:56:48 crc kubenswrapper[4893]: I0121 06:56:48.536299 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:56:48 crc kubenswrapper[4893]: I0121 06:56:48.536660 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod 
\"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:56:48 crc kubenswrapper[4893]: I0121 06:56:48.706377 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:56:48 crc kubenswrapper[4893]: I0121 06:56:48.717198 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:56:48 crc kubenswrapper[4893]: I0121 06:56:48.728017 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:56:49 crc kubenswrapper[4893]: W0121 06:56:49.151287 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-1650beba1c3e1cf4c3519aba3e94430c395cfa9ba9bcda8de537b52d39793a62 WatchSource:0}: Error finding container 1650beba1c3e1cf4c3519aba3e94430c395cfa9ba9bcda8de537b52d39793a62: Status 404 returned error can't find the container with id 1650beba1c3e1cf4c3519aba3e94430c395cfa9ba9bcda8de537b52d39793a62 Jan 21 06:56:49 crc kubenswrapper[4893]: W0121 06:56:49.199213 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-b97d801d640fc8d19498f4ae4047e3b97ff487e890e2ab6e5ddf32025805cac2 WatchSource:0}: Error finding container b97d801d640fc8d19498f4ae4047e3b97ff487e890e2ab6e5ddf32025805cac2: Status 404 returned error can't find the container with id b97d801d640fc8d19498f4ae4047e3b97ff487e890e2ab6e5ddf32025805cac2 Jan 21 06:56:49 crc kubenswrapper[4893]: I0121 06:56:49.436563 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"fc9e366cd6312016500365be1e377f987f89ca9cb54fe67feda6df5b6593655b"} Jan 21 06:56:49 crc kubenswrapper[4893]: I0121 06:56:49.436621 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"cd8880b54490ae1538e74f732d04c71f97bd7fa9ed29794907890e7cd4832742"} Jan 21 06:56:49 crc kubenswrapper[4893]: I0121 06:56:49.439138 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"124b12311dccedbb50967ccec0d9268ac23d34b954637a6f9a53dc681e0a7e81"} Jan 21 06:56:49 crc kubenswrapper[4893]: I0121 06:56:49.439218 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"1650beba1c3e1cf4c3519aba3e94430c395cfa9ba9bcda8de537b52d39793a62"} Jan 21 06:56:49 crc kubenswrapper[4893]: I0121 06:56:49.439404 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:56:49 crc kubenswrapper[4893]: I0121 06:56:49.441225 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"7ce6ceea348e9fb733475e84e4567900c411d92a5689f1a50f3781f34cb2749f"} Jan 21 06:56:49 crc kubenswrapper[4893]: I0121 06:56:49.441260 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"b97d801d640fc8d19498f4ae4047e3b97ff487e890e2ab6e5ddf32025805cac2"} Jan 21 06:56:51 crc kubenswrapper[4893]: I0121 06:56:51.779096 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.880550 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.922458 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-jc8jx"] Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.923179 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-jc8jx" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.926431 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.926648 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.926851 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.927004 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.927195 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.927707 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.934309 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-sn2tj"] Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.935121 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-sn2tj" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.941348 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.941434 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.941487 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.941649 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.941803 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.942299 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.942975 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.943163 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.944683 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-gvjgx"] Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.945349 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvjgx" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.945459 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zggq2"] Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.946043 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zggq2" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.946982 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-2k4nh"] Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.947502 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-2k4nh" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.948986 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/458a2b28-04ce-4c9f-840b-9130dfd79140-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-jc8jx\" (UID: \"458a2b28-04ce-4c9f-840b-9130dfd79140\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jc8jx" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.949044 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/198d5d30-97a4-4cc4-85be-4d930e84c2c6-trusted-ca-bundle\") pod \"console-f9d7485db-2k4nh\" (UID: \"198d5d30-97a4-4cc4-85be-4d930e84c2c6\") " pod="openshift-console/console-f9d7485db-2k4nh" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.949072 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9c222d2f-cc26-4a57-a8e6-5a5e904b22f7-audit-policies\") pod \"apiserver-7bbb656c7d-gvjgx\" (UID: \"9c222d2f-cc26-4a57-a8e6-5a5e904b22f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvjgx" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.949097 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjm9g\" (UniqueName: \"kubernetes.io/projected/458a2b28-04ce-4c9f-840b-9130dfd79140-kube-api-access-tjm9g\") pod \"machine-api-operator-5694c8668f-jc8jx\" (UID: \"458a2b28-04ce-4c9f-840b-9130dfd79140\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jc8jx" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.949120 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cdd5c076-53ec-47bd-9cc3-df75e06b4942-encryption-config\") pod \"apiserver-76f77b778f-sn2tj\" (UID: \"cdd5c076-53ec-47bd-9cc3-df75e06b4942\") " pod="openshift-apiserver/apiserver-76f77b778f-sn2tj" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.949141 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6ctl\" (UniqueName: \"kubernetes.io/projected/cdd5c076-53ec-47bd-9cc3-df75e06b4942-kube-api-access-t6ctl\") pod \"apiserver-76f77b778f-sn2tj\" (UID: \"cdd5c076-53ec-47bd-9cc3-df75e06b4942\") " pod="openshift-apiserver/apiserver-76f77b778f-sn2tj" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.949163 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9c222d2f-cc26-4a57-a8e6-5a5e904b22f7-encryption-config\") pod \"apiserver-7bbb656c7d-gvjgx\" (UID: \"9c222d2f-cc26-4a57-a8e6-5a5e904b22f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvjgx" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.949185 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52dc45a6-094c-4330-b824-0e46bd30416b-config\") pod \"route-controller-manager-6576b87f9c-zggq2\" (UID: \"52dc45a6-094c-4330-b824-0e46bd30416b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zggq2" Jan 21 06:56:53 crc 
kubenswrapper[4893]: I0121 06:56:53.949223 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/198d5d30-97a4-4cc4-85be-4d930e84c2c6-console-oauth-config\") pod \"console-f9d7485db-2k4nh\" (UID: \"198d5d30-97a4-4cc4-85be-4d930e84c2c6\") " pod="openshift-console/console-f9d7485db-2k4nh" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.949250 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cdd5c076-53ec-47bd-9cc3-df75e06b4942-trusted-ca-bundle\") pod \"apiserver-76f77b778f-sn2tj\" (UID: \"cdd5c076-53ec-47bd-9cc3-df75e06b4942\") " pod="openshift-apiserver/apiserver-76f77b778f-sn2tj" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.949276 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9c222d2f-cc26-4a57-a8e6-5a5e904b22f7-etcd-client\") pod \"apiserver-7bbb656c7d-gvjgx\" (UID: \"9c222d2f-cc26-4a57-a8e6-5a5e904b22f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvjgx" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.949297 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/52dc45a6-094c-4330-b824-0e46bd30416b-client-ca\") pod \"route-controller-manager-6576b87f9c-zggq2\" (UID: \"52dc45a6-094c-4330-b824-0e46bd30416b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zggq2" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.949332 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cdd5c076-53ec-47bd-9cc3-df75e06b4942-etcd-client\") pod \"apiserver-76f77b778f-sn2tj\" (UID: \"cdd5c076-53ec-47bd-9cc3-df75e06b4942\") " pod="openshift-apiserver/apiserver-76f77b778f-sn2tj" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.949352 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c222d2f-cc26-4a57-a8e6-5a5e904b22f7-serving-cert\") pod \"apiserver-7bbb656c7d-gvjgx\" (UID: \"9c222d2f-cc26-4a57-a8e6-5a5e904b22f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvjgx" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.949378 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdd5c076-53ec-47bd-9cc3-df75e06b4942-config\") pod \"apiserver-76f77b778f-sn2tj\" (UID: \"cdd5c076-53ec-47bd-9cc3-df75e06b4942\") " pod="openshift-apiserver/apiserver-76f77b778f-sn2tj" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.949400 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/198d5d30-97a4-4cc4-85be-4d930e84c2c6-console-serving-cert\") pod \"console-f9d7485db-2k4nh\" (UID: \"198d5d30-97a4-4cc4-85be-4d930e84c2c6\") " pod="openshift-console/console-f9d7485db-2k4nh" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.949432 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/458a2b28-04ce-4c9f-840b-9130dfd79140-images\") pod \"machine-api-operator-5694c8668f-jc8jx\" (UID: \"458a2b28-04ce-4c9f-840b-9130dfd79140\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jc8jx" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.949458 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9c222d2f-cc26-4a57-a8e6-5a5e904b22f7-audit-dir\") pod \"apiserver-7bbb656c7d-gvjgx\" (UID: \"9c222d2f-cc26-4a57-a8e6-5a5e904b22f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvjgx" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.949479 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r749q\" (UniqueName: \"kubernetes.io/projected/52dc45a6-094c-4330-b824-0e46bd30416b-kube-api-access-r749q\") pod \"route-controller-manager-6576b87f9c-zggq2\" (UID: \"52dc45a6-094c-4330-b824-0e46bd30416b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zggq2" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.949499 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/198d5d30-97a4-4cc4-85be-4d930e84c2c6-console-config\") pod \"console-f9d7485db-2k4nh\" (UID: \"198d5d30-97a4-4cc4-85be-4d930e84c2c6\") " pod="openshift-console/console-f9d7485db-2k4nh" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.949539 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bznvt\" (UniqueName: \"kubernetes.io/projected/198d5d30-97a4-4cc4-85be-4d930e84c2c6-kube-api-access-bznvt\") pod \"console-f9d7485db-2k4nh\" (UID: \"198d5d30-97a4-4cc4-85be-4d930e84c2c6\") " pod="openshift-console/console-f9d7485db-2k4nh" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.949576 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cdd5c076-53ec-47bd-9cc3-df75e06b4942-node-pullsecrets\") pod \"apiserver-76f77b778f-sn2tj\" (UID: \"cdd5c076-53ec-47bd-9cc3-df75e06b4942\") " pod="openshift-apiserver/apiserver-76f77b778f-sn2tj" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.949599 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/198d5d30-97a4-4cc4-85be-4d930e84c2c6-oauth-serving-cert\") pod \"console-f9d7485db-2k4nh\" (UID: \"198d5d30-97a4-4cc4-85be-4d930e84c2c6\") " pod="openshift-console/console-f9d7485db-2k4nh" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.949637 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cdd5c076-53ec-47bd-9cc3-df75e06b4942-etcd-serving-ca\") pod \"apiserver-76f77b778f-sn2tj\" (UID: \"cdd5c076-53ec-47bd-9cc3-df75e06b4942\") " pod="openshift-apiserver/apiserver-76f77b778f-sn2tj" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.949746 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cdd5c076-53ec-47bd-9cc3-df75e06b4942-serving-cert\") pod \"apiserver-76f77b778f-sn2tj\" (UID: \"cdd5c076-53ec-47bd-9cc3-df75e06b4942\") " 
pod="openshift-apiserver/apiserver-76f77b778f-sn2tj" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.949772 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c222d2f-cc26-4a57-a8e6-5a5e904b22f7-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-gvjgx\" (UID: \"9c222d2f-cc26-4a57-a8e6-5a5e904b22f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvjgx" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.949801 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/cdd5c076-53ec-47bd-9cc3-df75e06b4942-image-import-ca\") pod \"apiserver-76f77b778f-sn2tj\" (UID: \"cdd5c076-53ec-47bd-9cc3-df75e06b4942\") " pod="openshift-apiserver/apiserver-76f77b778f-sn2tj" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.949824 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmcfx\" (UniqueName: \"kubernetes.io/projected/9c222d2f-cc26-4a57-a8e6-5a5e904b22f7-kube-api-access-cmcfx\") pod \"apiserver-7bbb656c7d-gvjgx\" (UID: \"9c222d2f-cc26-4a57-a8e6-5a5e904b22f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvjgx" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.949851 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/458a2b28-04ce-4c9f-840b-9130dfd79140-config\") pod \"machine-api-operator-5694c8668f-jc8jx\" (UID: \"458a2b28-04ce-4c9f-840b-9130dfd79140\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jc8jx" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.949878 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52dc45a6-094c-4330-b824-0e46bd30416b-serving-cert\") pod \"route-controller-manager-6576b87f9c-zggq2\" (UID: \"52dc45a6-094c-4330-b824-0e46bd30416b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zggq2" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.949900 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/198d5d30-97a4-4cc4-85be-4d930e84c2c6-service-ca\") pod \"console-f9d7485db-2k4nh\" (UID: \"198d5d30-97a4-4cc4-85be-4d930e84c2c6\") " pod="openshift-console/console-f9d7485db-2k4nh" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.949923 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/cdd5c076-53ec-47bd-9cc3-df75e06b4942-audit\") pod \"apiserver-76f77b778f-sn2tj\" (UID: \"cdd5c076-53ec-47bd-9cc3-df75e06b4942\") " pod="openshift-apiserver/apiserver-76f77b778f-sn2tj" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.949964 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cdd5c076-53ec-47bd-9cc3-df75e06b4942-audit-dir\") pod \"apiserver-76f77b778f-sn2tj\" (UID: \"cdd5c076-53ec-47bd-9cc3-df75e06b4942\") " pod="openshift-apiserver/apiserver-76f77b778f-sn2tj" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.949990 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9c222d2f-cc26-4a57-a8e6-5a5e904b22f7-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-gvjgx\" (UID: \"9c222d2f-cc26-4a57-a8e6-5a5e904b22f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvjgx" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.954711 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.955063 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.955098 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7vwdg"] Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.955716 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7vwdg" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.956063 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mqpbh"] Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.956712 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mqpbh" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.960967 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-b57tt"] Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.961586 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcjxb"] Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.961827 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.961896 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zgm8x"] Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.962269 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-b57tt" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.962329 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcjxb" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.962587 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.962739 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.962753 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.962993 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.962294 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zgm8x" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.963456 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.963557 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.963592 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.963706 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.963793 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.963849 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.963955 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-lszzb"] Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.964451 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.964768 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-6925t"] Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.964806 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.965101 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-lszzb" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.965127 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-6925t" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.965575 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.966091 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.966251 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hmf6q"] Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.966382 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.966485 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.966561 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.966825 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.966989 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hmf6q" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.967071 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.967215 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.967221 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.967315 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.967419 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.967474 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.967568 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.967609 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.967625 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.967662 4893 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.967806 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.971867 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.971930 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.972735 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.973122 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n5fjj"] Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.973978 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x4pxc"] Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.974267 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n5fjj" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.974454 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-rvfqv"] Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.975066 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-rvfqv" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.975218 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x4pxc" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.975834 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.976092 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.977213 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.977482 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 21 06:56:53 crc kubenswrapper[4893]: I0121 06:56:53.999505 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.001767 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.002060 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-225db"] Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.006860 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.007748 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.008282 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-225db" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.008322 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.008476 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.022649 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.023003 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.023142 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.023357 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.027201 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482965-hgsf2"] Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.027653 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.027815 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482965-hgsf2" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.029426 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-nz9cw"] Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.030189 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-nz9cw" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.030270 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.030539 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.030638 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.030859 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.030972 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.031218 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.031415 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.031826 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.032323 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.035008 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.035167 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.035214 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.035712 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.035777 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.035975 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.036584 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.036762 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.036912 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" 
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.038339 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.038484 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.045536 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.045921 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.047217 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.047322 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.050626 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9c222d2f-cc26-4a57-a8e6-5a5e904b22f7-audit-policies\") pod \"apiserver-7bbb656c7d-gvjgx\" (UID: \"9c222d2f-cc26-4a57-a8e6-5a5e904b22f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvjgx" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.050665 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjm9g\" (UniqueName: \"kubernetes.io/projected/458a2b28-04ce-4c9f-840b-9130dfd79140-kube-api-access-tjm9g\") pod \"machine-api-operator-5694c8668f-jc8jx\" (UID: \"458a2b28-04ce-4c9f-840b-9130dfd79140\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jc8jx" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.050701 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cdd5c076-53ec-47bd-9cc3-df75e06b4942-encryption-config\") pod \"apiserver-76f77b778f-sn2tj\" (UID: \"cdd5c076-53ec-47bd-9cc3-df75e06b4942\") " pod="openshift-apiserver/apiserver-76f77b778f-sn2tj" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.050716 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6ctl\" (UniqueName: \"kubernetes.io/projected/cdd5c076-53ec-47bd-9cc3-df75e06b4942-kube-api-access-t6ctl\") pod \"apiserver-76f77b778f-sn2tj\" (UID: \"cdd5c076-53ec-47bd-9cc3-df75e06b4942\") " pod="openshift-apiserver/apiserver-76f77b778f-sn2tj" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.050732 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9c222d2f-cc26-4a57-a8e6-5a5e904b22f7-encryption-config\") pod \"apiserver-7bbb656c7d-gvjgx\" (UID: \"9c222d2f-cc26-4a57-a8e6-5a5e904b22f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvjgx" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.050751 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52dc45a6-094c-4330-b824-0e46bd30416b-config\") pod \"route-controller-manager-6576b87f9c-zggq2\" (UID: \"52dc45a6-094c-4330-b824-0e46bd30416b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zggq2" Jan 21 
06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.050771 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/198d5d30-97a4-4cc4-85be-4d930e84c2c6-console-oauth-config\") pod \"console-f9d7485db-2k4nh\" (UID: \"198d5d30-97a4-4cc4-85be-4d930e84c2c6\") " pod="openshift-console/console-f9d7485db-2k4nh" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.050786 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cdd5c076-53ec-47bd-9cc3-df75e06b4942-trusted-ca-bundle\") pod \"apiserver-76f77b778f-sn2tj\" (UID: \"cdd5c076-53ec-47bd-9cc3-df75e06b4942\") " pod="openshift-apiserver/apiserver-76f77b778f-sn2tj" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.050804 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9c222d2f-cc26-4a57-a8e6-5a5e904b22f7-etcd-client\") pod \"apiserver-7bbb656c7d-gvjgx\" (UID: \"9c222d2f-cc26-4a57-a8e6-5a5e904b22f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvjgx" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.050819 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/52dc45a6-094c-4330-b824-0e46bd30416b-client-ca\") pod \"route-controller-manager-6576b87f9c-zggq2\" (UID: \"52dc45a6-094c-4330-b824-0e46bd30416b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zggq2" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.050836 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cdd5c076-53ec-47bd-9cc3-df75e06b4942-etcd-client\") pod \"apiserver-76f77b778f-sn2tj\" (UID: \"cdd5c076-53ec-47bd-9cc3-df75e06b4942\") " pod="openshift-apiserver/apiserver-76f77b778f-sn2tj" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.050851 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c222d2f-cc26-4a57-a8e6-5a5e904b22f7-serving-cert\") pod \"apiserver-7bbb656c7d-gvjgx\" (UID: \"9c222d2f-cc26-4a57-a8e6-5a5e904b22f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvjgx" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.050870 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdd5c076-53ec-47bd-9cc3-df75e06b4942-config\") pod \"apiserver-76f77b778f-sn2tj\" (UID: \"cdd5c076-53ec-47bd-9cc3-df75e06b4942\") " pod="openshift-apiserver/apiserver-76f77b778f-sn2tj" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.050886 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/198d5d30-97a4-4cc4-85be-4d930e84c2c6-console-serving-cert\") pod \"console-f9d7485db-2k4nh\" (UID: \"198d5d30-97a4-4cc4-85be-4d930e84c2c6\") " pod="openshift-console/console-f9d7485db-2k4nh" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.050902 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/458a2b28-04ce-4c9f-840b-9130dfd79140-images\") pod \"machine-api-operator-5694c8668f-jc8jx\" (UID: \"458a2b28-04ce-4c9f-840b-9130dfd79140\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-jc8jx" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.050918 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9c222d2f-cc26-4a57-a8e6-5a5e904b22f7-audit-dir\") pod \"apiserver-7bbb656c7d-gvjgx\" (UID: \"9c222d2f-cc26-4a57-a8e6-5a5e904b22f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvjgx" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.050935 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r749q\" (UniqueName: \"kubernetes.io/projected/52dc45a6-094c-4330-b824-0e46bd30416b-kube-api-access-r749q\") pod \"route-controller-manager-6576b87f9c-zggq2\" (UID: \"52dc45a6-094c-4330-b824-0e46bd30416b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zggq2" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.051025 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/198d5d30-97a4-4cc4-85be-4d930e84c2c6-console-config\") pod \"console-f9d7485db-2k4nh\" (UID: \"198d5d30-97a4-4cc4-85be-4d930e84c2c6\") " pod="openshift-console/console-f9d7485db-2k4nh" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.051386 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9c222d2f-cc26-4a57-a8e6-5a5e904b22f7-audit-policies\") pod \"apiserver-7bbb656c7d-gvjgx\" (UID: \"9c222d2f-cc26-4a57-a8e6-5a5e904b22f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvjgx" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.052237 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52dc45a6-094c-4330-b824-0e46bd30416b-config\") pod \"route-controller-manager-6576b87f9c-zggq2\" (UID: \"52dc45a6-094c-4330-b824-0e46bd30416b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zggq2" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.053936 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bznvt\" (UniqueName: \"kubernetes.io/projected/198d5d30-97a4-4cc4-85be-4d930e84c2c6-kube-api-access-bznvt\") pod \"console-f9d7485db-2k4nh\" (UID: \"198d5d30-97a4-4cc4-85be-4d930e84c2c6\") " pod="openshift-console/console-f9d7485db-2k4nh" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.054143 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cdd5c076-53ec-47bd-9cc3-df75e06b4942-node-pullsecrets\") pod \"apiserver-76f77b778f-sn2tj\" (UID: \"cdd5c076-53ec-47bd-9cc3-df75e06b4942\") " pod="openshift-apiserver/apiserver-76f77b778f-sn2tj" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.054166 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/198d5d30-97a4-4cc4-85be-4d930e84c2c6-oauth-serving-cert\") pod \"console-f9d7485db-2k4nh\" (UID: \"198d5d30-97a4-4cc4-85be-4d930e84c2c6\") " pod="openshift-console/console-f9d7485db-2k4nh" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.054224 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cdd5c076-53ec-47bd-9cc3-df75e06b4942-node-pullsecrets\") 
pod \"apiserver-76f77b778f-sn2tj\" (UID: \"cdd5c076-53ec-47bd-9cc3-df75e06b4942\") " pod="openshift-apiserver/apiserver-76f77b778f-sn2tj" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.055017 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/198d5d30-97a4-4cc4-85be-4d930e84c2c6-oauth-serving-cert\") pod \"console-f9d7485db-2k4nh\" (UID: \"198d5d30-97a4-4cc4-85be-4d930e84c2c6\") " pod="openshift-console/console-f9d7485db-2k4nh" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.055116 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cdd5c076-53ec-47bd-9cc3-df75e06b4942-etcd-serving-ca\") pod \"apiserver-76f77b778f-sn2tj\" (UID: \"cdd5c076-53ec-47bd-9cc3-df75e06b4942\") " pod="openshift-apiserver/apiserver-76f77b778f-sn2tj" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.055140 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cdd5c076-53ec-47bd-9cc3-df75e06b4942-serving-cert\") pod \"apiserver-76f77b778f-sn2tj\" (UID: \"cdd5c076-53ec-47bd-9cc3-df75e06b4942\") " pod="openshift-apiserver/apiserver-76f77b778f-sn2tj" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.055183 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c222d2f-cc26-4a57-a8e6-5a5e904b22f7-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-gvjgx\" (UID: \"9c222d2f-cc26-4a57-a8e6-5a5e904b22f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvjgx" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.055202 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/cdd5c076-53ec-47bd-9cc3-df75e06b4942-image-import-ca\") pod \"apiserver-76f77b778f-sn2tj\" (UID: \"cdd5c076-53ec-47bd-9cc3-df75e06b4942\") " pod="openshift-apiserver/apiserver-76f77b778f-sn2tj" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.055238 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmcfx\" (UniqueName: \"kubernetes.io/projected/9c222d2f-cc26-4a57-a8e6-5a5e904b22f7-kube-api-access-cmcfx\") pod \"apiserver-7bbb656c7d-gvjgx\" (UID: \"9c222d2f-cc26-4a57-a8e6-5a5e904b22f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvjgx" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.055298 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/458a2b28-04ce-4c9f-840b-9130dfd79140-config\") pod \"machine-api-operator-5694c8668f-jc8jx\" (UID: \"458a2b28-04ce-4c9f-840b-9130dfd79140\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jc8jx" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.055320 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52dc45a6-094c-4330-b824-0e46bd30416b-serving-cert\") pod \"route-controller-manager-6576b87f9c-zggq2\" (UID: \"52dc45a6-094c-4330-b824-0e46bd30416b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zggq2" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.055339 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/198d5d30-97a4-4cc4-85be-4d930e84c2c6-service-ca\") pod \"console-f9d7485db-2k4nh\" (UID: \"198d5d30-97a4-4cc4-85be-4d930e84c2c6\") " pod="openshift-console/console-f9d7485db-2k4nh" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.055392 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/cdd5c076-53ec-47bd-9cc3-df75e06b4942-audit\") pod \"apiserver-76f77b778f-sn2tj\" (UID: \"cdd5c076-53ec-47bd-9cc3-df75e06b4942\") " pod="openshift-apiserver/apiserver-76f77b778f-sn2tj" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.055413 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cdd5c076-53ec-47bd-9cc3-df75e06b4942-audit-dir\") pod \"apiserver-76f77b778f-sn2tj\" (UID: \"cdd5c076-53ec-47bd-9cc3-df75e06b4942\") " pod="openshift-apiserver/apiserver-76f77b778f-sn2tj" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.055434 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9c222d2f-cc26-4a57-a8e6-5a5e904b22f7-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-gvjgx\" (UID: \"9c222d2f-cc26-4a57-a8e6-5a5e904b22f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvjgx" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.055498 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/458a2b28-04ce-4c9f-840b-9130dfd79140-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-jc8jx\" (UID: \"458a2b28-04ce-4c9f-840b-9130dfd79140\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jc8jx" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.055572 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/198d5d30-97a4-4cc4-85be-4d930e84c2c6-trusted-ca-bundle\") pod \"console-f9d7485db-2k4nh\" (UID: \"198d5d30-97a4-4cc4-85be-4d930e84c2c6\") " pod="openshift-console/console-f9d7485db-2k4nh" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.056401 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/cdd5c076-53ec-47bd-9cc3-df75e06b4942-image-import-ca\") pod \"apiserver-76f77b778f-sn2tj\" (UID: \"cdd5c076-53ec-47bd-9cc3-df75e06b4942\") " pod="openshift-apiserver/apiserver-76f77b778f-sn2tj" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.057206 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/198d5d30-97a4-4cc4-85be-4d930e84c2c6-service-ca\") pod \"console-f9d7485db-2k4nh\" (UID: \"198d5d30-97a4-4cc4-85be-4d930e84c2c6\") " pod="openshift-console/console-f9d7485db-2k4nh" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.058390 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.058467 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cdd5c076-53ec-47bd-9cc3-df75e06b4942-encryption-config\") pod \"apiserver-76f77b778f-sn2tj\" (UID: \"cdd5c076-53ec-47bd-9cc3-df75e06b4942\") " pod="openshift-apiserver/apiserver-76f77b778f-sn2tj" Jan 21 06:56:54 
crc kubenswrapper[4893]: I0121 06:56:54.058726 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9c222d2f-cc26-4a57-a8e6-5a5e904b22f7-encryption-config\") pod \"apiserver-7bbb656c7d-gvjgx\" (UID: \"9c222d2f-cc26-4a57-a8e6-5a5e904b22f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvjgx" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.058765 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-q7qn6"] Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.058804 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.059126 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.059320 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.059453 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.059572 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.059788 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.059915 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.060074 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.060179 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.060217 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.060281 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.060424 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.060614 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.060712 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/198d5d30-97a4-4cc4-85be-4d930e84c2c6-console-oauth-config\") pod \"console-f9d7485db-2k4nh\" (UID: \"198d5d30-97a4-4cc4-85be-4d930e84c2c6\") " pod="openshift-console/console-f9d7485db-2k4nh" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.060764 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.063367 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cdd5c076-53ec-47bd-9cc3-df75e06b4942-etcd-serving-ca\") pod \"apiserver-76f77b778f-sn2tj\" (UID: \"cdd5c076-53ec-47bd-9cc3-df75e06b4942\") " pod="openshift-apiserver/apiserver-76f77b778f-sn2tj" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.064768 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-bcdvv"] Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.065465 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-74trw"] Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.065504 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bcdvv" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.069161 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-624tj"] Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.069365 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c222d2f-cc26-4a57-a8e6-5a5e904b22f7-serving-cert\") pod \"apiserver-7bbb656c7d-gvjgx\" (UID: \"9c222d2f-cc26-4a57-a8e6-5a5e904b22f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvjgx" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.069767 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-624tj" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.070055 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-74trw" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.070247 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/458a2b28-04ce-4c9f-840b-9130dfd79140-images\") pod \"machine-api-operator-5694c8668f-jc8jx\" (UID: \"458a2b28-04ce-4c9f-840b-9130dfd79140\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jc8jx" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.071436 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/198d5d30-97a4-4cc4-85be-4d930e84c2c6-console-config\") pod \"console-f9d7485db-2k4nh\" (UID: \"198d5d30-97a4-4cc4-85be-4d930e84c2c6\") " pod="openshift-console/console-f9d7485db-2k4nh" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.071580 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/52dc45a6-094c-4330-b824-0e46bd30416b-client-ca\") pod \"route-controller-manager-6576b87f9c-zggq2\" (UID: \"52dc45a6-094c-4330-b824-0e46bd30416b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zggq2" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.071757 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.072489 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c222d2f-cc26-4a57-a8e6-5a5e904b22f7-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-gvjgx\" (UID: \"9c222d2f-cc26-4a57-a8e6-5a5e904b22f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvjgx" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.073472 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cdd5c076-53ec-47bd-9cc3-df75e06b4942-serving-cert\") pod \"apiserver-76f77b778f-sn2tj\" (UID: \"cdd5c076-53ec-47bd-9cc3-df75e06b4942\") " pod="openshift-apiserver/apiserver-76f77b778f-sn2tj" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.073982 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9c222d2f-cc26-4a57-a8e6-5a5e904b22f7-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-gvjgx\" (UID: \"9c222d2f-cc26-4a57-a8e6-5a5e904b22f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvjgx" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.083874 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cdd5c076-53ec-47bd-9cc3-df75e06b4942-audit-dir\") pod \"apiserver-76f77b778f-sn2tj\" (UID: \"cdd5c076-53ec-47bd-9cc3-df75e06b4942\") " pod="openshift-apiserver/apiserver-76f77b778f-sn2tj" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.085552 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9c222d2f-cc26-4a57-a8e6-5a5e904b22f7-audit-dir\") pod \"apiserver-7bbb656c7d-gvjgx\" (UID: \"9c222d2f-cc26-4a57-a8e6-5a5e904b22f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvjgx" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.086601 4893 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/458a2b28-04ce-4c9f-840b-9130dfd79140-config\") pod \"machine-api-operator-5694c8668f-jc8jx\" (UID: \"458a2b28-04ce-4c9f-840b-9130dfd79140\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jc8jx" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.086754 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-r9ps2"] Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.087196 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdd5c076-53ec-47bd-9cc3-df75e06b4942-config\") pod \"apiserver-76f77b778f-sn2tj\" (UID: \"cdd5c076-53ec-47bd-9cc3-df75e06b4942\") " pod="openshift-apiserver/apiserver-76f77b778f-sn2tj" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.087223 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cdd5c076-53ec-47bd-9cc3-df75e06b4942-etcd-client\") pod \"apiserver-76f77b778f-sn2tj\" (UID: \"cdd5c076-53ec-47bd-9cc3-df75e06b4942\") " pod="openshift-apiserver/apiserver-76f77b778f-sn2tj" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.087686 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r9ps2" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.086795 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9c222d2f-cc26-4a57-a8e6-5a5e904b22f7-etcd-client\") pod \"apiserver-7bbb656c7d-gvjgx\" (UID: \"9c222d2f-cc26-4a57-a8e6-5a5e904b22f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvjgx" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.087934 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-d7dhc"] Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.088791 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-d7dhc" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.088885 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/cdd5c076-53ec-47bd-9cc3-df75e06b4942-audit\") pod \"apiserver-76f77b778f-sn2tj\" (UID: \"cdd5c076-53ec-47bd-9cc3-df75e06b4942\") " pod="openshift-apiserver/apiserver-76f77b778f-sn2tj" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.089815 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/458a2b28-04ce-4c9f-840b-9130dfd79140-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-jc8jx\" (UID: \"458a2b28-04ce-4c9f-840b-9130dfd79140\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jc8jx" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.089883 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cdd5c076-53ec-47bd-9cc3-df75e06b4942-trusted-ca-bundle\") pod \"apiserver-76f77b778f-sn2tj\" (UID: \"cdd5c076-53ec-47bd-9cc3-df75e06b4942\") " pod="openshift-apiserver/apiserver-76f77b778f-sn2tj" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.089928 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zpm9z"] Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.090155 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/198d5d30-97a4-4cc4-85be-4d930e84c2c6-trusted-ca-bundle\") pod \"console-f9d7485db-2k4nh\" (UID: \"198d5d30-97a4-4cc4-85be-4d930e84c2c6\") " pod="openshift-console/console-f9d7485db-2k4nh" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.090379 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zpm9z" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.098533 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52dc45a6-094c-4330-b824-0e46bd30416b-serving-cert\") pod \"route-controller-manager-6576b87f9c-zggq2\" (UID: \"52dc45a6-094c-4330-b824-0e46bd30416b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zggq2" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.102760 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6zl8m"] Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.122240 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-t7bmk"] Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.123348 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-t7bmk" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.123865 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6zl8m" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.127659 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r749q\" (UniqueName: \"kubernetes.io/projected/52dc45a6-094c-4330-b824-0e46bd30416b-kube-api-access-r749q\") pod \"route-controller-manager-6576b87f9c-zggq2\" (UID: \"52dc45a6-094c-4330-b824-0e46bd30416b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zggq2" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.130785 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-p9bnb"] Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.154486 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bznvt\" (UniqueName: \"kubernetes.io/projected/198d5d30-97a4-4cc4-85be-4d930e84c2c6-kube-api-access-bznvt\") pod \"console-f9d7485db-2k4nh\" (UID: \"198d5d30-97a4-4cc4-85be-4d930e84c2c6\") " pod="openshift-console/console-f9d7485db-2k4nh" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.157926 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjm9g\" (UniqueName: \"kubernetes.io/projected/458a2b28-04ce-4c9f-840b-9130dfd79140-kube-api-access-tjm9g\") pod \"machine-api-operator-5694c8668f-jc8jx\" (UID: \"458a2b28-04ce-4c9f-840b-9130dfd79140\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jc8jx" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.245884 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f43448f-6d99-4afb-8ba8-32cc10598f76-service-ca-bundle\") pod \"authentication-operator-69f744f599-6925t\" (UID: \"2f43448f-6d99-4afb-8ba8-32cc10598f76\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6925t" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.245980 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql6br\" (UniqueName: \"kubernetes.io/projected/3ab99a27-e16e-4f7b-a745-f478dd109a5c-kube-api-access-ql6br\") pod \"etcd-operator-b45778765-225db\" (UID: \"3ab99a27-e16e-4f7b-a745-f478dd109a5c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-225db" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.246082 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac52ea93-e5ed-47a3-86ad-cf8f8146ca3f-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-mqpbh\" (UID: \"ac52ea93-e5ed-47a3-86ad-cf8f8146ca3f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mqpbh" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.246125 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ff046ea4-caba-480a-8242-eb099a1f136e-config-volume\") pod \"collect-profiles-29482965-hgsf2\" (UID: \"ff046ea4-caba-480a-8242-eb099a1f136e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482965-hgsf2" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.246161 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/67a6e98a-88c0-4855-936c-09b7c6d33b40-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hmf6q\" (UID: \"67a6e98a-88c0-4855-936c-09b7c6d33b40\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hmf6q" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.246254 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b2fc626-d06a-4f0c-ad8c-931c6019a06a-serving-cert\") pod \"openshift-config-operator-7777fb866f-nz9cw\" (UID: \"7b2fc626-d06a-4f0c-ad8c-931c6019a06a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-nz9cw" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.246292 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ff046ea4-caba-480a-8242-eb099a1f136e-secret-volume\") pod \"collect-profiles-29482965-hgsf2\" (UID: \"ff046ea4-caba-480a-8242-eb099a1f136e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482965-hgsf2" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.246307 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/198d5d30-97a4-4cc4-85be-4d930e84c2c6-console-serving-cert\") pod \"console-f9d7485db-2k4nh\" (UID: \"198d5d30-97a4-4cc4-85be-4d930e84c2c6\") " pod="openshift-console/console-f9d7485db-2k4nh" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.246386 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c002ad61-0d90-47ff-8bc5-58826a3189d4-config\") pod \"kube-controller-manager-operator-78b949d7b-zgm8x\" (UID: \"c002ad61-0d90-47ff-8bc5-58826a3189d4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zgm8x" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.246424 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f43448f-6d99-4afb-8ba8-32cc10598f76-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-6925t\" (UID: \"2f43448f-6d99-4afb-8ba8-32cc10598f76\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6925t" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.246628 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-p9bnb" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.246656 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ac52ea93-e5ed-47a3-86ad-cf8f8146ca3f-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-mqpbh\" (UID: \"ac52ea93-e5ed-47a3-86ad-cf8f8146ca3f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mqpbh" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.246825 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/7b2fc626-d06a-4f0c-ad8c-931c6019a06a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-nz9cw\" (UID: \"7b2fc626-d06a-4f0c-ad8c-931c6019a06a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-nz9cw" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.246881 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/3ab99a27-e16e-4f7b-a745-f478dd109a5c-etcd-ca\") pod \"etcd-operator-b45778765-225db\" (UID: \"3ab99a27-e16e-4f7b-a745-f478dd109a5c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-225db" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.246926 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-przxs\" (UniqueName: \"kubernetes.io/projected/8149e5c6-d45e-408f-9e4e-4ead349e063d-kube-api-access-przxs\") pod \"console-operator-58897d9998-b57tt\" (UID: \"8149e5c6-d45e-408f-9e4e-4ead349e063d\") " pod="openshift-console-operator/console-operator-58897d9998-b57tt" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.246966 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c002ad61-0d90-47ff-8bc5-58826a3189d4-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-zgm8x\" (UID: \"c002ad61-0d90-47ff-8bc5-58826a3189d4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zgm8x" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.247045 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/be4fc165-16c3-442f-b61d-bec9bbeb9b0f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-lszzb\" (UID: \"be4fc165-16c3-442f-b61d-bec9bbeb9b0f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-lszzb" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.247928 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-jc8jx" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.248303 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/13709215-5a7f-4c5d-aa52-749e06e40842-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-hcjxb\" (UID: \"13709215-5a7f-4c5d-aa52-749e06e40842\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcjxb" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.248368 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67a6e98a-88c0-4855-936c-09b7c6d33b40-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hmf6q\" (UID: \"67a6e98a-88c0-4855-936c-09b7c6d33b40\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hmf6q" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.249350 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qldvk\" (UniqueName: \"kubernetes.io/projected/ff046ea4-caba-480a-8242-eb099a1f136e-kube-api-access-qldvk\") pod \"collect-profiles-29482965-hgsf2\" (UID: \"ff046ea4-caba-480a-8242-eb099a1f136e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482965-hgsf2" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.249413 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c002ad61-0d90-47ff-8bc5-58826a3189d4-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-zgm8x\" (UID: \"c002ad61-0d90-47ff-8bc5-58826a3189d4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zgm8x" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.249475 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04581422-2f1a-4d3c-9e82-8f80435f6ece-config\") pod \"openshift-apiserver-operator-796bbdcf4f-x4pxc\" (UID: \"04581422-2f1a-4d3c-9e82-8f80435f6ece\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x4pxc" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.251583 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvtd4\" (UniqueName: \"kubernetes.io/projected/04581422-2f1a-4d3c-9e82-8f80435f6ece-kube-api-access-nvtd4\") pod \"openshift-apiserver-operator-796bbdcf4f-x4pxc\" (UID: \"04581422-2f1a-4d3c-9e82-8f80435f6ece\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x4pxc" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.251651 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab99a27-e16e-4f7b-a745-f478dd109a5c-serving-cert\") pod \"etcd-operator-b45778765-225db\" (UID: \"3ab99a27-e16e-4f7b-a745-f478dd109a5c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-225db" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.251697 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ccb5181f-bb5a-4a54-8ab1-9201addd4861-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-7vwdg\" (UID: \"ccb5181f-bb5a-4a54-8ab1-9201addd4861\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7vwdg" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.251778 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3ab99a27-e16e-4f7b-a745-f478dd109a5c-etcd-service-ca\") pod \"etcd-operator-b45778765-225db\" (UID: \"3ab99a27-e16e-4f7b-a745-f478dd109a5c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-225db" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.251873 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/13709215-5a7f-4c5d-aa52-749e06e40842-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-hcjxb\" (UID: \"13709215-5a7f-4c5d-aa52-749e06e40842\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcjxb" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.251927 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2qs6\" (UniqueName: \"kubernetes.io/projected/5c435717-9f91-427d-ae9c-60db11c38d34-kube-api-access-x2qs6\") pod \"downloads-7954f5f757-rvfqv\" (UID: \"5c435717-9f91-427d-ae9c-60db11c38d34\") " pod="openshift-console/downloads-7954f5f757-rvfqv" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.252017 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch7cj\" (UniqueName: \"kubernetes.io/projected/7b2fc626-d06a-4f0c-ad8c-931c6019a06a-kube-api-access-ch7cj\") pod \"openshift-config-operator-7777fb866f-nz9cw\" (UID: \"7b2fc626-d06a-4f0c-ad8c-931c6019a06a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-nz9cw" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.255174 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.257187 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s54sd\" (UniqueName: \"kubernetes.io/projected/be4fc165-16c3-442f-b61d-bec9bbeb9b0f-kube-api-access-s54sd\") pod \"multus-admission-controller-857f4d67dd-lszzb\" (UID: \"be4fc165-16c3-442f-b61d-bec9bbeb9b0f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-lszzb" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.257281 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f43448f-6d99-4afb-8ba8-32cc10598f76-serving-cert\") pod \"authentication-operator-69f744f599-6925t\" (UID: \"2f43448f-6d99-4afb-8ba8-32cc10598f76\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6925t" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.257807 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/67a6e98a-88c0-4855-936c-09b7c6d33b40-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hmf6q\" (UID: \"67a6e98a-88c0-4855-936c-09b7c6d33b40\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hmf6q" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.257831 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ab99a27-e16e-4f7b-a745-f478dd109a5c-config\") pod \"etcd-operator-b45778765-225db\" (UID: \"3ab99a27-e16e-4f7b-a745-f478dd109a5c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-225db" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.257874 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04581422-2f1a-4d3c-9e82-8f80435f6ece-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-x4pxc\" (UID: \"04581422-2f1a-4d3c-9e82-8f80435f6ece\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x4pxc" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.257899 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3ab99a27-e16e-4f7b-a745-f478dd109a5c-etcd-client\") pod \"etcd-operator-b45778765-225db\" (UID: \"3ab99a27-e16e-4f7b-a745-f478dd109a5c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-225db" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.257975 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8149e5c6-d45e-408f-9e4e-4ead349e063d-trusted-ca\") pod \"console-operator-58897d9998-b57tt\" (UID: \"8149e5c6-d45e-408f-9e4e-4ead349e063d\") " pod="openshift-console-operator/console-operator-58897d9998-b57tt" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.257992 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnmwk\" (UniqueName: \"kubernetes.io/projected/ae7cbcf6-fff8-49ac-9b9c-25cb136cbdd7-kube-api-access-dnmwk\") pod \"cluster-samples-operator-665b6dd947-n5fjj\" (UID: \"ae7cbcf6-fff8-49ac-9b9c-25cb136cbdd7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n5fjj" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.258036 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8149e5c6-d45e-408f-9e4e-4ead349e063d-serving-cert\") pod \"console-operator-58897d9998-b57tt\" (UID: \"8149e5c6-d45e-408f-9e4e-4ead349e063d\") " pod="openshift-console-operator/console-operator-58897d9998-b57tt" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.258053 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ae7cbcf6-fff8-49ac-9b9c-25cb136cbdd7-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-n5fjj\" (UID: \"ae7cbcf6-fff8-49ac-9b9c-25cb136cbdd7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n5fjj" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.258074 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f43448f-6d99-4afb-8ba8-32cc10598f76-config\") pod \"authentication-operator-69f744f599-6925t\" (UID: \"2f43448f-6d99-4afb-8ba8-32cc10598f76\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-6925t" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.258113 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67t2n\" (UniqueName: \"kubernetes.io/projected/2f43448f-6d99-4afb-8ba8-32cc10598f76-kube-api-access-67t2n\") pod \"authentication-operator-69f744f599-6925t\" (UID: \"2f43448f-6d99-4afb-8ba8-32cc10598f76\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6925t" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.258135 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ccb5181f-bb5a-4a54-8ab1-9201addd4861-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-7vwdg\" (UID: \"ccb5181f-bb5a-4a54-8ab1-9201addd4861\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7vwdg" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.258164 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2dxv\" (UniqueName: \"kubernetes.io/projected/ccb5181f-bb5a-4a54-8ab1-9201addd4861-kube-api-access-k2dxv\") pod \"kube-storage-version-migrator-operator-b67b599dd-7vwdg\" (UID: \"ccb5181f-bb5a-4a54-8ab1-9201addd4861\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7vwdg" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.258209 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/13709215-5a7f-4c5d-aa52-749e06e40842-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-hcjxb\" (UID: \"13709215-5a7f-4c5d-aa52-749e06e40842\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcjxb" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.258272 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8149e5c6-d45e-408f-9e4e-4ead349e063d-config\") pod \"console-operator-58897d9998-b57tt\" (UID: \"8149e5c6-d45e-408f-9e4e-4ead349e063d\") " pod="openshift-console-operator/console-operator-58897d9998-b57tt" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.258296 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb858\" (UniqueName: \"kubernetes.io/projected/13709215-5a7f-4c5d-aa52-749e06e40842-kube-api-access-gb858\") pod \"cluster-image-registry-operator-dc59b4c8b-hcjxb\" (UID: \"13709215-5a7f-4c5d-aa52-749e06e40842\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcjxb" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.258324 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac52ea93-e5ed-47a3-86ad-cf8f8146ca3f-config\") pod \"kube-apiserver-operator-766d6c64bb-mqpbh\" (UID: \"ac52ea93-e5ed-47a3-86ad-cf8f8146ca3f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mqpbh" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.258646 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-2k4nh"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.259611 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q6lc9"]
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.259983 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.260217 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.260371 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q6lc9"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.262108 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rhprd"]
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.262803 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rhprd"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.263447 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-fhrs8"]
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.264723 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.267042 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.274903 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-fhrs8"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.276139 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.279171 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6ctl\" (UniqueName: \"kubernetes.io/projected/cdd5c076-53ec-47bd-9cc3-df75e06b4942-kube-api-access-t6ctl\") pod \"apiserver-76f77b778f-sn2tj\" (UID: \"cdd5c076-53ec-47bd-9cc3-df75e06b4942\") " pod="openshift-apiserver/apiserver-76f77b778f-sn2tj"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.279356 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-8v69v"]
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.280642 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-tz8g4"]
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.280860 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-8v69v"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.281755 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-xrz86"]
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.282216 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cl27x"]
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.282526 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.282579 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-5mpmr"]
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.282972 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-976q5"]
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.283261 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xrz86"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.283505 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zgm8x"]
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.283522 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mqpbh"]
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.283533 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-gvjgx"]
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.283722 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-976q5"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.284027 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-cl27x"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.284223 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-5mpmr"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.284465 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-2k4nh"]
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.286343 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zggq2"]
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.289024 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.290624 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7vwdg"]
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.290720 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n5fjj"]
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.290736 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x4pxc"]
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.295425 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-b57tt"]
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.299044 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-hmqwx"]
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.299849 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-sn2tj"]
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.299951 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-hmqwx"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.300349 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-bcdvv"]
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.302892 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-p9bnb"]
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.304917 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-225db"]
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.306279 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-q7qn6"]
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.306556 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.308797 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-jc8jx"]
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.312402 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-nz9cw"]
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.312464 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hmf6q"]
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.312476 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-lszzb"]
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.315394 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6zl8m"]
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.317353 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-t7bmk"]
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.322219 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcjxb"]
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.322276 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zpm9z"]
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.324306 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-rvfqv"]
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.326561 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.326812 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-6925t"]
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.328414 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-74trw"]
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.331232 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q6lc9"]
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.343724 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-jn2kf"]
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.345865 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-7kx4q"]
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.345933 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-jn2kf"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.347166 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zggq2"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.348232 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7kx4q"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.349798 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-976q5"]
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.359901 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac52ea93-e5ed-47a3-86ad-cf8f8146ca3f-config\") pod \"kube-apiserver-operator-766d6c64bb-mqpbh\" (UID: \"ac52ea93-e5ed-47a3-86ad-cf8f8146ca3f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mqpbh"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.359956 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2298316-1d7b-4a7a-9813-170541b0e9d3-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-74trw\" (UID: \"d2298316-1d7b-4a7a-9813-170541b0e9d3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-74trw"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.359995 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/4bcd60fb-e145-4182-9bf9-fff7920936a6-machine-approver-tls\") pod \"machine-approver-56656f9798-r9ps2\" (UID: \"4bcd60fb-e145-4182-9bf9-fff7920936a6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r9ps2"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.360023 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f43448f-6d99-4afb-8ba8-32cc10598f76-service-ca-bundle\") pod \"authentication-operator-69f744f599-6925t\" (UID: \"2f43448f-6d99-4afb-8ba8-32cc10598f76\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6925t"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.360044 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-q7qn6\" (UID: \"ebd2435f-03d5-4495-aec1-4118d79aec19\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.360067 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ff046ea4-caba-480a-8242-eb099a1f136e-config-volume\") pod \"collect-profiles-29482965-hgsf2\" (UID: \"ff046ea4-caba-480a-8242-eb099a1f136e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482965-hgsf2"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.360095 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8np75\" (UniqueName: \"kubernetes.io/projected/6b9f4d87-4d3b-47aa-a5cc-64167a6a0f30-kube-api-access-8np75\") pod \"machine-config-operator-74547568cd-d7dhc\" (UID: \"6b9f4d87-4d3b-47aa-a5cc-64167a6a0f30\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-d7dhc"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.360124 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f43448f-6d99-4afb-8ba8-32cc10598f76-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-6925t\" (UID: \"2f43448f-6d99-4afb-8ba8-32cc10598f76\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6925t"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.360149 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b2fc626-d06a-4f0c-ad8c-931c6019a06a-serving-cert\") pod \"openshift-config-operator-7777fb866f-nz9cw\" (UID: \"7b2fc626-d06a-4f0c-ad8c-931c6019a06a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-nz9cw"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.360175 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c002ad61-0d90-47ff-8bc5-58826a3189d4-config\") pod \"kube-controller-manager-operator-78b949d7b-zgm8x\" (UID: \"c002ad61-0d90-47ff-8bc5-58826a3189d4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zgm8x"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.360195 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ac52ea93-e5ed-47a3-86ad-cf8f8146ca3f-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-mqpbh\" (UID: \"ac52ea93-e5ed-47a3-86ad-cf8f8146ca3f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mqpbh"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.360216 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/79950cd5-1fde-4c05-8a15-8a1b2b745e28-srv-cert\") pod \"catalog-operator-68c6474976-6zl8m\" (UID: \"79950cd5-1fde-4c05-8a15-8a1b2b745e28\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6zl8m"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.360248 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcr4k\" (UniqueName: \"kubernetes.io/projected/4bcd60fb-e145-4182-9bf9-fff7920936a6-kube-api-access-fcr4k\") pod \"machine-approver-56656f9798-r9ps2\" (UID: \"4bcd60fb-e145-4182-9bf9-fff7920936a6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r9ps2"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.360272 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/79950cd5-1fde-4c05-8a15-8a1b2b745e28-profile-collector-cert\") pod \"catalog-operator-68c6474976-6zl8m\" (UID: \"79950cd5-1fde-4c05-8a15-8a1b2b745e28\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6zl8m"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.360313 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wswnc\" (UniqueName: \"kubernetes.io/projected/949c0965-b10c-4608-b2d0-effa8e19dff1-kube-api-access-wswnc\") pod \"marketplace-operator-79b997595-zpm9z\" (UID: \"949c0965-b10c-4608-b2d0-effa8e19dff1\") " pod="openshift-marketplace/marketplace-operator-79b997595-zpm9z"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.360341 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4h79\" (UniqueName: \"kubernetes.io/projected/c1c13fcc-9ed6-4129-afc2-4f9d53716929-kube-api-access-r4h79\") pod \"ingress-operator-5b745b69d9-624tj\" (UID: \"c1c13fcc-9ed6-4129-afc2-4f9d53716929\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-624tj"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.360366 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-przxs\" (UniqueName: \"kubernetes.io/projected/8149e5c6-d45e-408f-9e4e-4ead349e063d-kube-api-access-przxs\") pod \"console-operator-58897d9998-b57tt\" (UID: \"8149e5c6-d45e-408f-9e4e-4ead349e063d\") " pod="openshift-console-operator/console-operator-58897d9998-b57tt"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.391748 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.392655 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.411275 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.414927 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.415382 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c002ad61-0d90-47ff-8bc5-58826a3189d4-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-zgm8x\" (UID: \"c002ad61-0d90-47ff-8bc5-58826a3189d4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zgm8x"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.415541 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c1c13fcc-9ed6-4129-afc2-4f9d53716929-trusted-ca\") pod \"ingress-operator-5b745b69d9-624tj\" (UID: \"c1c13fcc-9ed6-4129-afc2-4f9d53716929\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-624tj"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.415750 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c1c13fcc-9ed6-4129-afc2-4f9d53716929-bound-sa-token\") pod \"ingress-operator-5b745b69d9-624tj\" (UID: \"c1c13fcc-9ed6-4129-afc2-4f9d53716929\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-624tj"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.415894 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/be4fc165-16c3-442f-b61d-bec9bbeb9b0f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-lszzb\" (UID: \"be4fc165-16c3-442f-b61d-bec9bbeb9b0f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-lszzb"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.416256 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4bcd60fb-e145-4182-9bf9-fff7920936a6-auth-proxy-config\") pod \"machine-approver-56656f9798-r9ps2\" (UID: \"4bcd60fb-e145-4182-9bf9-fff7920936a6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r9ps2"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.416658 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp8wg\" (UniqueName: \"kubernetes.io/projected/ebd2435f-03d5-4495-aec1-4118d79aec19-kube-api-access-zp8wg\") pod \"oauth-openshift-558db77b4-q7qn6\" (UID: \"ebd2435f-03d5-4495-aec1-4118d79aec19\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.416740 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c002ad61-0d90-47ff-8bc5-58826a3189d4-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-zgm8x\" (UID: \"c002ad61-0d90-47ff-8bc5-58826a3189d4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zgm8x"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.416781 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/118d8602-b5ce-4a7c-bf0c-17d74ce7ebda-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-p9bnb\" (UID: \"118d8602-b5ce-4a7c-bf0c-17d74ce7ebda\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-p9bnb"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.417487 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2298316-1d7b-4a7a-9813-170541b0e9d3-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-74trw\" (UID: \"d2298316-1d7b-4a7a-9813-170541b0e9d3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-74trw"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.418130 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c002ad61-0d90-47ff-8bc5-58826a3189d4-config\") pod \"kube-controller-manager-operator-78b949d7b-zgm8x\" (UID: \"c002ad61-0d90-47ff-8bc5-58826a3189d4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zgm8x"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.418518 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f43448f-6d99-4afb-8ba8-32cc10598f76-service-ca-bundle\") pod \"authentication-operator-69f744f599-6925t\" (UID: \"2f43448f-6d99-4afb-8ba8-32cc10598f76\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6925t"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.418562 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ff046ea4-caba-480a-8242-eb099a1f136e-config-volume\") pod \"collect-profiles-29482965-hgsf2\" (UID: \"ff046ea4-caba-480a-8242-eb099a1f136e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482965-hgsf2"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.418522 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac52ea93-e5ed-47a3-86ad-cf8f8146ca3f-config\") pod \"kube-apiserver-operator-766d6c64bb-mqpbh\" (UID: \"ac52ea93-e5ed-47a3-86ad-cf8f8146ca3f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mqpbh"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.420551 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6b9f4d87-4d3b-47aa-a5cc-64167a6a0f30-auth-proxy-config\") pod \"machine-config-operator-74547568cd-d7dhc\" (UID: \"6b9f4d87-4d3b-47aa-a5cc-64167a6a0f30\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-d7dhc"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.420853 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bcd60fb-e145-4182-9bf9-fff7920936a6-config\") pod \"machine-approver-56656f9798-r9ps2\" (UID: \"4bcd60fb-e145-4182-9bf9-fff7920936a6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r9ps2"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.420944 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b2fc626-d06a-4f0c-ad8c-931c6019a06a-serving-cert\") pod \"openshift-config-operator-7777fb866f-nz9cw\" (UID: \"7b2fc626-d06a-4f0c-ad8c-931c6019a06a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-nz9cw"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.421082 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab99a27-e16e-4f7b-a745-f478dd109a5c-serving-cert\") pod \"etcd-operator-b45778765-225db\" (UID: \"3ab99a27-e16e-4f7b-a745-f478dd109a5c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-225db"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.421219 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvtd4\" (UniqueName: \"kubernetes.io/projected/04581422-2f1a-4d3c-9e82-8f80435f6ece-kube-api-access-nvtd4\") pod \"openshift-apiserver-operator-796bbdcf4f-x4pxc\" (UID: \"04581422-2f1a-4d3c-9e82-8f80435f6ece\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x4pxc"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.421603 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cl27x"]
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.421762 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-q7qn6\" (UID: \"ebd2435f-03d5-4495-aec1-4118d79aec19\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.421959 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3ab99a27-e16e-4f7b-a745-f478dd109a5c-etcd-service-ca\") pod \"etcd-operator-b45778765-225db\" (UID: \"3ab99a27-e16e-4f7b-a745-f478dd109a5c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-225db"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.422146 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/13709215-5a7f-4c5d-aa52-749e06e40842-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-hcjxb\" (UID: \"13709215-5a7f-4c5d-aa52-749e06e40842\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcjxb"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.422360 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/babcfbd6-7579-4d7a-9bbb-38b759d8b273-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-t7bmk\" (UID: \"babcfbd6-7579-4d7a-9bbb-38b759d8b273\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-t7bmk"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.422424 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2qs6\" (UniqueName: \"kubernetes.io/projected/5c435717-9f91-427d-ae9c-60db11c38d34-kube-api-access-x2qs6\") pod \"downloads-7954f5f757-rvfqv\" (UID: \"5c435717-9f91-427d-ae9c-60db11c38d34\") " pod="openshift-console/downloads-7954f5f757-rvfqv"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.422466 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-q7qn6\" (UID: \"ebd2435f-03d5-4495-aec1-4118d79aec19\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.422571 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch7cj\" (UniqueName: \"kubernetes.io/projected/7b2fc626-d06a-4f0c-ad8c-931c6019a06a-kube-api-access-ch7cj\") pod \"openshift-config-operator-7777fb866f-nz9cw\" (UID: \"7b2fc626-d06a-4f0c-ad8c-931c6019a06a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-nz9cw"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.422698 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f43448f-6d99-4afb-8ba8-32cc10598f76-serving-cert\") pod \"authentication-operator-69f744f599-6925t\" (UID: \"2f43448f-6d99-4afb-8ba8-32cc10598f76\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6925t"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.422776 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ae7cbcf6-fff8-49ac-9b9c-25cb136cbdd7-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-n5fjj\" (UID: \"ae7cbcf6-fff8-49ac-9b9c-25cb136cbdd7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n5fjj"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.422990 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxz7j\" (UniqueName: \"kubernetes.io/projected/79950cd5-1fde-4c05-8a15-8a1b2b745e28-kube-api-access-lxz7j\") pod \"catalog-operator-68c6474976-6zl8m\" (UID: \"79950cd5-1fde-4c05-8a15-8a1b2b745e28\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6zl8m"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.422993 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3ab99a27-e16e-4f7b-a745-f478dd109a5c-etcd-service-ca\") pod \"etcd-operator-b45778765-225db\" (UID: \"3ab99a27-e16e-4f7b-a745-f478dd109a5c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-225db"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.423064 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ccb5181f-bb5a-4a54-8ab1-9201addd4861-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-7vwdg\" (UID: \"ccb5181f-bb5a-4a54-8ab1-9201addd4861\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7vwdg"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.423208 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2dxv\" (UniqueName: \"kubernetes.io/projected/ccb5181f-bb5a-4a54-8ab1-9201addd4861-kube-api-access-k2dxv\") pod \"kube-storage-version-migrator-operator-b67b599dd-7vwdg\" (UID: \"ccb5181f-bb5a-4a54-8ab1-9201addd4861\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7vwdg"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.423391 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/949c0965-b10c-4608-b2d0-effa8e19dff1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zpm9z\" (UID: \"949c0965-b10c-4608-b2d0-effa8e19dff1\") " pod="openshift-marketplace/marketplace-operator-79b997595-zpm9z"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.423442 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ebd2435f-03d5-4495-aec1-4118d79aec19-audit-dir\") pod \"oauth-openshift-558db77b4-q7qn6\" (UID: \"ebd2435f-03d5-4495-aec1-4118d79aec19\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.423502 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/949c0965-b10c-4608-b2d0-effa8e19dff1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zpm9z\" (UID: \"949c0965-b10c-4608-b2d0-effa8e19dff1\") " pod="openshift-marketplace/marketplace-operator-79b997595-zpm9z"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.423637 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8149e5c6-d45e-408f-9e4e-4ead349e063d-config\") pod \"console-operator-58897d9998-b57tt\" (UID: \"8149e5c6-d45e-408f-9e4e-4ead349e063d\") " pod="openshift-console-operator/console-operator-58897d9998-b57tt"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.423780 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gb858\" (UniqueName: \"kubernetes.io/projected/13709215-5a7f-4c5d-aa52-749e06e40842-kube-api-access-gb858\") pod \"cluster-image-registry-operator-dc59b4c8b-hcjxb\" (UID: \"13709215-5a7f-4c5d-aa52-749e06e40842\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcjxb"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.424455 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c002ad61-0d90-47ff-8bc5-58826a3189d4-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-zgm8x\" (UID: \"c002ad61-0d90-47ff-8bc5-58826a3189d4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zgm8x"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.424551 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ql6br\" (UniqueName: \"kubernetes.io/projected/3ab99a27-e16e-4f7b-a745-f478dd109a5c-kube-api-access-ql6br\") pod \"etcd-operator-b45778765-225db\" (UID: \"3ab99a27-e16e-4f7b-a745-f478dd109a5c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-225db"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.424605 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-q7qn6\" (UID: \"ebd2435f-03d5-4495-aec1-4118d79aec19\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.424702 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f43448f-6d99-4afb-8ba8-32cc10598f76-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-6925t\" (UID: \"2f43448f-6d99-4afb-8ba8-32cc10598f76\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6925t"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.424711 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac52ea93-e5ed-47a3-86ad-cf8f8146ca3f-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-mqpbh\" (UID: \"ac52ea93-e5ed-47a3-86ad-cf8f8146ca3f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mqpbh"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.424782 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67a6e98a-88c0-4855-936c-09b7c6d33b40-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hmf6q\" (UID: \"67a6e98a-88c0-4855-936c-09b7c6d33b40\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hmf6q"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.424805 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ff046ea4-caba-480a-8242-eb099a1f136e-secret-volume\") pod \"collect-profiles-29482965-hgsf2\" (UID: \"ff046ea4-caba-480a-8242-eb099a1f136e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482965-hgsf2"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.424834 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlkpw\" (UniqueName: \"kubernetes.io/projected/49096e31-1633-4328-b7e2-a4e1d4391a5b-kube-api-access-tlkpw\") pod \"machine-config-controller-84d6567774-bcdvv\" (UID: \"49096e31-1633-4328-b7e2-a4e1d4391a5b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bcdvv"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.424858 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-q7qn6\" (UID: \"ebd2435f-03d5-4495-aec1-4118d79aec19\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.424883 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/3ab99a27-e16e-4f7b-a745-f478dd109a5c-etcd-ca\") pod \"etcd-operator-b45778765-225db\" (UID: \"3ab99a27-e16e-4f7b-a745-f478dd109a5c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-225db"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.424919 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/7b2fc626-d06a-4f0c-ad8c-931c6019a06a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-nz9cw\" (UID: \"7b2fc626-d06a-4f0c-ad8c-931c6019a06a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-nz9cw"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.424924 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-8v69v"]
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.424972 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482965-hgsf2"]
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.425003 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/13709215-5a7f-4c5d-aa52-749e06e40842-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-hcjxb\" (UID: \"13709215-5a7f-4c5d-aa52-749e06e40842\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcjxb"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.425027 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts2z7\" (UniqueName: \"kubernetes.io/projected/118d8602-b5ce-4a7c-bf0c-17d74ce7ebda-kube-api-access-ts2z7\") pod \"control-plane-machine-set-operator-78cbb6b69f-p9bnb\" (UID: \"118d8602-b5ce-4a7c-bf0c-17d74ce7ebda\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-p9bnb"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.425045 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/49096e31-1633-4328-b7e2-a4e1d4391a5b-proxy-tls\") pod \"machine-config-controller-84d6567774-bcdvv\" (UID: \"49096e31-1633-4328-b7e2-a4e1d4391a5b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bcdvv"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.425067 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67a6e98a-88c0-4855-936c-09b7c6d33b40-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hmf6q\" (UID: \"67a6e98a-88c0-4855-936c-09b7c6d33b40\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hmf6q"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.425086 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qldvk\" (UniqueName: \"kubernetes.io/projected/ff046ea4-caba-480a-8242-eb099a1f136e-kube-api-access-qldvk\") pod \"collect-profiles-29482965-hgsf2\" (UID: \"ff046ea4-caba-480a-8242-eb099a1f136e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482965-hgsf2"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.425108 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04581422-2f1a-4d3c-9e82-8f80435f6ece-config\") pod \"openshift-apiserver-operator-796bbdcf4f-x4pxc\" (UID: \"04581422-2f1a-4d3c-9e82-8f80435f6ece\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x4pxc"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.425129 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccb5181f-bb5a-4a54-8ab1-9201addd4861-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-7vwdg\" (UID: \"ccb5181f-bb5a-4a54-8ab1-9201addd4861\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7vwdg"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.425175 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-q7qn6\" (UID: \"ebd2435f-03d5-4495-aec1-4118d79aec19\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.425202 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04581422-2f1a-4d3c-9e82-8f80435f6ece-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-x4pxc\" (UID: \"04581422-2f1a-4d3c-9e82-8f80435f6ece\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x4pxc"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.425223 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s54sd\" (UniqueName: \"kubernetes.io/projected/be4fc165-16c3-442f-b61d-bec9bbeb9b0f-kube-api-access-s54sd\") pod \"multus-admission-controller-857f4d67dd-lszzb\" (UID: \"be4fc165-16c3-442f-b61d-bec9bbeb9b0f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-lszzb"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.425242 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/67a6e98a-88c0-4855-936c-09b7c6d33b40-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hmf6q\" (UID: \"67a6e98a-88c0-4855-936c-09b7c6d33b40\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hmf6q"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.425282 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ab99a27-e16e-4f7b-a745-f478dd109a5c-config\") pod \"etcd-operator-b45778765-225db\" (UID: \"3ab99a27-e16e-4f7b-a745-f478dd109a5c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-225db"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.425304 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3ab99a27-e16e-4f7b-a745-f478dd109a5c-etcd-client\") pod \"etcd-operator-b45778765-225db\" (UID: \"3ab99a27-e16e-4f7b-a745-f478dd109a5c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-225db"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.425327 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8149e5c6-d45e-408f-9e4e-4ead349e063d-trusted-ca\") pod \"console-operator-58897d9998-b57tt\" (UID: \"8149e5c6-d45e-408f-9e4e-4ead349e063d\") " pod="openshift-console-operator/console-operator-58897d9998-b57tt"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.425344 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnmwk\" (UniqueName: \"kubernetes.io/projected/ae7cbcf6-fff8-49ac-9b9c-25cb136cbdd7-kube-api-access-dnmwk\") pod \"cluster-samples-operator-665b6dd947-n5fjj\" (UID: \"ae7cbcf6-fff8-49ac-9b9c-25cb136cbdd7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n5fjj"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.425364 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk5sx\" (UniqueName: \"kubernetes.io/projected/babcfbd6-7579-4d7a-9bbb-38b759d8b273-kube-api-access-hk5sx\") pod \"package-server-manager-789f6589d5-t7bmk\" (UID: \"babcfbd6-7579-4d7a-9bbb-38b759d8b273\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-t7bmk"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.425382 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6b9f4d87-4d3b-47aa-a5cc-64167a6a0f30-proxy-tls\") pod \"machine-config-operator-74547568cd-d7dhc\" (UID: \"6b9f4d87-4d3b-47aa-a5cc-64167a6a0f30\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-d7dhc"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.425403 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8149e5c6-d45e-408f-9e4e-4ead349e063d-serving-cert\") pod \"console-operator-58897d9998-b57tt\" (UID: \"8149e5c6-d45e-408f-9e4e-4ead349e063d\") " pod="openshift-console-operator/console-operator-58897d9998-b57tt"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.425419 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-q7qn6\" (UID: \"ebd2435f-03d5-4495-aec1-4118d79aec19\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.425441 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c1c13fcc-9ed6-4129-afc2-4f9d53716929-metrics-tls\") pod \"ingress-operator-5b745b69d9-624tj\" (UID: \"c1c13fcc-9ed6-4129-afc2-4f9d53716929\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-624tj"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.425460 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2646\" (UniqueName: \"kubernetes.io/projected/d2298316-1d7b-4a7a-9813-170541b0e9d3-kube-api-access-f2646\") pod \"openshift-controller-manager-operator-756b6f6bc6-74trw\" (UID: \"d2298316-1d7b-4a7a-9813-170541b0e9d3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-74trw"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.425476 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-q7qn6\" (UID: \"ebd2435f-03d5-4495-aec1-4118d79aec19\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.425501 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f43448f-6d99-4afb-8ba8-32cc10598f76-config\") pod \"authentication-operator-69f744f599-6925t\" (UID: \"2f43448f-6d99-4afb-8ba8-32cc10598f76\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6925t"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.425519 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67t2n\" (UniqueName: \"kubernetes.io/projected/2f43448f-6d99-4afb-8ba8-32cc10598f76-kube-api-access-67t2n\") pod \"authentication-operator-69f744f599-6925t\" (UID: \"2f43448f-6d99-4afb-8ba8-32cc10598f76\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6925t"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.425540 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/13709215-5a7f-4c5d-aa52-749e06e40842-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-hcjxb\" (UID: \"13709215-5a7f-4c5d-aa52-749e06e40842\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcjxb"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.425559 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-q7qn6\" (UID: \"ebd2435f-03d5-4495-aec1-4118d79aec19\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.425578 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-q7qn6\" (UID: \"ebd2435f-03d5-4495-aec1-4118d79aec19\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.425601 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-q7qn6\" (UID: \"ebd2435f-03d5-4495-aec1-4118d79aec19\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.425621 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6b9f4d87-4d3b-47aa-a5cc-64167a6a0f30-images\") pod \"machine-config-operator-74547568cd-d7dhc\" (UID: \"6b9f4d87-4d3b-47aa-a5cc-64167a6a0f30\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-d7dhc"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.425652 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67a6e98a-88c0-4855-936c-09b7c6d33b40-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hmf6q\" (UID: \"67a6e98a-88c0-4855-936c-09b7c6d33b40\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hmf6q"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.425664 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ebd2435f-03d5-4495-aec1-4118d79aec19-audit-policies\") pod \"oauth-openshift-558db77b4-q7qn6\" (UID: \"ebd2435f-03d5-4495-aec1-4118d79aec19\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.425824 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/49096e31-1633-4328-b7e2-a4e1d4391a5b-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-bcdvv\" (UID: \"49096e31-1633-4328-b7e2-a4e1d4391a5b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bcdvv"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.426357 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-624tj"]
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.426463 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab99a27-e16e-4f7b-a745-f478dd109a5c-serving-cert\") pod \"etcd-operator-b45778765-225db\" (UID: \"3ab99a27-e16e-4f7b-a745-f478dd109a5c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-225db"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.427235 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ae7cbcf6-fff8-49ac-9b9c-25cb136cbdd7-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-n5fjj\" (UID: \"ae7cbcf6-fff8-49ac-9b9c-25cb136cbdd7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n5fjj"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.425064 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8149e5c6-d45e-408f-9e4e-4ead349e063d-config\") pod \"console-operator-58897d9998-b57tt\" (UID: \"8149e5c6-d45e-408f-9e4e-4ead349e063d\") " pod="openshift-console-operator/console-operator-58897d9998-b57tt"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.427415 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/13709215-5a7f-4c5d-aa52-749e06e40842-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-hcjxb\" (UID: \"13709215-5a7f-4c5d-aa52-749e06e40842\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcjxb"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.427602 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac52ea93-e5ed-47a3-86ad-cf8f8146ca3f-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-mqpbh\" (UID: \"ac52ea93-e5ed-47a3-86ad-cf8f8146ca3f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mqpbh"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.427770 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f43448f-6d99-4afb-8ba8-32cc10598f76-serving-cert\") pod \"authentication-operator-69f744f599-6925t\" (UID: \"2f43448f-6d99-4afb-8ba8-32cc10598f76\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6925t"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.427818 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/3ab99a27-e16e-4f7b-a745-f478dd109a5c-etcd-ca\") pod \"etcd-operator-b45778765-225db\" (UID: \"3ab99a27-e16e-4f7b-a745-f478dd109a5c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-225db"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.428175 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ff046ea4-caba-480a-8242-eb099a1f136e-secret-volume\") pod \"collect-profiles-29482965-hgsf2\" (UID: \"ff046ea4-caba-480a-8242-eb099a1f136e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482965-hgsf2"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.428258 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04581422-2f1a-4d3c-9e82-8f80435f6ece-config\") pod \"openshift-apiserver-operator-796bbdcf4f-x4pxc\" (UID: \"04581422-2f1a-4d3c-9e82-8f80435f6ece\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x4pxc"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.428503 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-d7dhc"]
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.428598 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67a6e98a-88c0-4855-936c-09b7c6d33b40-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hmf6q\" (UID: \"67a6e98a-88c0-4855-936c-09b7c6d33b40\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hmf6q"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.428832 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccb5181f-bb5a-4a54-8ab1-9201addd4861-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-7vwdg\" (UID: \"ccb5181f-bb5a-4a54-8ab1-9201addd4861\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7vwdg"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.428997 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f43448f-6d99-4afb-8ba8-32cc10598f76-config\") pod \"authentication-operator-69f744f599-6925t\" (UID: \"2f43448f-6d99-4afb-8ba8-32cc10598f76\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6925t"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.429302 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-xrz86"]
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.429471 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ab99a27-e16e-4f7b-a745-f478dd109a5c-config\") pod \"etcd-operator-b45778765-225db\" (UID: \"3ab99a27-e16e-4f7b-a745-f478dd109a5c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-225db"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.430163 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8149e5c6-d45e-408f-9e4e-4ead349e063d-serving-cert\") pod \"console-operator-58897d9998-b57tt\" (UID: \"8149e5c6-d45e-408f-9e4e-4ead349e063d\") " pod="openshift-console-operator/console-operator-58897d9998-b57tt"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.430183 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04581422-2f1a-4d3c-9e82-8f80435f6ece-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-x4pxc\" (UID: \"04581422-2f1a-4d3c-9e82-8f80435f6ece\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x4pxc"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.430994 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/7b2fc626-d06a-4f0c-ad8c-931c6019a06a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-nz9cw\" (UID: \"7b2fc626-d06a-4f0c-ad8c-931c6019a06a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-nz9cw"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.431080 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-5mpmr"]
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.431252 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8149e5c6-d45e-408f-9e4e-4ead349e063d-trusted-ca\") pod \"console-operator-58897d9998-b57tt\" (UID: \"8149e5c6-d45e-408f-9e4e-4ead349e063d\") " pod="openshift-console-operator/console-operator-58897d9998-b57tt"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.431419 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/13709215-5a7f-4c5d-aa52-749e06e40842-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-hcjxb\" (UID: \"13709215-5a7f-4c5d-aa52-749e06e40842\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcjxb"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.431450 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/be4fc165-16c3-442f-b61d-bec9bbeb9b0f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-lszzb\" (UID: \"be4fc165-16c3-442f-b61d-bec9bbeb9b0f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-lszzb"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.432267 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3ab99a27-e16e-4f7b-a745-f478dd109a5c-etcd-client\") pod \"etcd-operator-b45778765-225db\" (UID: \"3ab99a27-e16e-4f7b-a745-f478dd109a5c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-225db"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.432750 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.434804 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ccb5181f-bb5a-4a54-8ab1-9201addd4861-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-7vwdg\" (UID: \"ccb5181f-bb5a-4a54-8ab1-9201addd4861\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7vwdg"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.435018 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-7kx4q"]
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.436423 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-tz8g4"]
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.438776 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rhprd"]
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.440280 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-46khb"]
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.441303 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-46khb"]
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.441394 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-46khb"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.442222 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-hmqwx"]
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.453487 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.466179 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.486304 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.507884 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.526322 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8tzw\" (UniqueName: \"kubernetes.io/projected/08150f3e-0cfa-4c7d-b9af-0e2d288a7737-kube-api-access-t8tzw\") pod \"dns-operator-744455d44c-8v69v\" (UID: \"08150f3e-0cfa-4c7d-b9af-0e2d288a7737\") " pod="openshift-dns-operator/dns-operator-744455d44c-8v69v"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.526365 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlkpw\" (UniqueName: \"kubernetes.io/projected/49096e31-1633-4328-b7e2-a4e1d4391a5b-kube-api-access-tlkpw\") pod \"machine-config-controller-84d6567774-bcdvv\" (UID: \"49096e31-1633-4328-b7e2-a4e1d4391a5b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bcdvv"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.526394 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-q7qn6\" (UID: \"ebd2435f-03d5-4495-aec1-4118d79aec19\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.526416 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2d466c6f-7f88-4a34-8e57-73b83db3e871-tmpfs\") pod \"packageserver-d55dfcdfc-q6lc9\" (UID: \"2d466c6f-7f88-4a34-8e57-73b83db3e871\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q6lc9"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.526433 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2d466c6f-7f88-4a34-8e57-73b83db3e871-apiservice-cert\") pod \"packageserver-d55dfcdfc-q6lc9\" (UID: \"2d466c6f-7f88-4a34-8e57-73b83db3e871\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q6lc9"
Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.526464 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ts2z7\" (UniqueName: \"kubernetes.io/projected/118d8602-b5ce-4a7c-bf0c-17d74ce7ebda-kube-api-access-ts2z7\") pod \"control-plane-machine-set-operator-78cbb6b69f-p9bnb\" (UID:
\"118d8602-b5ce-4a7c-bf0c-17d74ce7ebda\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-p9bnb" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.526491 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/49096e31-1633-4328-b7e2-a4e1d4391a5b-proxy-tls\") pod \"machine-config-controller-84d6567774-bcdvv\" (UID: \"49096e31-1633-4328-b7e2-a4e1d4391a5b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bcdvv" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.526510 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfgpz\" (UniqueName: \"kubernetes.io/projected/c58a6a64-ed06-4f09-b2a6-a70569a308d7-kube-api-access-hfgpz\") pod \"service-ca-9c57cc56f-5mpmr\" (UID: \"c58a6a64-ed06-4f09-b2a6-a70569a308d7\") " pod="openshift-service-ca/service-ca-9c57cc56f-5mpmr" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.526538 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c58a6a64-ed06-4f09-b2a6-a70569a308d7-signing-key\") pod \"service-ca-9c57cc56f-5mpmr\" (UID: \"c58a6a64-ed06-4f09-b2a6-a70569a308d7\") " pod="openshift-service-ca/service-ca-9c57cc56f-5mpmr" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.526561 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-q7qn6\" (UID: \"ebd2435f-03d5-4495-aec1-4118d79aec19\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.526691 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hk5sx\" (UniqueName: \"kubernetes.io/projected/babcfbd6-7579-4d7a-9bbb-38b759d8b273-kube-api-access-hk5sx\") pod \"package-server-manager-789f6589d5-t7bmk\" (UID: \"babcfbd6-7579-4d7a-9bbb-38b759d8b273\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-t7bmk" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.526729 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6b9f4d87-4d3b-47aa-a5cc-64167a6a0f30-proxy-tls\") pod \"machine-config-operator-74547568cd-d7dhc\" (UID: \"6b9f4d87-4d3b-47aa-a5cc-64167a6a0f30\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-d7dhc" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.526762 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-q7qn6\" (UID: \"ebd2435f-03d5-4495-aec1-4118d79aec19\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.526787 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c1c13fcc-9ed6-4129-afc2-4f9d53716929-metrics-tls\") pod \"ingress-operator-5b745b69d9-624tj\" (UID: \"c1c13fcc-9ed6-4129-afc2-4f9d53716929\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-624tj" Jan 21 
06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.526833 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2646\" (UniqueName: \"kubernetes.io/projected/d2298316-1d7b-4a7a-9813-170541b0e9d3-kube-api-access-f2646\") pod \"openshift-controller-manager-operator-756b6f6bc6-74trw\" (UID: \"d2298316-1d7b-4a7a-9813-170541b0e9d3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-74trw" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.526874 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-q7qn6\" (UID: \"ebd2435f-03d5-4495-aec1-4118d79aec19\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.526914 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-q7qn6\" (UID: \"ebd2435f-03d5-4495-aec1-4118d79aec19\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.526955 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-q7qn6\" (UID: \"ebd2435f-03d5-4495-aec1-4118d79aec19\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.526995 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-q7qn6\" (UID: \"ebd2435f-03d5-4495-aec1-4118d79aec19\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.527033 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6b9f4d87-4d3b-47aa-a5cc-64167a6a0f30-images\") pod \"machine-config-operator-74547568cd-d7dhc\" (UID: \"6b9f4d87-4d3b-47aa-a5cc-64167a6a0f30\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-d7dhc" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.527072 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ebd2435f-03d5-4495-aec1-4118d79aec19-audit-policies\") pod \"oauth-openshift-558db77b4-q7qn6\" (UID: \"ebd2435f-03d5-4495-aec1-4118d79aec19\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.527123 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/49096e31-1633-4328-b7e2-a4e1d4391a5b-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-bcdvv\" (UID: \"49096e31-1633-4328-b7e2-a4e1d4391a5b\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bcdvv" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.527150 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2298316-1d7b-4a7a-9813-170541b0e9d3-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-74trw\" (UID: \"d2298316-1d7b-4a7a-9813-170541b0e9d3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-74trw" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.527171 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/4bcd60fb-e145-4182-9bf9-fff7920936a6-machine-approver-tls\") pod \"machine-approver-56656f9798-r9ps2\" (UID: \"4bcd60fb-e145-4182-9bf9-fff7920936a6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r9ps2" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.527199 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-q7qn6\" (UID: \"ebd2435f-03d5-4495-aec1-4118d79aec19\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.527221 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8np75\" (UniqueName: \"kubernetes.io/projected/6b9f4d87-4d3b-47aa-a5cc-64167a6a0f30-kube-api-access-8np75\") pod \"machine-config-operator-74547568cd-d7dhc\" (UID: \"6b9f4d87-4d3b-47aa-a5cc-64167a6a0f30\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-d7dhc" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.527249 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npwdr\" (UniqueName: \"kubernetes.io/projected/d0748253-379d-4453-84cc-e9e8a9298217-kube-api-access-npwdr\") pod \"ingress-canary-7kx4q\" (UID: \"d0748253-379d-4453-84cc-e9e8a9298217\") " pod="openshift-ingress-canary/ingress-canary-7kx4q" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.527284 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/79950cd5-1fde-4c05-8a15-8a1b2b745e28-srv-cert\") pod \"catalog-operator-68c6474976-6zl8m\" (UID: \"79950cd5-1fde-4c05-8a15-8a1b2b745e28\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6zl8m" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.527309 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcr4k\" (UniqueName: \"kubernetes.io/projected/4bcd60fb-e145-4182-9bf9-fff7920936a6-kube-api-access-fcr4k\") pod \"machine-approver-56656f9798-r9ps2\" (UID: \"4bcd60fb-e145-4182-9bf9-fff7920936a6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r9ps2" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.527331 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/79950cd5-1fde-4c05-8a15-8a1b2b745e28-profile-collector-cert\") pod \"catalog-operator-68c6474976-6zl8m\" (UID: \"79950cd5-1fde-4c05-8a15-8a1b2b745e28\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6zl8m" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.527354 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wswnc\" (UniqueName: \"kubernetes.io/projected/949c0965-b10c-4608-b2d0-effa8e19dff1-kube-api-access-wswnc\") pod \"marketplace-operator-79b997595-zpm9z\" (UID: \"949c0965-b10c-4608-b2d0-effa8e19dff1\") " pod="openshift-marketplace/marketplace-operator-79b997595-zpm9z" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.527377 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4h79\" (UniqueName: \"kubernetes.io/projected/c1c13fcc-9ed6-4129-afc2-4f9d53716929-kube-api-access-r4h79\") pod \"ingress-operator-5b745b69d9-624tj\" (UID: \"c1c13fcc-9ed6-4129-afc2-4f9d53716929\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-624tj" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.527417 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c1c13fcc-9ed6-4129-afc2-4f9d53716929-trusted-ca\") pod \"ingress-operator-5b745b69d9-624tj\" (UID: \"c1c13fcc-9ed6-4129-afc2-4f9d53716929\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-624tj" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.527439 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c1c13fcc-9ed6-4129-afc2-4f9d53716929-bound-sa-token\") pod \"ingress-operator-5b745b69d9-624tj\" (UID: \"c1c13fcc-9ed6-4129-afc2-4f9d53716929\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-624tj" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.527463 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4bcd60fb-e145-4182-9bf9-fff7920936a6-auth-proxy-config\") pod \"machine-approver-56656f9798-r9ps2\" (UID: \"4bcd60fb-e145-4182-9bf9-fff7920936a6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r9ps2" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.527496 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/118d8602-b5ce-4a7c-bf0c-17d74ce7ebda-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-p9bnb\" (UID: \"118d8602-b5ce-4a7c-bf0c-17d74ce7ebda\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-p9bnb" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.527522 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zp8wg\" (UniqueName: \"kubernetes.io/projected/ebd2435f-03d5-4495-aec1-4118d79aec19-kube-api-access-zp8wg\") pod \"oauth-openshift-558db77b4-q7qn6\" (UID: \"ebd2435f-03d5-4495-aec1-4118d79aec19\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.527552 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2298316-1d7b-4a7a-9813-170541b0e9d3-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-74trw\" (UID: \"d2298316-1d7b-4a7a-9813-170541b0e9d3\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-74trw" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.527573 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6b9f4d87-4d3b-47aa-a5cc-64167a6a0f30-auth-proxy-config\") pod \"machine-config-operator-74547568cd-d7dhc\" (UID: \"6b9f4d87-4d3b-47aa-a5cc-64167a6a0f30\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-d7dhc" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.527595 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bcd60fb-e145-4182-9bf9-fff7920936a6-config\") pod \"machine-approver-56656f9798-r9ps2\" (UID: \"4bcd60fb-e145-4182-9bf9-fff7920936a6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r9ps2" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.527627 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-q7qn6\" (UID: \"ebd2435f-03d5-4495-aec1-4118d79aec19\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.527654 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/babcfbd6-7579-4d7a-9bbb-38b759d8b273-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-t7bmk\" (UID: \"babcfbd6-7579-4d7a-9bbb-38b759d8b273\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-t7bmk" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.527695 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2d466c6f-7f88-4a34-8e57-73b83db3e871-webhook-cert\") pod \"packageserver-d55dfcdfc-q6lc9\" (UID: \"2d466c6f-7f88-4a34-8e57-73b83db3e871\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q6lc9" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.527727 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-q7qn6\" (UID: \"ebd2435f-03d5-4495-aec1-4118d79aec19\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.527760 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbjsw\" (UniqueName: \"kubernetes.io/projected/2d466c6f-7f88-4a34-8e57-73b83db3e871-kube-api-access-wbjsw\") pod \"packageserver-d55dfcdfc-q6lc9\" (UID: \"2d466c6f-7f88-4a34-8e57-73b83db3e871\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q6lc9" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.527781 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d0748253-379d-4453-84cc-e9e8a9298217-cert\") pod \"ingress-canary-7kx4q\" (UID: \"d0748253-379d-4453-84cc-e9e8a9298217\") " 
pod="openshift-ingress-canary/ingress-canary-7kx4q" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.527806 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxz7j\" (UniqueName: \"kubernetes.io/projected/79950cd5-1fde-4c05-8a15-8a1b2b745e28-kube-api-access-lxz7j\") pod \"catalog-operator-68c6474976-6zl8m\" (UID: \"79950cd5-1fde-4c05-8a15-8a1b2b745e28\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6zl8m" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.527838 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/949c0965-b10c-4608-b2d0-effa8e19dff1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zpm9z\" (UID: \"949c0965-b10c-4608-b2d0-effa8e19dff1\") " pod="openshift-marketplace/marketplace-operator-79b997595-zpm9z" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.527860 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/949c0965-b10c-4608-b2d0-effa8e19dff1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zpm9z\" (UID: \"949c0965-b10c-4608-b2d0-effa8e19dff1\") " pod="openshift-marketplace/marketplace-operator-79b997595-zpm9z" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.527879 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ebd2435f-03d5-4495-aec1-4118d79aec19-audit-dir\") pod \"oauth-openshift-558db77b4-q7qn6\" (UID: \"ebd2435f-03d5-4495-aec1-4118d79aec19\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.527918 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/08150f3e-0cfa-4c7d-b9af-0e2d288a7737-metrics-tls\") pod \"dns-operator-744455d44c-8v69v\" (UID: \"08150f3e-0cfa-4c7d-b9af-0e2d288a7737\") " pod="openshift-dns-operator/dns-operator-744455d44c-8v69v" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.527941 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c58a6a64-ed06-4f09-b2a6-a70569a308d7-signing-cabundle\") pod \"service-ca-9c57cc56f-5mpmr\" (UID: \"c58a6a64-ed06-4f09-b2a6-a70569a308d7\") " pod="openshift-service-ca/service-ca-9c57cc56f-5mpmr" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.527982 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-q7qn6\" (UID: \"ebd2435f-03d5-4495-aec1-4118d79aec19\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.529572 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.530026 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-system-service-ca\") pod 
\"oauth-openshift-558db77b4-q7qn6\" (UID: \"ebd2435f-03d5-4495-aec1-4118d79aec19\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.533996 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/49096e31-1633-4328-b7e2-a4e1d4391a5b-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-bcdvv\" (UID: \"49096e31-1633-4328-b7e2-a4e1d4391a5b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bcdvv" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.548789 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-q7qn6\" (UID: \"ebd2435f-03d5-4495-aec1-4118d79aec19\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.549018 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-q7qn6\" (UID: \"ebd2435f-03d5-4495-aec1-4118d79aec19\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.549746 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/49096e31-1633-4328-b7e2-a4e1d4391a5b-proxy-tls\") pod \"machine-config-controller-84d6567774-bcdvv\" (UID: \"49096e31-1633-4328-b7e2-a4e1d4391a5b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bcdvv" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.552563 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-q7qn6\" (UID: \"ebd2435f-03d5-4495-aec1-4118d79aec19\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.553832 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ebd2435f-03d5-4495-aec1-4118d79aec19-audit-dir\") pod \"oauth-openshift-558db77b4-q7qn6\" (UID: \"ebd2435f-03d5-4495-aec1-4118d79aec19\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.555408 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6b9f4d87-4d3b-47aa-a5cc-64167a6a0f30-auth-proxy-config\") pod \"machine-config-operator-74547568cd-d7dhc\" (UID: \"6b9f4d87-4d3b-47aa-a5cc-64167a6a0f30\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-d7dhc" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.557603 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ebd2435f-03d5-4495-aec1-4118d79aec19-audit-policies\") pod \"oauth-openshift-558db77b4-q7qn6\" (UID: \"ebd2435f-03d5-4495-aec1-4118d79aec19\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.561744 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/79950cd5-1fde-4c05-8a15-8a1b2b745e28-profile-collector-cert\") pod \"catalog-operator-68c6474976-6zl8m\" (UID: \"79950cd5-1fde-4c05-8a15-8a1b2b745e28\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6zl8m" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.585977 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-sn2tj" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.702158 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2d466c6f-7f88-4a34-8e57-73b83db3e871-webhook-cert\") pod \"packageserver-d55dfcdfc-q6lc9\" (UID: \"2d466c6f-7f88-4a34-8e57-73b83db3e871\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q6lc9" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.702217 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbjsw\" (UniqueName: \"kubernetes.io/projected/2d466c6f-7f88-4a34-8e57-73b83db3e871-kube-api-access-wbjsw\") pod \"packageserver-d55dfcdfc-q6lc9\" (UID: \"2d466c6f-7f88-4a34-8e57-73b83db3e871\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q6lc9" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.702239 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d0748253-379d-4453-84cc-e9e8a9298217-cert\") pod \"ingress-canary-7kx4q\" (UID: \"d0748253-379d-4453-84cc-e9e8a9298217\") " pod="openshift-ingress-canary/ingress-canary-7kx4q" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.702282 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/08150f3e-0cfa-4c7d-b9af-0e2d288a7737-metrics-tls\") pod \"dns-operator-744455d44c-8v69v\" (UID: \"08150f3e-0cfa-4c7d-b9af-0e2d288a7737\") " pod="openshift-dns-operator/dns-operator-744455d44c-8v69v" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.702302 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c58a6a64-ed06-4f09-b2a6-a70569a308d7-signing-cabundle\") pod \"service-ca-9c57cc56f-5mpmr\" (UID: \"c58a6a64-ed06-4f09-b2a6-a70569a308d7\") " pod="openshift-service-ca/service-ca-9c57cc56f-5mpmr" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.702340 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8tzw\" (UniqueName: \"kubernetes.io/projected/08150f3e-0cfa-4c7d-b9af-0e2d288a7737-kube-api-access-t8tzw\") pod \"dns-operator-744455d44c-8v69v\" (UID: \"08150f3e-0cfa-4c7d-b9af-0e2d288a7737\") " pod="openshift-dns-operator/dns-operator-744455d44c-8v69v" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.702369 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2d466c6f-7f88-4a34-8e57-73b83db3e871-apiservice-cert\") pod \"packageserver-d55dfcdfc-q6lc9\" (UID: \"2d466c6f-7f88-4a34-8e57-73b83db3e871\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q6lc9" Jan 21 06:56:54 crc 
kubenswrapper[4893]: I0121 06:56:54.702396 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2d466c6f-7f88-4a34-8e57-73b83db3e871-tmpfs\") pod \"packageserver-d55dfcdfc-q6lc9\" (UID: \"2d466c6f-7f88-4a34-8e57-73b83db3e871\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q6lc9" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.702428 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfgpz\" (UniqueName: \"kubernetes.io/projected/c58a6a64-ed06-4f09-b2a6-a70569a308d7-kube-api-access-hfgpz\") pod \"service-ca-9c57cc56f-5mpmr\" (UID: \"c58a6a64-ed06-4f09-b2a6-a70569a308d7\") " pod="openshift-service-ca/service-ca-9c57cc56f-5mpmr" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.702459 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c58a6a64-ed06-4f09-b2a6-a70569a308d7-signing-key\") pod \"service-ca-9c57cc56f-5mpmr\" (UID: \"c58a6a64-ed06-4f09-b2a6-a70569a308d7\") " pod="openshift-service-ca/service-ca-9c57cc56f-5mpmr" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.702598 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npwdr\" (UniqueName: \"kubernetes.io/projected/d0748253-379d-4453-84cc-e9e8a9298217-kube-api-access-npwdr\") pod \"ingress-canary-7kx4q\" (UID: \"d0748253-379d-4453-84cc-e9e8a9298217\") " pod="openshift-ingress-canary/ingress-canary-7kx4q" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.703226 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2d466c6f-7f88-4a34-8e57-73b83db3e871-tmpfs\") pod \"packageserver-d55dfcdfc-q6lc9\" (UID: \"2d466c6f-7f88-4a34-8e57-73b83db3e871\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q6lc9" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.704731 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-q7qn6\" (UID: \"ebd2435f-03d5-4495-aec1-4118d79aec19\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.704983 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-q7qn6\" (UID: \"ebd2435f-03d5-4495-aec1-4118d79aec19\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.705019 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-q7qn6\" (UID: \"ebd2435f-03d5-4495-aec1-4118d79aec19\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.706173 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-user-template-error\") pod 
\"oauth-openshift-558db77b4-q7qn6\" (UID: \"ebd2435f-03d5-4495-aec1-4118d79aec19\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.717291 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.717531 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.718107 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.718330 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.718441 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.718544 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.718637 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.722636 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2298316-1d7b-4a7a-9813-170541b0e9d3-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-74trw\" (UID: \"d2298316-1d7b-4a7a-9813-170541b0e9d3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-74trw" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.724888 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2298316-1d7b-4a7a-9813-170541b0e9d3-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-74trw\" (UID: \"d2298316-1d7b-4a7a-9813-170541b0e9d3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-74trw" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.729681 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-q7qn6\" (UID: \"ebd2435f-03d5-4495-aec1-4118d79aec19\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.729838 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.730727 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c1c13fcc-9ed6-4129-afc2-4f9d53716929-metrics-tls\") pod \"ingress-operator-5b745b69d9-624tj\" (UID: \"c1c13fcc-9ed6-4129-afc2-4f9d53716929\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-624tj" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.734940 4893 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-q7qn6\" (UID: \"ebd2435f-03d5-4495-aec1-4118d79aec19\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.736598 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.741459 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmcfx\" (UniqueName: \"kubernetes.io/projected/9c222d2f-cc26-4a57-a8e6-5a5e904b22f7-kube-api-access-cmcfx\") pod \"apiserver-7bbb656c7d-gvjgx\" (UID: \"9c222d2f-cc26-4a57-a8e6-5a5e904b22f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvjgx" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.742743 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c1c13fcc-9ed6-4129-afc2-4f9d53716929-trusted-ca\") pod \"ingress-operator-5b745b69d9-624tj\" (UID: \"c1c13fcc-9ed6-4129-afc2-4f9d53716929\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-624tj" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.746306 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-q7qn6\" (UID: \"ebd2435f-03d5-4495-aec1-4118d79aec19\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.750287 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.752800 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/4bcd60fb-e145-4182-9bf9-fff7920936a6-machine-approver-tls\") pod \"machine-approver-56656f9798-r9ps2\" (UID: \"4bcd60fb-e145-4182-9bf9-fff7920936a6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r9ps2" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.765573 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.790782 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.805404 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.807017 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bcd60fb-e145-4182-9bf9-fff7920936a6-config\") pod \"machine-approver-56656f9798-r9ps2\" (UID: \"4bcd60fb-e145-4182-9bf9-fff7920936a6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r9ps2" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.829264 4893 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.831718 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4bcd60fb-e145-4182-9bf9-fff7920936a6-auth-proxy-config\") pod \"machine-approver-56656f9798-r9ps2\" (UID: \"4bcd60fb-e145-4182-9bf9-fff7920936a6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r9ps2" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.845786 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.866271 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.870098 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6b9f4d87-4d3b-47aa-a5cc-64167a6a0f30-images\") pod \"machine-config-operator-74547568cd-d7dhc\" (UID: \"6b9f4d87-4d3b-47aa-a5cc-64167a6a0f30\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-d7dhc" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.885827 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.889787 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6b9f4d87-4d3b-47aa-a5cc-64167a6a0f30-proxy-tls\") pod \"machine-config-operator-74547568cd-d7dhc\" (UID: \"6b9f4d87-4d3b-47aa-a5cc-64167a6a0f30\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-d7dhc" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.905851 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.937721 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.937978 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvjgx" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.949755 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/949c0965-b10c-4608-b2d0-effa8e19dff1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zpm9z\" (UID: \"949c0965-b10c-4608-b2d0-effa8e19dff1\") " pod="openshift-marketplace/marketplace-operator-79b997595-zpm9z" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.949773 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.967610 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.984404 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/949c0965-b10c-4608-b2d0-effa8e19dff1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zpm9z\" (UID: \"949c0965-b10c-4608-b2d0-effa8e19dff1\") " pod="openshift-marketplace/marketplace-operator-79b997595-zpm9z" Jan 21 06:56:54 crc kubenswrapper[4893]: I0121 06:56:54.986805 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.009856 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zggq2"] Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.041794 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.047762 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-jc8jx"] Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.049244 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.053835 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/babcfbd6-7579-4d7a-9bbb-38b759d8b273-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-t7bmk\" (UID: \"babcfbd6-7579-4d7a-9bbb-38b759d8b273\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-t7bmk" Jan 21 06:56:55 crc kubenswrapper[4893]: W0121 06:56:55.061127 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52dc45a6_094c_4330_b824_0e46bd30416b.slice/crio-3330fa915537f2ae7ff945055be7e25078e09fbd49787e405de92b5c277cf6b0 WatchSource:0}: Error finding container 3330fa915537f2ae7ff945055be7e25078e09fbd49787e405de92b5c277cf6b0: Status 404 returned error can't find the container with id 3330fa915537f2ae7ff945055be7e25078e09fbd49787e405de92b5c277cf6b0 Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.065224 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 21 06:56:55 crc 
kubenswrapper[4893]: I0121 06:56:55.074364 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/79950cd5-1fde-4c05-8a15-8a1b2b745e28-srv-cert\") pod \"catalog-operator-68c6474976-6zl8m\" (UID: \"79950cd5-1fde-4c05-8a15-8a1b2b745e28\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6zl8m"
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.076314 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-2k4nh"]
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.091609 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.105972 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.119116 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/118d8602-b5ce-4a7c-bf0c-17d74ce7ebda-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-p9bnb\" (UID: \"118d8602-b5ce-4a7c-bf0c-17d74ce7ebda\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-p9bnb"
Jan 21 06:56:55 crc kubenswrapper[4893]: W0121 06:56:55.123920 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod198d5d30_97a4_4cc4_85be_4d930e84c2c6.slice/crio-13f199120854bc21f43a6d99a8a84913826983929502e5682c73c710429cd826 WatchSource:0}: Error finding container 13f199120854bc21f43a6d99a8a84913826983929502e5682c73c710429cd826: Status 404 returned error can't find the container with id 13f199120854bc21f43a6d99a8a84913826983929502e5682c73c710429cd826
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.127072 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.130124 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2d466c6f-7f88-4a34-8e57-73b83db3e871-webhook-cert\") pod \"packageserver-d55dfcdfc-q6lc9\" (UID: \"2d466c6f-7f88-4a34-8e57-73b83db3e871\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q6lc9"
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.138096 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2d466c6f-7f88-4a34-8e57-73b83db3e871-apiservice-cert\") pod \"packageserver-d55dfcdfc-q6lc9\" (UID: \"2d466c6f-7f88-4a34-8e57-73b83db3e871\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q6lc9"
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.147795 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.167645 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.183209 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-sn2tj"]
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.186364 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 21 06:56:55 crc kubenswrapper[4893]: W0121 06:56:55.200524 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcdd5c076_53ec_47bd_9cc3_df75e06b4942.slice/crio-fa4f2682bce3be22a9a8e0d65292cd9fcee883ec1501781ad3f21d738c068de7 WatchSource:0}: Error finding container fa4f2682bce3be22a9a8e0d65292cd9fcee883ec1501781ad3f21d738c068de7: Status 404 returned error can't find the container with id fa4f2682bce3be22a9a8e0d65292cd9fcee883ec1501781ad3f21d738c068de7
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.205514 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.225639 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.234385 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-gvjgx"]
Jan 21 06:56:55 crc kubenswrapper[4893]: W0121 06:56:55.245529 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c222d2f_cc26_4a57_a8e6_5a5e904b22f7.slice/crio-fec6753feae8c231c9046129c224b1821162312635b95a5538989ce48f185528 WatchSource:0}: Error finding container fec6753feae8c231c9046129c224b1821162312635b95a5538989ce48f185528: Status 404 returned error can't find the container with id fec6753feae8c231c9046129c224b1821162312635b95a5538989ce48f185528
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.247037 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.276044 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.285761 4893 request.go:700] Waited for 1.01020555s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.287298 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.308197 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.317983 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/08150f3e-0cfa-4c7d-b9af-0e2d288a7737-metrics-tls\") pod \"dns-operator-744455d44c-8v69v\" (UID: \"08150f3e-0cfa-4c7d-b9af-0e2d288a7737\") " pod="openshift-dns-operator/dns-operator-744455d44c-8v69v"
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.325773 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.345982 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.366926 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.386174 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.406184 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.425529 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.446287 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.466503 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.467218 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-sn2tj" event={"ID":"cdd5c076-53ec-47bd-9cc3-df75e06b4942","Type":"ContainerStarted","Data":"fa4f2682bce3be22a9a8e0d65292cd9fcee883ec1501781ad3f21d738c068de7"}
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.469226 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvjgx" event={"ID":"9c222d2f-cc26-4a57-a8e6-5a5e904b22f7","Type":"ContainerStarted","Data":"fec6753feae8c231c9046129c224b1821162312635b95a5538989ce48f185528"}
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.471160 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-2k4nh" event={"ID":"198d5d30-97a4-4cc4-85be-4d930e84c2c6","Type":"ContainerStarted","Data":"3573e629d3070fed409db3e04906ad3d91fa8878b8a360b7d6da62dfdbda3eae"}
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.471187 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-2k4nh" event={"ID":"198d5d30-97a4-4cc4-85be-4d930e84c2c6","Type":"ContainerStarted","Data":"13f199120854bc21f43a6d99a8a84913826983929502e5682c73c710429cd826"}
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.472592 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-jc8jx" event={"ID":"458a2b28-04ce-4c9f-840b-9130dfd79140","Type":"ContainerStarted","Data":"5c64aab44212087b7d287ac830de8263336d5840f4fc3cfd33b02e9a548159aa"}
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.472639 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-jc8jx" event={"ID":"458a2b28-04ce-4c9f-840b-9130dfd79140","Type":"ContainerStarted","Data":"6fbea57f85cd944c1d9344347921cec5bae5efaf20729a8454cdab5a2105d4f6"}
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.474131 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zggq2" event={"ID":"52dc45a6-094c-4330-b824-0e46bd30416b","Type":"ContainerStarted","Data":"dc3abd552ad32a1e9edf02b66ed29ec09a93edede5d608e16babeea15545928d"}
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.474171 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zggq2" event={"ID":"52dc45a6-094c-4330-b824-0e46bd30416b","Type":"ContainerStarted","Data":"3330fa915537f2ae7ff945055be7e25078e09fbd49787e405de92b5c277cf6b0"}
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.474396 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zggq2"
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.475845 4893 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-zggq2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body=
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.475908 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zggq2" podUID="52dc45a6-094c-4330-b824-0e46bd30416b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused"
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.485341 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.505399 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.526624 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.546998 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.570456 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.589777 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.627106 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.628390 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.651497 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.666356 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.686311 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 21 06:56:55 crc kubenswrapper[4893]: E0121 06:56:55.702760 4893 secret.go:188] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition
Jan 21 06:56:55 crc kubenswrapper[4893]: E0121 06:56:55.702761 4893 secret.go:188] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition
Jan 21 06:56:55 crc kubenswrapper[4893]: E0121 06:56:55.702970 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c58a6a64-ed06-4f09-b2a6-a70569a308d7-signing-key podName:c58a6a64-ed06-4f09-b2a6-a70569a308d7 nodeName:}" failed. No retries permitted until 2026-01-21 06:56:56.20290843 +0000 UTC m=+157.433254382 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/c58a6a64-ed06-4f09-b2a6-a70569a308d7-signing-key") pod "service-ca-9c57cc56f-5mpmr" (UID: "c58a6a64-ed06-4f09-b2a6-a70569a308d7") : failed to sync secret cache: timed out waiting for the condition
Jan 21 06:56:55 crc kubenswrapper[4893]: E0121 06:56:55.703009 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0748253-379d-4453-84cc-e9e8a9298217-cert podName:d0748253-379d-4453-84cc-e9e8a9298217 nodeName:}" failed. No retries permitted until 2026-01-21 06:56:56.202989372 +0000 UTC m=+157.433335364 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d0748253-379d-4453-84cc-e9e8a9298217-cert") pod "ingress-canary-7kx4q" (UID: "d0748253-379d-4453-84cc-e9e8a9298217") : failed to sync secret cache: timed out waiting for the condition
Jan 21 06:56:55 crc kubenswrapper[4893]: E0121 06:56:55.703477 4893 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed to sync configmap cache: timed out waiting for the condition
Jan 21 06:56:55 crc kubenswrapper[4893]: E0121 06:56:55.703556 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c58a6a64-ed06-4f09-b2a6-a70569a308d7-signing-cabundle podName:c58a6a64-ed06-4f09-b2a6-a70569a308d7 nodeName:}" failed. No retries permitted until 2026-01-21 06:56:56.20353919 +0000 UTC m=+157.433885092 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/c58a6a64-ed06-4f09-b2a6-a70569a308d7-signing-cabundle") pod "service-ca-9c57cc56f-5mpmr" (UID: "c58a6a64-ed06-4f09-b2a6-a70569a308d7") : failed to sync configmap cache: timed out waiting for the condition
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.705565 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.747685 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.749650 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.770055 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.786461 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.806031 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.825913 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.846467 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.866813 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.886868 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.905740 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.925590 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.948815 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.966464 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 21 06:56:55 crc kubenswrapper[4893]: I0121 06:56:55.986287 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.006477 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.025573 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.067070 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ac52ea93-e5ed-47a3-86ad-cf8f8146ca3f-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-mqpbh\" (UID: \"ac52ea93-e5ed-47a3-86ad-cf8f8146ca3f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mqpbh"
Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.246191 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mqpbh"
Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.247460 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c58a6a64-ed06-4f09-b2a6-a70569a308d7-signing-key\") pod \"service-ca-9c57cc56f-5mpmr\" (UID: \"c58a6a64-ed06-4f09-b2a6-a70569a308d7\") " pod="openshift-service-ca/service-ca-9c57cc56f-5mpmr"
Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.247687 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d0748253-379d-4453-84cc-e9e8a9298217-cert\") pod \"ingress-canary-7kx4q\" (UID: \"d0748253-379d-4453-84cc-e9e8a9298217\") " pod="openshift-ingress-canary/ingress-canary-7kx4q"
Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.247729 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c58a6a64-ed06-4f09-b2a6-a70569a308d7-signing-cabundle\") pod \"service-ca-9c57cc56f-5mpmr\" (UID: \"c58a6a64-ed06-4f09-b2a6-a70569a308d7\") " pod="openshift-service-ca/service-ca-9c57cc56f-5mpmr"
Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.248743 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c58a6a64-ed06-4f09-b2a6-a70569a308d7-signing-cabundle\") pod \"service-ca-9c57cc56f-5mpmr\" (UID: \"c58a6a64-ed06-4f09-b2a6-a70569a308d7\") " pod="openshift-service-ca/service-ca-9c57cc56f-5mpmr"
Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.250691 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-przxs\" (UniqueName: \"kubernetes.io/projected/8149e5c6-d45e-408f-9e4e-4ead349e063d-kube-api-access-przxs\") pod \"console-operator-58897d9998-b57tt\" (UID: \"8149e5c6-d45e-408f-9e4e-4ead349e063d\") " pod="openshift-console-operator/console-operator-58897d9998-b57tt"
Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.252044 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c58a6a64-ed06-4f09-b2a6-a70569a308d7-signing-key\") pod \"service-ca-9c57cc56f-5mpmr\" (UID: \"c58a6a64-ed06-4f09-b2a6-a70569a308d7\") " pod="openshift-service-ca/service-ca-9c57cc56f-5mpmr"
Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.253800 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d0748253-379d-4453-84cc-e9e8a9298217-cert\") pod \"ingress-canary-7kx4q\" (UID: \"d0748253-379d-4453-84cc-e9e8a9298217\") " pod="openshift-ingress-canary/ingress-canary-7kx4q"
Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.275594 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c002ad61-0d90-47ff-8bc5-58826a3189d4-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-zgm8x\" (UID: \"c002ad61-0d90-47ff-8bc5-58826a3189d4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zgm8x"
Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.280099 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvtd4\" (UniqueName: \"kubernetes.io/projected/04581422-2f1a-4d3c-9e82-8f80435f6ece-kube-api-access-nvtd4\") pod \"openshift-apiserver-operator-796bbdcf4f-x4pxc\" (UID: \"04581422-2f1a-4d3c-9e82-8f80435f6ece\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x4pxc"
Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.285324 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ql6br\" (UniqueName: \"kubernetes.io/projected/3ab99a27-e16e-4f7b-a745-f478dd109a5c-kube-api-access-ql6br\") pod \"etcd-operator-b45778765-225db\" (UID: \"3ab99a27-e16e-4f7b-a745-f478dd109a5c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-225db"
Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.286353 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnmwk\" (UniqueName: \"kubernetes.io/projected/ae7cbcf6-fff8-49ac-9b9c-25cb136cbdd7-kube-api-access-dnmwk\") pod \"cluster-samples-operator-665b6dd947-n5fjj\" (UID: \"ae7cbcf6-fff8-49ac-9b9c-25cb136cbdd7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n5fjj"
Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.289069 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2dxv\" (UniqueName: \"kubernetes.io/projected/ccb5181f-bb5a-4a54-8ab1-9201addd4861-kube-api-access-k2dxv\") pod \"kube-storage-version-migrator-operator-b67b599dd-7vwdg\" (UID: \"ccb5181f-bb5a-4a54-8ab1-9201addd4861\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7vwdg"
Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.290039 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch7cj\" (UniqueName: \"kubernetes.io/projected/7b2fc626-d06a-4f0c-ad8c-931c6019a06a-kube-api-access-ch7cj\") pod \"openshift-config-operator-7777fb866f-nz9cw\" (UID: \"7b2fc626-d06a-4f0c-ad8c-931c6019a06a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-nz9cw"
Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.298275 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gb858\" (UniqueName: \"kubernetes.io/projected/13709215-5a7f-4c5d-aa52-749e06e40842-kube-api-access-gb858\") pod \"cluster-image-registry-operator-dc59b4c8b-hcjxb\" (UID: \"13709215-5a7f-4c5d-aa52-749e06e40842\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcjxb"
Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.303871 4893 request.go:700] Waited for 1.875530631s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/serviceaccounts/multus-ac/token
Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.322858 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67t2n\" (UniqueName: \"kubernetes.io/projected/2f43448f-6d99-4afb-8ba8-32cc10598f76-kube-api-access-67t2n\") pod \"authentication-operator-69f744f599-6925t\" (UID: \"2f43448f-6d99-4afb-8ba8-32cc10598f76\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6925t"
Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.326698 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s54sd\" (UniqueName: \"kubernetes.io/projected/be4fc165-16c3-442f-b61d-bec9bbeb9b0f-kube-api-access-s54sd\") pod \"multus-admission-controller-857f4d67dd-lszzb\" (UID: \"be4fc165-16c3-442f-b61d-bec9bbeb9b0f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-lszzb"
Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.348574 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/67a6e98a-88c0-4855-936c-09b7c6d33b40-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hmf6q\" (UID: \"67a6e98a-88c0-4855-936c-09b7c6d33b40\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hmf6q"
Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.363112 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qldvk\" (UniqueName: \"kubernetes.io/projected/ff046ea4-caba-480a-8242-eb099a1f136e-kube-api-access-qldvk\") pod \"collect-profiles-29482965-hgsf2\" (UID: \"ff046ea4-caba-480a-8242-eb099a1f136e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482965-hgsf2"
Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.363284 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2qs6\" (UniqueName: \"kubernetes.io/projected/5c435717-9f91-427d-ae9c-60db11c38d34-kube-api-access-x2qs6\") pod \"downloads-7954f5f757-rvfqv\" (UID: \"5c435717-9f91-427d-ae9c-60db11c38d34\") " pod="openshift-console/downloads-7954f5f757-rvfqv"
Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.366106 4893 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.367080 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/13709215-5a7f-4c5d-aa52-749e06e40842-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-hcjxb\" (UID: \"13709215-5a7f-4c5d-aa52-749e06e40842\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcjxb"
Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.375177 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7vwdg"
Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.386233 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.454071 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-b57tt"
Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.454240 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-6925t"
Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.454083 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcjxb"
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zgm8x" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.455103 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-lszzb" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.458860 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.479744 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcr4k\" (UniqueName: \"kubernetes.io/projected/4bcd60fb-e145-4182-9bf9-fff7920936a6-kube-api-access-fcr4k\") pod \"machine-approver-56656f9798-r9ps2\" (UID: \"4bcd60fb-e145-4182-9bf9-fff7920936a6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r9ps2" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.490246 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482965-hgsf2" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.500697 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hk5sx\" (UniqueName: \"kubernetes.io/projected/babcfbd6-7579-4d7a-9bbb-38b759d8b273-kube-api-access-hk5sx\") pod \"package-server-manager-789f6589d5-t7bmk\" (UID: \"babcfbd6-7579-4d7a-9bbb-38b759d8b273\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-t7bmk" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.502862 4893 generic.go:334] "Generic (PLEG): container finished" podID="9c222d2f-cc26-4a57-a8e6-5a5e904b22f7" containerID="5a8d29dd70c51c0f0a3d5ea60cc0be5b3fd18ce7681971b5bedc2a0e8c9ac719" exitCode=0 Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.502983 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvjgx" event={"ID":"9c222d2f-cc26-4a57-a8e6-5a5e904b22f7","Type":"ContainerDied","Data":"5a8d29dd70c51c0f0a3d5ea60cc0be5b3fd18ce7681971b5bedc2a0e8c9ac719"} Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.505522 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-jc8jx" event={"ID":"458a2b28-04ce-4c9f-840b-9130dfd79140","Type":"ContainerStarted","Data":"1f8e4dbc1b0cb53155b95e54fc896d1b5605a448158b12646940fb4f7229020d"} Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.507857 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlkpw\" (UniqueName: \"kubernetes.io/projected/49096e31-1633-4328-b7e2-a4e1d4391a5b-kube-api-access-tlkpw\") pod \"machine-config-controller-84d6567774-bcdvv\" (UID: \"49096e31-1633-4328-b7e2-a4e1d4391a5b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bcdvv" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.507952 4893 generic.go:334] "Generic (PLEG): container finished" podID="cdd5c076-53ec-47bd-9cc3-df75e06b4942" containerID="6495ec7fd3aa7120401b1afb6d5beee26504c8b50367e1eb3383fd6bbc972031" exitCode=0 Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.508055 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ts2z7\" (UniqueName: \"kubernetes.io/projected/118d8602-b5ce-4a7c-bf0c-17d74ce7ebda-kube-api-access-ts2z7\") pod \"control-plane-machine-set-operator-78cbb6b69f-p9bnb\" (UID: 
\"118d8602-b5ce-4a7c-bf0c-17d74ce7ebda\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-p9bnb" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.508076 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-sn2tj" event={"ID":"cdd5c076-53ec-47bd-9cc3-df75e06b4942","Type":"ContainerDied","Data":"6495ec7fd3aa7120401b1afb6d5beee26504c8b50367e1eb3383fd6bbc972031"} Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.514273 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hmf6q" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.527280 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n5fjj" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.536945 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x4pxc" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.550331 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8np75\" (UniqueName: \"kubernetes.io/projected/6b9f4d87-4d3b-47aa-a5cc-64167a6a0f30-kube-api-access-8np75\") pod \"machine-config-operator-74547568cd-d7dhc\" (UID: \"6b9f4d87-4d3b-47aa-a5cc-64167a6a0f30\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-d7dhc" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.550609 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-rvfqv" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.551153 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-d7dhc" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.621554 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-t7bmk" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.621876 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-225db" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.622036 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-nz9cw" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.622627 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bcdvv" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.632838 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxz7j\" (UniqueName: \"kubernetes.io/projected/79950cd5-1fde-4c05-8a15-8a1b2b745e28-kube-api-access-lxz7j\") pod \"catalog-operator-68c6474976-6zl8m\" (UID: \"79950cd5-1fde-4c05-8a15-8a1b2b745e28\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6zl8m" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.633286 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-p9bnb" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.635129 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2646\" (UniqueName: \"kubernetes.io/projected/d2298316-1d7b-4a7a-9813-170541b0e9d3-kube-api-access-f2646\") pod \"openshift-controller-manager-operator-756b6f6bc6-74trw\" (UID: \"d2298316-1d7b-4a7a-9813-170541b0e9d3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-74trw" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.642979 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wswnc\" (UniqueName: \"kubernetes.io/projected/949c0965-b10c-4608-b2d0-effa8e19dff1-kube-api-access-wswnc\") pod \"marketplace-operator-79b997595-zpm9z\" (UID: \"949c0965-b10c-4608-b2d0-effa8e19dff1\") " pod="openshift-marketplace/marketplace-operator-79b997595-zpm9z" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.644655 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4h79\" (UniqueName: \"kubernetes.io/projected/c1c13fcc-9ed6-4129-afc2-4f9d53716929-kube-api-access-r4h79\") pod \"ingress-operator-5b745b69d9-624tj\" (UID: \"c1c13fcc-9ed6-4129-afc2-4f9d53716929\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-624tj" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.650518 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zp8wg\" (UniqueName: \"kubernetes.io/projected/ebd2435f-03d5-4495-aec1-4118d79aec19-kube-api-access-zp8wg\") pod \"oauth-openshift-558db77b4-q7qn6\" (UID: \"ebd2435f-03d5-4495-aec1-4118d79aec19\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.655688 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zggq2" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.661110 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r9ps2" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.670243 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c1c13fcc-9ed6-4129-afc2-4f9d53716929-bound-sa-token\") pod \"ingress-operator-5b745b69d9-624tj\" (UID: \"c1c13fcc-9ed6-4129-afc2-4f9d53716929\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-624tj" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.673464 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbjsw\" (UniqueName: \"kubernetes.io/projected/2d466c6f-7f88-4a34-8e57-73b83db3e871-kube-api-access-wbjsw\") pod \"packageserver-d55dfcdfc-q6lc9\" (UID: \"2d466c6f-7f88-4a34-8e57-73b83db3e871\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q6lc9" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.815916 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8tzw\" (UniqueName: \"kubernetes.io/projected/08150f3e-0cfa-4c7d-b9af-0e2d288a7737-kube-api-access-t8tzw\") pod \"dns-operator-744455d44c-8v69v\" (UID: \"08150f3e-0cfa-4c7d-b9af-0e2d288a7737\") " pod="openshift-dns-operator/dns-operator-744455d44c-8v69v" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.820492 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npwdr\" (UniqueName: \"kubernetes.io/projected/d0748253-379d-4453-84cc-e9e8a9298217-kube-api-access-npwdr\") pod \"ingress-canary-7kx4q\" (UID: \"d0748253-379d-4453-84cc-e9e8a9298217\") " pod="openshift-ingress-canary/ingress-canary-7kx4q" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.835851 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfgpz\" (UniqueName: \"kubernetes.io/projected/c58a6a64-ed06-4f09-b2a6-a70569a308d7-kube-api-access-hfgpz\") pod \"service-ca-9c57cc56f-5mpmr\" (UID: \"c58a6a64-ed06-4f09-b2a6-a70569a308d7\") " pod="openshift-service-ca/service-ca-9c57cc56f-5mpmr" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.841470 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-5mpmr" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.844284 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7kx4q" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.882192 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zpm9z" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.918128 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-74trw" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.918524 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-624tj" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.919511 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/96180be5-06e7-4b23-80ab-1cbf4e162e67-metrics-certs\") pod \"router-default-5444994796-fhrs8\" (UID: \"96180be5-06e7-4b23-80ab-1cbf4e162e67\") " pod="openshift-ingress/router-default-5444994796-fhrs8" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.919567 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9b746e69-b4ab-4cba-8b09-7556ffc5cad9-ca-trust-extracted\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.919591 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7rs2\" (UniqueName: \"kubernetes.io/projected/d3add700-459a-4629-a5b1-efe434327719-kube-api-access-n7rs2\") pod \"olm-operator-6b444d44fb-rhprd\" (UID: \"d3add700-459a-4629-a5b1-efe434327719\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rhprd" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.919611 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b10f09b0-7978-4ebf-a5f6-c99737710b3f-metrics-tls\") pod \"dns-default-hmqwx\" (UID: \"b10f09b0-7978-4ebf-a5f6-c99737710b3f\") " pod="openshift-dns/dns-default-hmqwx" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.919774 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54db3d0a-b7a6-43db-a4a1-a9f363d0de87-serving-cert\") pod \"controller-manager-879f6c89f-cl27x\" (UID: \"54db3d0a-b7a6-43db-a4a1-a9f363d0de87\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cl27x" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.919817 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/54db3d0a-b7a6-43db-a4a1-a9f363d0de87-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-cl27x\" (UID: \"54db3d0a-b7a6-43db-a4a1-a9f363d0de87\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cl27x" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.919889 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6tlb\" (UniqueName: \"kubernetes.io/projected/33025f21-a66c-4bb2-809e-8de12fe71694-kube-api-access-p6tlb\") pod \"migrator-59844c95c7-976q5\" (UID: \"33025f21-a66c-4bb2-809e-8de12fe71694\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-976q5" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.919930 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b10f09b0-7978-4ebf-a5f6-c99737710b3f-config-volume\") pod \"dns-default-hmqwx\" (UID: \"b10f09b0-7978-4ebf-a5f6-c99737710b3f\") " pod="openshift-dns/dns-default-hmqwx" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.919985 4893 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.920007 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/793617a3-fd23-40b4-95f7-68e828f76816-serving-cert\") pod \"service-ca-operator-777779d784-xrz86\" (UID: \"793617a3-fd23-40b4-95f7-68e828f76816\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xrz86" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.920038 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1cd695f3-78bc-43fb-a2d0-9354a8972f69-node-bootstrap-token\") pod \"machine-config-server-jn2kf\" (UID: \"1cd695f3-78bc-43fb-a2d0-9354a8972f69\") " pod="openshift-machine-config-operator/machine-config-server-jn2kf" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.920071 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7v82\" (UniqueName: \"kubernetes.io/projected/793617a3-fd23-40b4-95f7-68e828f76816-kube-api-access-d7v82\") pod \"service-ca-operator-777779d784-xrz86\" (UID: \"793617a3-fd23-40b4-95f7-68e828f76816\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xrz86" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.920138 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9b746e69-b4ab-4cba-8b09-7556ffc5cad9-trusted-ca\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.920195 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96180be5-06e7-4b23-80ab-1cbf4e162e67-service-ca-bundle\") pod \"router-default-5444994796-fhrs8\" (UID: \"96180be5-06e7-4b23-80ab-1cbf4e162e67\") " pod="openshift-ingress/router-default-5444994796-fhrs8" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.920216 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnr2k\" (UniqueName: \"kubernetes.io/projected/1cd695f3-78bc-43fb-a2d0-9354a8972f69-kube-api-access-qnr2k\") pod \"machine-config-server-jn2kf\" (UID: \"1cd695f3-78bc-43fb-a2d0-9354a8972f69\") " pod="openshift-machine-config-operator/machine-config-server-jn2kf" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.920255 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d3add700-459a-4629-a5b1-efe434327719-srv-cert\") pod \"olm-operator-6b444d44fb-rhprd\" (UID: \"d3add700-459a-4629-a5b1-efe434327719\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rhprd" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.920272 4893 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/96180be5-06e7-4b23-80ab-1cbf4e162e67-stats-auth\") pod \"router-default-5444994796-fhrs8\" (UID: \"96180be5-06e7-4b23-80ab-1cbf4e162e67\") " pod="openshift-ingress/router-default-5444994796-fhrs8" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.920320 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.920399 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6d2r\" (UniqueName: \"kubernetes.io/projected/96180be5-06e7-4b23-80ab-1cbf4e162e67-kube-api-access-r6d2r\") pod \"router-default-5444994796-fhrs8\" (UID: \"96180be5-06e7-4b23-80ab-1cbf4e162e67\") " pod="openshift-ingress/router-default-5444994796-fhrs8" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.920467 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/54db3d0a-b7a6-43db-a4a1-a9f363d0de87-client-ca\") pod \"controller-manager-879f6c89f-cl27x\" (UID: \"54db3d0a-b7a6-43db-a4a1-a9f363d0de87\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cl27x" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.920537 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9b746e69-b4ab-4cba-8b09-7556ffc5cad9-registry-certificates\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.920581 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9b746e69-b4ab-4cba-8b09-7556ffc5cad9-bound-sa-token\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.920854 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9b746e69-b4ab-4cba-8b09-7556ffc5cad9-registry-tls\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.920996 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-976w8\" (UniqueName: \"kubernetes.io/projected/54db3d0a-b7a6-43db-a4a1-a9f363d0de87-kube-api-access-976w8\") pod \"controller-manager-879f6c89f-cl27x\" (UID: \"54db3d0a-b7a6-43db-a4a1-a9f363d0de87\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cl27x" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.921024 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/1cd695f3-78bc-43fb-a2d0-9354a8972f69-certs\") pod \"machine-config-server-jn2kf\" (UID: \"1cd695f3-78bc-43fb-a2d0-9354a8972f69\") " pod="openshift-machine-config-operator/machine-config-server-jn2kf" Jan 21 
06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.921095 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/96180be5-06e7-4b23-80ab-1cbf4e162e67-default-certificate\") pod \"router-default-5444994796-fhrs8\" (UID: \"96180be5-06e7-4b23-80ab-1cbf4e162e67\") " pod="openshift-ingress/router-default-5444994796-fhrs8" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.921117 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/793617a3-fd23-40b4-95f7-68e828f76816-config\") pod \"service-ca-operator-777779d784-xrz86\" (UID: \"793617a3-fd23-40b4-95f7-68e828f76816\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xrz86" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.921150 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54db3d0a-b7a6-43db-a4a1-a9f363d0de87-config\") pod \"controller-manager-879f6c89f-cl27x\" (UID: \"54db3d0a-b7a6-43db-a4a1-a9f363d0de87\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cl27x" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.921171 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d3add700-459a-4629-a5b1-efe434327719-profile-collector-cert\") pod \"olm-operator-6b444d44fb-rhprd\" (UID: \"d3add700-459a-4629-a5b1-efe434327719\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rhprd" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.921311 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phvjq\" (UniqueName: \"kubernetes.io/projected/b10f09b0-7978-4ebf-a5f6-c99737710b3f-kube-api-access-phvjq\") pod \"dns-default-hmqwx\" (UID: \"b10f09b0-7978-4ebf-a5f6-c99737710b3f\") " pod="openshift-dns/dns-default-hmqwx" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.921384 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9b746e69-b4ab-4cba-8b09-7556ffc5cad9-installation-pull-secrets\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:56:56 crc kubenswrapper[4893]: I0121 06:56:56.921414 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcthz\" (UniqueName: \"kubernetes.io/projected/9b746e69-b4ab-4cba-8b09-7556ffc5cad9-kube-api-access-bcthz\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.066412 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6tlb\" (UniqueName: \"kubernetes.io/projected/33025f21-a66c-4bb2-809e-8de12fe71694-kube-api-access-p6tlb\") pod \"migrator-59844c95c7-976q5\" (UID: \"33025f21-a66c-4bb2-809e-8de12fe71694\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-976q5" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.066466 4893 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b10f09b0-7978-4ebf-a5f6-c99737710b3f-config-volume\") pod \"dns-default-hmqwx\" (UID: \"b10f09b0-7978-4ebf-a5f6-c99737710b3f\") " pod="openshift-dns/dns-default-hmqwx" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.066569 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnr2k\" (UniqueName: \"kubernetes.io/projected/1cd695f3-78bc-43fb-a2d0-9354a8972f69-kube-api-access-qnr2k\") pod \"machine-config-server-jn2kf\" (UID: \"1cd695f3-78bc-43fb-a2d0-9354a8972f69\") " pod="openshift-machine-config-operator/machine-config-server-jn2kf" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.066587 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d3add700-459a-4629-a5b1-efe434327719-srv-cert\") pod \"olm-operator-6b444d44fb-rhprd\" (UID: \"d3add700-459a-4629-a5b1-efe434327719\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rhprd" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.066604 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/96180be5-06e7-4b23-80ab-1cbf4e162e67-stats-auth\") pod \"router-default-5444994796-fhrs8\" (UID: \"96180be5-06e7-4b23-80ab-1cbf4e162e67\") " pod="openshift-ingress/router-default-5444994796-fhrs8" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.072180 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/96180be5-06e7-4b23-80ab-1cbf4e162e67-stats-auth\") pod \"router-default-5444994796-fhrs8\" (UID: \"96180be5-06e7-4b23-80ab-1cbf4e162e67\") " pod="openshift-ingress/router-default-5444994796-fhrs8" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.090332 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b10f09b0-7978-4ebf-a5f6-c99737710b3f-config-volume\") pod \"dns-default-hmqwx\" (UID: \"b10f09b0-7978-4ebf-a5f6-c99737710b3f\") " pod="openshift-dns/dns-default-hmqwx" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.093841 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d3add700-459a-4629-a5b1-efe434327719-srv-cert\") pod \"olm-operator-6b444d44fb-rhprd\" (UID: \"d3add700-459a-4629-a5b1-efe434327719\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rhprd" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.121042 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q6lc9" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.128636 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6zl8m" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.131058 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-8v69v" Jan 21 06:56:57 crc kubenswrapper[4893]: E0121 06:56:57.184553 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
Jan 21 06:56:57 crc kubenswrapper[4893]: E0121 06:56:57.184553 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:56:57.68451709 +0000 UTC m=+158.914862992 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.186289 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:56:57 crc kubenswrapper[4893]: E0121 06:56:57.187067 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:56:57.687053611 +0000 UTC m=+158.917399533 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.187211 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54db3d0a-b7a6-43db-a4a1-a9f363d0de87-serving-cert\") pod \"controller-manager-879f6c89f-cl27x\" (UID: \"54db3d0a-b7a6-43db-a4a1-a9f363d0de87\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cl27x"
Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.187245 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/54db3d0a-b7a6-43db-a4a1-a9f363d0de87-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-cl27x\" (UID: \"54db3d0a-b7a6-43db-a4a1-a9f363d0de87\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cl27x"
Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.187355 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.187386 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/793617a3-fd23-40b4-95f7-68e828f76816-serving-cert\") pod \"service-ca-operator-777779d784-xrz86\" (UID: \"793617a3-fd23-40b4-95f7-68e828f76816\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xrz86"
Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.187411 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1cd695f3-78bc-43fb-a2d0-9354a8972f69-node-bootstrap-token\") pod \"machine-config-server-jn2kf\" (UID: \"1cd695f3-78bc-43fb-a2d0-9354a8972f69\") " pod="openshift-machine-config-operator/machine-config-server-jn2kf"
Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.187445 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7v82\" (UniqueName: \"kubernetes.io/projected/793617a3-fd23-40b4-95f7-68e828f76816-kube-api-access-d7v82\") pod \"service-ca-operator-777779d784-xrz86\" (UID: \"793617a3-fd23-40b4-95f7-68e828f76816\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xrz86"
Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.187517 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9b746e69-b4ab-4cba-8b09-7556ffc5cad9-trusted-ca\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.187548 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/37347f00-99de-4215-9d76-b5d4996b5cd4-registration-dir\") pod \"csi-hostpathplugin-46khb\" (UID: \"37347f00-99de-4215-9d76-b5d4996b5cd4\") " pod="hostpath-provisioner/csi-hostpathplugin-46khb"
Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.187703 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96180be5-06e7-4b23-80ab-1cbf4e162e67-service-ca-bundle\") pod \"router-default-5444994796-fhrs8\" (UID: \"96180be5-06e7-4b23-80ab-1cbf4e162e67\") " pod="openshift-ingress/router-default-5444994796-fhrs8"
Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.191904 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6d2r\" (UniqueName: \"kubernetes.io/projected/96180be5-06e7-4b23-80ab-1cbf4e162e67-kube-api-access-r6d2r\") pod \"router-default-5444994796-fhrs8\" (UID: \"96180be5-06e7-4b23-80ab-1cbf4e162e67\") " pod="openshift-ingress/router-default-5444994796-fhrs8"
Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.191970 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/54db3d0a-b7a6-43db-a4a1-a9f363d0de87-client-ca\") pod \"controller-manager-879f6c89f-cl27x\" (UID: \"54db3d0a-b7a6-43db-a4a1-a9f363d0de87\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cl27x"
Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.192134 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9b746e69-b4ab-4cba-8b09-7556ffc5cad9-registry-certificates\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.192196 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9b746e69-b4ab-4cba-8b09-7556ffc5cad9-bound-sa-token\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.192246 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/37347f00-99de-4215-9d76-b5d4996b5cd4-plugins-dir\") pod \"csi-hostpathplugin-46khb\" (UID: \"37347f00-99de-4215-9d76-b5d4996b5cd4\") " pod="hostpath-provisioner/csi-hostpathplugin-46khb"
Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.192334 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/37347f00-99de-4215-9d76-b5d4996b5cd4-mountpoint-dir\") pod \"csi-hostpathplugin-46khb\" (UID: \"37347f00-99de-4215-9d76-b5d4996b5cd4\") " pod="hostpath-provisioner/csi-hostpathplugin-46khb"
Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.192432 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9b746e69-b4ab-4cba-8b09-7556ffc5cad9-registry-tls\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.192579 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-976w8\" (UniqueName: \"kubernetes.io/projected/54db3d0a-b7a6-43db-a4a1-a9f363d0de87-kube-api-access-976w8\") pod \"controller-manager-879f6c89f-cl27x\" (UID: \"54db3d0a-b7a6-43db-a4a1-a9f363d0de87\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cl27x"
Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.192636 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/1cd695f3-78bc-43fb-a2d0-9354a8972f69-certs\") pod \"machine-config-server-jn2kf\" (UID: \"1cd695f3-78bc-43fb-a2d0-9354a8972f69\") " pod="openshift-machine-config-operator/machine-config-server-jn2kf"
Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.192763 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/96180be5-06e7-4b23-80ab-1cbf4e162e67-default-certificate\") pod \"router-default-5444994796-fhrs8\" (UID: \"96180be5-06e7-4b23-80ab-1cbf4e162e67\") " pod="openshift-ingress/router-default-5444994796-fhrs8"
Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.192804 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/793617a3-fd23-40b4-95f7-68e828f76816-config\") pod \"service-ca-operator-777779d784-xrz86\" (UID: \"793617a3-fd23-40b4-95f7-68e828f76816\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xrz86"
Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.192858 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54db3d0a-b7a6-43db-a4a1-a9f363d0de87-config\") pod \"controller-manager-879f6c89f-cl27x\" (UID: \"54db3d0a-b7a6-43db-a4a1-a9f363d0de87\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cl27x"
Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.192895 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d3add700-459a-4629-a5b1-efe434327719-profile-collector-cert\") pod \"olm-operator-6b444d44fb-rhprd\" (UID: \"d3add700-459a-4629-a5b1-efe434327719\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rhprd"
Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.192938 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/37347f00-99de-4215-9d76-b5d4996b5cd4-csi-data-dir\") pod \"csi-hostpathplugin-46khb\" (UID: \"37347f00-99de-4215-9d76-b5d4996b5cd4\") " pod="hostpath-provisioner/csi-hostpathplugin-46khb"
Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.192984 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phvjq\" (UniqueName: \"kubernetes.io/projected/b10f09b0-7978-4ebf-a5f6-c99737710b3f-kube-api-access-phvjq\") pod \"dns-default-hmqwx\" (UID: \"b10f09b0-7978-4ebf-a5f6-c99737710b3f\") " pod="openshift-dns/dns-default-hmqwx"
Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.193183 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9b746e69-b4ab-4cba-8b09-7556ffc5cad9-installation-pull-secrets\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.193211 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcthz\" (UniqueName: \"kubernetes.io/projected/9b746e69-b4ab-4cba-8b09-7556ffc5cad9-kube-api-access-bcthz\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.193238 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/37347f00-99de-4215-9d76-b5d4996b5cd4-socket-dir\") pod \"csi-hostpathplugin-46khb\" (UID: \"37347f00-99de-4215-9d76-b5d4996b5cd4\") " pod="hostpath-provisioner/csi-hostpathplugin-46khb"
Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.193299 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhphh\" (UniqueName: \"kubernetes.io/projected/37347f00-99de-4215-9d76-b5d4996b5cd4-kube-api-access-xhphh\") pod \"csi-hostpathplugin-46khb\" (UID: \"37347f00-99de-4215-9d76-b5d4996b5cd4\") " pod="hostpath-provisioner/csi-hostpathplugin-46khb"
Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.193372 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/96180be5-06e7-4b23-80ab-1cbf4e162e67-metrics-certs\") pod \"router-default-5444994796-fhrs8\" (UID: \"96180be5-06e7-4b23-80ab-1cbf4e162e67\") " pod="openshift-ingress/router-default-5444994796-fhrs8"
Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.193464 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9b746e69-b4ab-4cba-8b09-7556ffc5cad9-ca-trust-extracted\") pod
\"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.193519 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7rs2\" (UniqueName: \"kubernetes.io/projected/d3add700-459a-4629-a5b1-efe434327719-kube-api-access-n7rs2\") pod \"olm-operator-6b444d44fb-rhprd\" (UID: \"d3add700-459a-4629-a5b1-efe434327719\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rhprd" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.193572 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b10f09b0-7978-4ebf-a5f6-c99737710b3f-metrics-tls\") pod \"dns-default-hmqwx\" (UID: \"b10f09b0-7978-4ebf-a5f6-c99737710b3f\") " pod="openshift-dns/dns-default-hmqwx" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.194272 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnr2k\" (UniqueName: \"kubernetes.io/projected/1cd695f3-78bc-43fb-a2d0-9354a8972f69-kube-api-access-qnr2k\") pod \"machine-config-server-jn2kf\" (UID: \"1cd695f3-78bc-43fb-a2d0-9354a8972f69\") " pod="openshift-machine-config-operator/machine-config-server-jn2kf" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.195740 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9b746e69-b4ab-4cba-8b09-7556ffc5cad9-ca-trust-extracted\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.203906 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6tlb\" (UniqueName: \"kubernetes.io/projected/33025f21-a66c-4bb2-809e-8de12fe71694-kube-api-access-p6tlb\") pod \"migrator-59844c95c7-976q5\" (UID: \"33025f21-a66c-4bb2-809e-8de12fe71694\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-976q5" Jan 21 06:56:57 crc kubenswrapper[4893]: E0121 06:56:57.204789 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:56:57.704772432 +0000 UTC m=+158.935118334 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.224026 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1cd695f3-78bc-43fb-a2d0-9354a8972f69-node-bootstrap-token\") pod \"machine-config-server-jn2kf\" (UID: \"1cd695f3-78bc-43fb-a2d0-9354a8972f69\") " pod="openshift-machine-config-operator/machine-config-server-jn2kf" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.228161 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54db3d0a-b7a6-43db-a4a1-a9f363d0de87-serving-cert\") pod \"controller-manager-879f6c89f-cl27x\" (UID: \"54db3d0a-b7a6-43db-a4a1-a9f363d0de87\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cl27x" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.229355 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96180be5-06e7-4b23-80ab-1cbf4e162e67-service-ca-bundle\") pod \"router-default-5444994796-fhrs8\" (UID: \"96180be5-06e7-4b23-80ab-1cbf4e162e67\") " pod="openshift-ingress/router-default-5444994796-fhrs8" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.232755 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/54db3d0a-b7a6-43db-a4a1-a9f363d0de87-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-cl27x\" (UID: \"54db3d0a-b7a6-43db-a4a1-a9f363d0de87\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cl27x" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.235656 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/54db3d0a-b7a6-43db-a4a1-a9f363d0de87-client-ca\") pod \"controller-manager-879f6c89f-cl27x\" (UID: \"54db3d0a-b7a6-43db-a4a1-a9f363d0de87\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cl27x" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.248145 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9b746e69-b4ab-4cba-8b09-7556ffc5cad9-registry-certificates\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.249771 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9b746e69-b4ab-4cba-8b09-7556ffc5cad9-installation-pull-secrets\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.251060 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/96180be5-06e7-4b23-80ab-1cbf4e162e67-metrics-certs\") pod \"router-default-5444994796-fhrs8\" (UID: \"96180be5-06e7-4b23-80ab-1cbf4e162e67\") " pod="openshift-ingress/router-default-5444994796-fhrs8" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.275350 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d3add700-459a-4629-a5b1-efe434327719-profile-collector-cert\") pod \"olm-operator-6b444d44fb-rhprd\" (UID: \"d3add700-459a-4629-a5b1-efe434327719\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rhprd" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.349449 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54db3d0a-b7a6-43db-a4a1-a9f363d0de87-config\") pod \"controller-manager-879f6c89f-cl27x\" (UID: \"54db3d0a-b7a6-43db-a4a1-a9f363d0de87\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cl27x" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.349782 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.350692 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9b746e69-b4ab-4cba-8b09-7556ffc5cad9-bound-sa-token\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.351123 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/793617a3-fd23-40b4-95f7-68e828f76816-config\") pod \"service-ca-operator-777779d784-xrz86\" (UID: \"793617a3-fd23-40b4-95f7-68e828f76816\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xrz86" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.351849 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6d2r\" (UniqueName: \"kubernetes.io/projected/96180be5-06e7-4b23-80ab-1cbf4e162e67-kube-api-access-r6d2r\") pod \"router-default-5444994796-fhrs8\" (UID: \"96180be5-06e7-4b23-80ab-1cbf4e162e67\") " pod="openshift-ingress/router-default-5444994796-fhrs8" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.354508 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b10f09b0-7978-4ebf-a5f6-c99737710b3f-metrics-tls\") pod \"dns-default-hmqwx\" (UID: \"b10f09b0-7978-4ebf-a5f6-c99737710b3f\") " pod="openshift-dns/dns-default-hmqwx" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.357878 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-976q5" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.359284 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/793617a3-fd23-40b4-95f7-68e828f76816-serving-cert\") pod \"service-ca-operator-777779d784-xrz86\" (UID: \"793617a3-fd23-40b4-95f7-68e828f76816\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xrz86" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.361066 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9b746e69-b4ab-4cba-8b09-7556ffc5cad9-registry-tls\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.361149 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcthz\" (UniqueName: \"kubernetes.io/projected/9b746e69-b4ab-4cba-8b09-7556ffc5cad9-kube-api-access-bcthz\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:56:57 crc kubenswrapper[4893]: E0121 06:56:57.361593 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:56:57.861553284 +0000 UTC m=+159.091899196 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.361804 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/37347f00-99de-4215-9d76-b5d4996b5cd4-csi-data-dir\") pod \"csi-hostpathplugin-46khb\" (UID: \"37347f00-99de-4215-9d76-b5d4996b5cd4\") " pod="hostpath-provisioner/csi-hostpathplugin-46khb" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.362563 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-976w8\" (UniqueName: \"kubernetes.io/projected/54db3d0a-b7a6-43db-a4a1-a9f363d0de87-kube-api-access-976w8\") pod \"controller-manager-879f6c89f-cl27x\" (UID: \"54db3d0a-b7a6-43db-a4a1-a9f363d0de87\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cl27x" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.363096 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/37347f00-99de-4215-9d76-b5d4996b5cd4-socket-dir\") pod \"csi-hostpathplugin-46khb\" (UID: \"37347f00-99de-4215-9d76-b5d4996b5cd4\") " pod="hostpath-provisioner/csi-hostpathplugin-46khb" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.363203 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhphh\" (UniqueName: 
\"kubernetes.io/projected/37347f00-99de-4215-9d76-b5d4996b5cd4-kube-api-access-xhphh\") pod \"csi-hostpathplugin-46khb\" (UID: \"37347f00-99de-4215-9d76-b5d4996b5cd4\") " pod="hostpath-provisioner/csi-hostpathplugin-46khb" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.363363 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.363461 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/37347f00-99de-4215-9d76-b5d4996b5cd4-registration-dir\") pod \"csi-hostpathplugin-46khb\" (UID: \"37347f00-99de-4215-9d76-b5d4996b5cd4\") " pod="hostpath-provisioner/csi-hostpathplugin-46khb" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.363610 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/37347f00-99de-4215-9d76-b5d4996b5cd4-plugins-dir\") pod \"csi-hostpathplugin-46khb\" (UID: \"37347f00-99de-4215-9d76-b5d4996b5cd4\") " pod="hostpath-provisioner/csi-hostpathplugin-46khb" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.364086 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/1cd695f3-78bc-43fb-a2d0-9354a8972f69-certs\") pod \"machine-config-server-jn2kf\" (UID: \"1cd695f3-78bc-43fb-a2d0-9354a8972f69\") " pod="openshift-machine-config-operator/machine-config-server-jn2kf" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.364766 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/37347f00-99de-4215-9d76-b5d4996b5cd4-plugins-dir\") pod \"csi-hostpathplugin-46khb\" (UID: \"37347f00-99de-4215-9d76-b5d4996b5cd4\") " pod="hostpath-provisioner/csi-hostpathplugin-46khb" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.364770 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7v82\" (UniqueName: \"kubernetes.io/projected/793617a3-fd23-40b4-95f7-68e828f76816-kube-api-access-d7v82\") pod \"service-ca-operator-777779d784-xrz86\" (UID: \"793617a3-fd23-40b4-95f7-68e828f76816\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xrz86" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.365008 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/37347f00-99de-4215-9d76-b5d4996b5cd4-registration-dir\") pod \"csi-hostpathplugin-46khb\" (UID: \"37347f00-99de-4215-9d76-b5d4996b5cd4\") " pod="hostpath-provisioner/csi-hostpathplugin-46khb" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.365273 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/37347f00-99de-4215-9d76-b5d4996b5cd4-socket-dir\") pod \"csi-hostpathplugin-46khb\" (UID: \"37347f00-99de-4215-9d76-b5d4996b5cd4\") " pod="hostpath-provisioner/csi-hostpathplugin-46khb" Jan 21 06:56:57 crc kubenswrapper[4893]: E0121 06:56:57.365655 4893 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:56:57.865636056 +0000 UTC m=+159.095981958 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.367141 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9b746e69-b4ab-4cba-8b09-7556ffc5cad9-trusted-ca\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.369468 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/96180be5-06e7-4b23-80ab-1cbf4e162e67-default-certificate\") pod \"router-default-5444994796-fhrs8\" (UID: \"96180be5-06e7-4b23-80ab-1cbf4e162e67\") " pod="openshift-ingress/router-default-5444994796-fhrs8" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.367663 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/37347f00-99de-4215-9d76-b5d4996b5cd4-csi-data-dir\") pod \"csi-hostpathplugin-46khb\" (UID: \"37347f00-99de-4215-9d76-b5d4996b5cd4\") " pod="hostpath-provisioner/csi-hostpathplugin-46khb" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.371207 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7rs2\" (UniqueName: \"kubernetes.io/projected/d3add700-459a-4629-a5b1-efe434327719-kube-api-access-n7rs2\") pod \"olm-operator-6b444d44fb-rhprd\" (UID: \"d3add700-459a-4629-a5b1-efe434327719\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rhprd" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.377279 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phvjq\" (UniqueName: \"kubernetes.io/projected/b10f09b0-7978-4ebf-a5f6-c99737710b3f-kube-api-access-phvjq\") pod \"dns-default-hmqwx\" (UID: \"b10f09b0-7978-4ebf-a5f6-c99737710b3f\") " pod="openshift-dns/dns-default-hmqwx" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.411488 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-cl27x" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.454242 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-jn2kf" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.455026 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-hmqwx" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.454294 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-fhrs8" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.456661 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xrz86" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.463063 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhphh\" (UniqueName: \"kubernetes.io/projected/37347f00-99de-4215-9d76-b5d4996b5cd4-kube-api-access-xhphh\") pod \"csi-hostpathplugin-46khb\" (UID: \"37347f00-99de-4215-9d76-b5d4996b5cd4\") " pod="hostpath-provisioner/csi-hostpathplugin-46khb" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.469399 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:56:57 crc kubenswrapper[4893]: E0121 06:56:57.469632 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:56:57.969598596 +0000 UTC m=+159.199944498 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.469732 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/37347f00-99de-4215-9d76-b5d4996b5cd4-mountpoint-dir\") pod \"csi-hostpathplugin-46khb\" (UID: \"37347f00-99de-4215-9d76-b5d4996b5cd4\") " pod="hostpath-provisioner/csi-hostpathplugin-46khb" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.469838 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:56:57 crc kubenswrapper[4893]: E0121 06:56:57.470214 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:56:57.970200155 +0000 UTC m=+159.200546057 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.472550 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/37347f00-99de-4215-9d76-b5d4996b5cd4-mountpoint-dir\") pod \"csi-hostpathplugin-46khb\" (UID: \"37347f00-99de-4215-9d76-b5d4996b5cd4\") " pod="hostpath-provisioner/csi-hostpathplugin-46khb" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.479936 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mqpbh"] Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.564123 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rhprd" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.573242 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:56:57 crc kubenswrapper[4893]: E0121 06:56:57.579768 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:56:58.079719904 +0000 UTC m=+159.310065916 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.579857 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r9ps2" event={"ID":"4bcd60fb-e145-4182-9bf9-fff7920936a6","Type":"ContainerStarted","Data":"21bb5a603f5be6d367f02f09210ed64db5eab2e8c3c7847d264423854058d489"} Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.675023 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:56:57 crc kubenswrapper[4893]: E0121 06:56:57.677524 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-21 06:56:58.177502565 +0000 UTC m=+159.407848467 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.816510 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:56:57 crc kubenswrapper[4893]: E0121 06:56:57.816916 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:56:58.316899776 +0000 UTC m=+159.547245678 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.817028 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-46khb" Jan 21 06:56:57 crc kubenswrapper[4893]: I0121 06:56:57.927799 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:56:57 crc kubenswrapper[4893]: E0121 06:56:57.929617 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:56:58.429598967 +0000 UTC m=+159.659944869 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:56:58 crc kubenswrapper[4893]: I0121 06:56:58.051177 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:56:58 crc kubenswrapper[4893]: E0121 06:56:58.052001 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:56:58.551981941 +0000 UTC m=+159.782327843 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:56:58 crc kubenswrapper[4893]: I0121 06:56:58.153219 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:56:58 crc kubenswrapper[4893]: E0121 06:56:58.154581 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:56:58.654561436 +0000 UTC m=+159.884907348 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:56:58 crc kubenswrapper[4893]: W0121 06:56:58.247850 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1cd695f3_78bc_43fb_a2d0_9354a8972f69.slice/crio-0d211c29fda3332854cf1e313993bb4e23952099a187fe54567f1ea2cdd137d1 WatchSource:0}: Error finding container 0d211c29fda3332854cf1e313993bb4e23952099a187fe54567f1ea2cdd137d1: Status 404 returned error can't find the container with id 0d211c29fda3332854cf1e313993bb4e23952099a187fe54567f1ea2cdd137d1 Jan 21 06:56:58 crc kubenswrapper[4893]: I0121 06:56:58.254200 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:56:58 crc kubenswrapper[4893]: E0121 06:56:58.254769 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:56:58.754751905 +0000 UTC m=+159.985097807 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:56:58 crc kubenswrapper[4893]: W0121 06:56:58.271011 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96180be5_06e7_4b23_80ab_1cbf4e162e67.slice/crio-8e77bdaf6308ca5e4f402d9ec5c857a3387a74e2223ba89e56aba39506c5fde3 WatchSource:0}: Error finding container 8e77bdaf6308ca5e4f402d9ec5c857a3387a74e2223ba89e56aba39506c5fde3: Status 404 returned error can't find the container with id 8e77bdaf6308ca5e4f402d9ec5c857a3387a74e2223ba89e56aba39506c5fde3 Jan 21 06:56:58 crc kubenswrapper[4893]: I0121 06:56:58.379503 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:56:58 crc kubenswrapper[4893]: E0121 06:56:58.380352 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-21 06:56:58.880055072 +0000 UTC m=+160.110400974 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:56:58 crc kubenswrapper[4893]: I0121 06:56:58.482900 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:56:58 crc kubenswrapper[4893]: E0121 06:56:58.487027 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:56:58.987005418 +0000 UTC m=+160.217351320 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:56:58 crc kubenswrapper[4893]: I0121 06:56:58.529778 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:56:58 crc kubenswrapper[4893]: E0121 06:56:58.530627 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:56:59.030606033 +0000 UTC m=+160.260951945 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:56:58 crc kubenswrapper[4893]: I0121 06:56:58.634708 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:56:58 crc kubenswrapper[4893]: E0121 06:56:58.635354 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:56:59.135328827 +0000 UTC m=+160.365674729 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:56:58 crc kubenswrapper[4893]: I0121 06:56:58.657599 4893 patch_prober.go:28] interesting pod/machine-config-daemon-hg78p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 06:56:58 crc kubenswrapper[4893]: I0121 06:56:58.657802 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 06:56:58 crc kubenswrapper[4893]: I0121 06:56:58.685227 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-jn2kf" event={"ID":"1cd695f3-78bc-43fb-a2d0-9354a8972f69","Type":"ContainerStarted","Data":"0d211c29fda3332854cf1e313993bb4e23952099a187fe54567f1ea2cdd137d1"} Jan 21 06:56:58 crc kubenswrapper[4893]: I0121 06:56:58.688350 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mqpbh" event={"ID":"ac52ea93-e5ed-47a3-86ad-cf8f8146ca3f","Type":"ContainerStarted","Data":"e18c7dca0376c4f37b22ba53bd7f464a23e7177ec853af395d5a5adabaea3ad3"} Jan 21 06:56:58 crc kubenswrapper[4893]: I0121 06:56:58.699502 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-fhrs8" event={"ID":"96180be5-06e7-4b23-80ab-1cbf4e162e67","Type":"ContainerStarted","Data":"8e77bdaf6308ca5e4f402d9ec5c857a3387a74e2223ba89e56aba39506c5fde3"} Jan 21 06:56:58 crc kubenswrapper[4893]: I0121 06:56:58.704249 4893 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvjgx" event={"ID":"9c222d2f-cc26-4a57-a8e6-5a5e904b22f7","Type":"ContainerStarted","Data":"a7b04befaeb81b5278059785526238855ec3a12b393f4c2f4379c669ddeeab2c"} Jan 21 06:56:58 crc kubenswrapper[4893]: I0121 06:56:58.743559 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:56:58 crc kubenswrapper[4893]: E0121 06:56:58.743989 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:56:59.243961337 +0000 UTC m=+160.474307239 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:56:58 crc kubenswrapper[4893]: I0121 06:56:58.975408 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:56:58 crc kubenswrapper[4893]: E0121 06:56:58.978691 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:56:59.478652279 +0000 UTC m=+160.708998181 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:56:59 crc kubenswrapper[4893]: I0121 06:56:59.027587 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-jc8jx" podStartSLOduration=137.027462581 podStartE2EDuration="2m17.027462581s" podCreationTimestamp="2026-01-21 06:54:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:56:58.995393918 +0000 UTC m=+160.225739830" watchObservedRunningTime="2026-01-21 06:56:59.027462581 +0000 UTC m=+160.257808483" Jan 21 06:56:59 crc kubenswrapper[4893]: I0121 06:56:59.078549 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:56:59 crc kubenswrapper[4893]: E0121 06:56:59.080512 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:56:59.58049555 +0000 UTC m=+160.810841452 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:56:59 crc kubenswrapper[4893]: I0121 06:56:59.310295 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:56:59 crc kubenswrapper[4893]: E0121 06:56:59.310588 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:56:59.810565713 +0000 UTC m=+161.040911615 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:56:59 crc kubenswrapper[4893]: I0121 06:56:59.311085 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:56:59 crc kubenswrapper[4893]: E0121 06:56:59.311589 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:56:59.811575716 +0000 UTC m=+161.041921618 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:56:59 crc kubenswrapper[4893]: I0121 06:56:59.426915 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:56:59 crc kubenswrapper[4893]: E0121 06:56:59.427310 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:56:59.927266444 +0000 UTC m=+161.157612356 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:56:59 crc kubenswrapper[4893]: I0121 06:56:59.427446 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:56:59 crc kubenswrapper[4893]: E0121 06:56:59.427903 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:56:59.927894744 +0000 UTC m=+161.158240646 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:56:59 crc kubenswrapper[4893]: I0121 06:56:59.467583 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zggq2" podStartSLOduration=137.467540791 podStartE2EDuration="2m17.467540791s" podCreationTimestamp="2026-01-21 06:54:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:56:59.465565028 +0000 UTC m=+160.695910930" watchObservedRunningTime="2026-01-21 06:56:59.467540791 +0000 UTC m=+160.697886693"
Jan 21 06:56:59 crc kubenswrapper[4893]: I0121 06:56:59.529377 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:56:59 crc kubenswrapper[4893]: E0121 06:56:59.529547 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:00.029522879 +0000 UTC m=+161.259868781 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:56:59 crc kubenswrapper[4893]: I0121 06:56:59.529733 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:56:59 crc kubenswrapper[4893]: E0121 06:56:59.530134 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:00.030121198 +0000 UTC m=+161.260467100 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:56:59 crc kubenswrapper[4893]: I0121 06:56:59.630930 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:56:59 crc kubenswrapper[4893]: E0121 06:56:59.631359 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:00.131312218 +0000 UTC m=+161.361658120 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:56:59 crc kubenswrapper[4893]: I0121 06:56:59.640474 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-2k4nh" podStartSLOduration=138.640453033 podStartE2EDuration="2m18.640453033s" podCreationTimestamp="2026-01-21 06:54:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:56:59.623868449 +0000 UTC m=+160.854214351" watchObservedRunningTime="2026-01-21 06:56:59.640453033 +0000 UTC m=+160.870798935"
Jan 21 06:56:59 crc kubenswrapper[4893]: I0121 06:56:59.733365 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:56:59 crc kubenswrapper[4893]: E0121 06:56:59.733846 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:00.233827582 +0000 UTC m=+161.464173544 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:56:59 crc kubenswrapper[4893]: I0121 06:56:59.826299 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-lszzb"]
Jan 21 06:56:59 crc kubenswrapper[4893]: I0121 06:56:59.828880 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-b57tt"]
Jan 21 06:56:59 crc kubenswrapper[4893]: I0121 06:56:59.830775 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-6925t"]
Jan 21 06:56:59 crc kubenswrapper[4893]: I0121 06:56:59.833389 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-sn2tj" event={"ID":"cdd5c076-53ec-47bd-9cc3-df75e06b4942","Type":"ContainerStarted","Data":"6866aedf1f48a4cb5b722ac3b4df79534b59e8eb3aee0bc514a0e732b2140a63"}
Jan 21 06:56:59 crc kubenswrapper[4893]: I0121 06:56:59.844806 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:56:59 crc kubenswrapper[4893]: E0121 06:56:59.845307 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:00.345280083 +0000 UTC m=+161.575625985 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:56:59 crc kubenswrapper[4893]: I0121 06:56:59.845457 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:56:59 crc kubenswrapper[4893]: E0121 06:56:59.845936 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:00.345922893 +0000 UTC m=+161.576268795 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:56:59 crc kubenswrapper[4893]: I0121 06:56:59.938291 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvjgx"
Jan 21 06:56:59 crc kubenswrapper[4893]: I0121 06:56:59.938444 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvjgx"
Jan 21 06:56:59 crc kubenswrapper[4893]: I0121 06:56:59.940211 4893 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-gvjgx container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.7:8443/livez\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body=
Jan 21 06:56:59 crc kubenswrapper[4893]: I0121 06:56:59.940306 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvjgx" podUID="9c222d2f-cc26-4a57-a8e6-5a5e904b22f7" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.7:8443/livez\": dial tcp 10.217.0.7:8443: connect: connection refused"
Jan 21 06:56:59 crc kubenswrapper[4893]: I0121 06:56:59.947155 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:56:59 crc kubenswrapper[4893]: E0121 06:56:59.947384 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:00.447357482 +0000 UTC m=+161.677703384 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:56:59 crc kubenswrapper[4893]: I0121 06:56:59.947944 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:56:59 crc kubenswrapper[4893]: E0121 06:56:59.948424 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:00.448377635 +0000 UTC m=+161.678723537 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:00 crc kubenswrapper[4893]: I0121 06:57:00.049409 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:57:00 crc kubenswrapper[4893]: E0121 06:57:00.049617 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:00.549593656 +0000 UTC m=+161.779939558 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:00 crc kubenswrapper[4893]: I0121 06:57:00.049771 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:57:00 crc kubenswrapper[4893]: E0121 06:57:00.050167 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:00.550158494 +0000 UTC m=+161.780504396 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:00 crc kubenswrapper[4893]: I0121 06:57:00.058159 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7vwdg"]
Jan 21 06:57:00 crc kubenswrapper[4893]: I0121 06:57:00.068464 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n5fjj"]
Jan 21 06:57:00 crc kubenswrapper[4893]: I0121 06:57:00.172524 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:57:00 crc kubenswrapper[4893]: E0121 06:57:00.173874 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:00.67383997 +0000 UTC m=+161.904185872 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:00 crc kubenswrapper[4893]: I0121 06:57:00.300108 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:57:00 crc kubenswrapper[4893]: E0121 06:57:00.300478 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:00.800465794 +0000 UTC m=+162.030811696 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:00 crc kubenswrapper[4893]: I0121 06:57:00.329177 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvjgx" podStartSLOduration=138.329156567 podStartE2EDuration="2m18.329156567s" podCreationTimestamp="2026-01-21 06:54:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:57:00.32611893 +0000 UTC m=+161.556464832" watchObservedRunningTime="2026-01-21 06:57:00.329156567 +0000 UTC m=+161.559502469"
Jan 21 06:57:00 crc kubenswrapper[4893]: W0121 06:57:00.349443 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8149e5c6_d45e_408f_9e4e_4ead349e063d.slice/crio-37078c59659220bd77be669e6e0a2e4269257c873c853745f1202c6f7f254b16 WatchSource:0}: Error finding container 37078c59659220bd77be669e6e0a2e4269257c873c853745f1202c6f7f254b16: Status 404 returned error can't find the container with id 37078c59659220bd77be669e6e0a2e4269257c873c853745f1202c6f7f254b16
Jan 21 06:57:00 crc kubenswrapper[4893]: I0121 06:57:00.408284 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:57:00 crc kubenswrapper[4893]: E0121 06:57:00.408644 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:00.908626605 +0000 UTC m=+162.138972507 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:00 crc kubenswrapper[4893]: I0121 06:57:00.509503 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:57:00 crc kubenswrapper[4893]: E0121 06:57:00.509929 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:01.009908418 +0000 UTC m=+162.240254320 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:00 crc kubenswrapper[4893]: I0121 06:57:00.611062 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:57:00 crc kubenswrapper[4893]: E0121 06:57:00.611215 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:01.111193452 +0000 UTC m=+162.341539354 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:00 crc kubenswrapper[4893]: I0121 06:57:00.611369 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:57:00 crc kubenswrapper[4893]: E0121 06:57:00.611846 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:01.111832202 +0000 UTC m=+162.342178104 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:00 crc kubenswrapper[4893]: I0121 06:57:00.723102 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:57:00 crc kubenswrapper[4893]: E0121 06:57:00.724822 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:01.224756661 +0000 UTC m=+162.455102563 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:00 crc kubenswrapper[4893]: W0121 06:57:00.822532 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f43448f_6d99_4afb_8ba8_32cc10598f76.slice/crio-e7ca3acac8d56d297ab99ee40bb04a0a0f2b84b836f19b07c670611f2cf152f7 WatchSource:0}: Error finding container e7ca3acac8d56d297ab99ee40bb04a0a0f2b84b836f19b07c670611f2cf152f7: Status 404 returned error can't find the container with id e7ca3acac8d56d297ab99ee40bb04a0a0f2b84b836f19b07c670611f2cf152f7
Jan 21 06:57:00 crc kubenswrapper[4893]: I0121 06:57:00.827691 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:57:00 crc kubenswrapper[4893]: E0121 06:57:00.828151 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:01.328136262 +0000 UTC m=+162.558482174 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:00 crc kubenswrapper[4893]: I0121 06:57:00.928461 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:57:00 crc kubenswrapper[4893]: E0121 06:57:00.928719 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:01.428703793 +0000 UTC m=+162.659049695 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:00 crc kubenswrapper[4893]: I0121 06:57:00.932813 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-lszzb" event={"ID":"be4fc165-16c3-442f-b61d-bec9bbeb9b0f","Type":"ContainerStarted","Data":"e56adebd40d404d7c2a9ddc6aaed50354b23f12373b28954db220bd2e46e2fa2"}
Jan 21 06:57:00 crc kubenswrapper[4893]: I0121 06:57:00.934585 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-b57tt" event={"ID":"8149e5c6-d45e-408f-9e4e-4ead349e063d","Type":"ContainerStarted","Data":"37078c59659220bd77be669e6e0a2e4269257c873c853745f1202c6f7f254b16"}
Jan 21 06:57:00 crc kubenswrapper[4893]: I0121 06:57:00.936861 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r9ps2" event={"ID":"4bcd60fb-e145-4182-9bf9-fff7920936a6","Type":"ContainerStarted","Data":"f82ad4208a84779c18db01d2afe62216e6b7ff73a6a54a680ddf2f39030179f8"}
Jan 21 06:57:00 crc kubenswrapper[4893]: I0121 06:57:00.937988 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mqpbh" event={"ID":"ac52ea93-e5ed-47a3-86ad-cf8f8146ca3f","Type":"ContainerStarted","Data":"f7075e7d83bef548ac12c0642bd6b7595f5e370d4087c1b8e06ea757cc94cfc6"}
Jan 21 06:57:00 crc kubenswrapper[4893]: I0121 06:57:00.939847 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7vwdg" event={"ID":"ccb5181f-bb5a-4a54-8ab1-9201addd4861","Type":"ContainerStarted","Data":"644f5b9a70049285bd182cc4679af3567137f62fa76774cca0c218e847f06918"}
Jan 21 06:57:00 crc kubenswrapper[4893]: I0121 06:57:00.941228 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-6925t" event={"ID":"2f43448f-6d99-4afb-8ba8-32cc10598f76","Type":"ContainerStarted","Data":"e7ca3acac8d56d297ab99ee40bb04a0a0f2b84b836f19b07c670611f2cf152f7"}
Jan 21 06:57:00 crc kubenswrapper[4893]: I0121 06:57:00.951416 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-fhrs8" event={"ID":"96180be5-06e7-4b23-80ab-1cbf4e162e67","Type":"ContainerStarted","Data":"f872f92ed0ea63d69849f2c5435144da6639eaeab6c00033db0798420de1b220"}
Jan 21 06:57:00 crc kubenswrapper[4893]: I0121 06:57:00.958014 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-jn2kf" event={"ID":"1cd695f3-78bc-43fb-a2d0-9354a8972f69","Type":"ContainerStarted","Data":"3f4a5512e708d78a5414270a3656c6f60f54c9f5310080022e7e472b82b19b53"}
Jan 21 06:57:01 crc kubenswrapper[4893]: I0121 06:57:01.030537 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:57:01 crc kubenswrapper[4893]: E0121 06:57:01.034625 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:01.534606795 +0000 UTC m=+162.764952697 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:01 crc kubenswrapper[4893]: I0121 06:57:01.131396 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:57:01 crc kubenswrapper[4893]: E0121 06:57:01.132073 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:01.632016354 +0000 UTC m=+162.862362256 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:01 crc kubenswrapper[4893]: I0121 06:57:01.151816 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mqpbh" podStartSLOduration=139.151794371 podStartE2EDuration="2m19.151794371s" podCreationTimestamp="2026-01-21 06:54:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:57:01.074548562 +0000 UTC m=+162.304894464" watchObservedRunningTime="2026-01-21 06:57:01.151794371 +0000 UTC m=+162.382140273"
Jan 21 06:57:01 crc kubenswrapper[4893]: I0121 06:57:01.237395 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:57:01 crc kubenswrapper[4893]: E0121 06:57:01.238203 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:01.738181894 +0000 UTC m=+162.968527796 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:01 crc kubenswrapper[4893]: I0121 06:57:01.263296 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-fhrs8" podStartSLOduration=140.263258673 podStartE2EDuration="2m20.263258673s" podCreationTimestamp="2026-01-21 06:54:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:57:01.153037251 +0000 UTC m=+162.383383153" watchObservedRunningTime="2026-01-21 06:57:01.263258673 +0000 UTC m=+162.493604575"
Jan 21 06:57:01 crc kubenswrapper[4893]: I0121 06:57:01.265090 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-jn2kf" podStartSLOduration=8.265079301 podStartE2EDuration="8.265079301s" podCreationTimestamp="2026-01-21 06:56:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:57:01.222117097 +0000 UTC m=+162.452462999" watchObservedRunningTime="2026-01-21 06:57:01.265079301 +0000 UTC m=+162.495425203"
Jan 21 06:57:01 crc kubenswrapper[4893]: I0121 06:57:01.338485 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:57:01 crc kubenswrapper[4893]: E0121 06:57:01.339315 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:01.839293322 +0000 UTC m=+163.069639224 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:01 crc kubenswrapper[4893]: I0121 06:57:01.441627 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:57:01 crc kubenswrapper[4893]: E0121 06:57:01.442315 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:01.942290931 +0000 UTC m=+163.172636833 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:01 crc kubenswrapper[4893]: I0121 06:57:01.456078 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-fhrs8"
Jan 21 06:57:01 crc kubenswrapper[4893]: I0121 06:57:01.578175 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:57:01 crc kubenswrapper[4893]: E0121 06:57:01.578779 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:02.078742178 +0000 UTC m=+163.309088080 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:01 crc kubenswrapper[4893]: E0121 06:57:01.680976 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:02.180959042 +0000 UTC m=+163.411304944 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:01 crc kubenswrapper[4893]: I0121 06:57:01.681620 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:57:01 crc kubenswrapper[4893]: I0121 06:57:01.752960 4893 patch_prober.go:28] interesting pod/router-default-5444994796-fhrs8 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Jan 21 06:57:01 crc kubenswrapper[4893]: I0121 06:57:01.753077 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fhrs8" podUID="96180be5-06e7-4b23-80ab-1cbf4e162e67" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Jan 21 06:57:01 crc kubenswrapper[4893]: I0121 06:57:01.782431 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:57:01 crc kubenswrapper[4893]: E0121 06:57:01.782775 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:02.28271268 +0000 UTC m=+163.513058592 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:01 crc kubenswrapper[4893]: I0121 06:57:01.782864 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:57:01 crc kubenswrapper[4893]: E0121 06:57:01.783613 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:02.283573388 +0000 UTC m=+163.513919290 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:02 crc kubenswrapper[4893]: I0121 06:57:01.886934 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:57:02 crc kubenswrapper[4893]: E0121 06:57:01.887784 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:02.387759125 +0000 UTC m=+163.618105027 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:02 crc kubenswrapper[4893]: I0121 06:57:01.989381 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:57:02 crc kubenswrapper[4893]: E0121 06:57:01.990003 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:02.489982949 +0000 UTC m=+163.720328851 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:02 crc kubenswrapper[4893]: I0121 06:57:02.027357 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-b57tt" event={"ID":"8149e5c6-d45e-408f-9e4e-4ead349e063d","Type":"ContainerStarted","Data":"050556bc2a6715975d453a02c9e5613f6e4128ba1a9d38d77896feb54279c3b2"}
Jan 21 06:57:02 crc kubenswrapper[4893]: I0121 06:57:02.028179 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-b57tt"
Jan 21 06:57:02 crc kubenswrapper[4893]: I0121 06:57:02.030227 4893 patch_prober.go:28] interesting pod/console-operator-58897d9998-b57tt container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body=
Jan 21 06:57:02 crc kubenswrapper[4893]: I0121 06:57:02.030276 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-b57tt" podUID="8149e5c6-d45e-408f-9e4e-4ead349e063d" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused"
Jan 21 06:57:02 crc kubenswrapper[4893]: I0121 06:57:02.198268 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:57:02 crc kubenswrapper[4893]: E0121 06:57:02.199630 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:02.699610344 +0000 UTC m=+163.929956246 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:02 crc kubenswrapper[4893]: I0121 06:57:02.209449 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n5fjj" event={"ID":"ae7cbcf6-fff8-49ac-9b9c-25cb136cbdd7","Type":"ContainerStarted","Data":"11e06fffcf73e5965ba3235e229ac87eaeafab0f7f3349c58d67ea2476cd1b95"}
Jan 21 06:57:02 crc kubenswrapper[4893]: I0121 06:57:02.213730 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-sn2tj" event={"ID":"cdd5c076-53ec-47bd-9cc3-df75e06b4942","Type":"ContainerStarted","Data":"0c3aaa5dac1a7239f58860b3c2032b536e0d7a5df5a907d6f1445c23bc6787ce"}
Jan 21 06:57:02 crc kubenswrapper[4893]: I0121 06:57:02.219050 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-6925t" event={"ID":"2f43448f-6d99-4afb-8ba8-32cc10598f76","Type":"ContainerStarted","Data":"1b85d998badb0f51ce05b8ba3d3681adc4ab76845cd1ca06b4494f6e5b2b3d41"}
Jan 21 06:57:02 crc kubenswrapper[4893]: I0121 06:57:02.225995 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-b57tt" podStartSLOduration=141.225970852 podStartE2EDuration="2m21.225970852s" podCreationTimestamp="2026-01-21 06:54:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:57:02.225540228 +0000 UTC m=+163.455886120" watchObservedRunningTime="2026-01-21 06:57:02.225970852 +0000 UTC m=+163.456316754"
Jan 21 06:57:02 crc kubenswrapper[4893]: I0121 06:57:02.253715 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-6925t" podStartSLOduration=141.253697215 podStartE2EDuration="2m21.253697215s" podCreationTimestamp="2026-01-21 06:54:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:57:02.252261369 +0000 UTC m=+163.482607281" watchObservedRunningTime="2026-01-21 06:57:02.253697215 +0000 UTC m=+163.484043117"
Jan 21 06:57:02 crc kubenswrapper[4893]: I0121 06:57:02.399957 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:57:02 crc kubenswrapper[4893]: E0121 06:57:02.404869 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:02.904853566 +0000 UTC m=+164.135199468 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:02 crc kubenswrapper[4893]: I0121 06:57:02.408656 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-sn2tj" podStartSLOduration=141.408642678 podStartE2EDuration="2m21.408642678s" podCreationTimestamp="2026-01-21 06:54:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:57:02.407415198 +0000 UTC m=+163.637761100" watchObservedRunningTime="2026-01-21 06:57:02.408642678 +0000 UTC m=+163.638988580"
Jan 21 06:57:02 crc kubenswrapper[4893]: I0121 06:57:02.501663 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:57:02 crc kubenswrapper[4893]: E0121 06:57:02.501917 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:03.001895103 +0000 UTC m=+164.232241015 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:02 crc kubenswrapper[4893]: I0121 06:57:02.636765 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:57:02 crc kubenswrapper[4893]: E0121 06:57:02.637222 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:03.137203723 +0000 UTC m=+164.367549635 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:02 crc kubenswrapper[4893]: I0121 06:57:02.739153 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:57:02 crc kubenswrapper[4893]: E0121 06:57:02.739374 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:03.239344334 +0000 UTC m=+164.469690226 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:02 crc kubenswrapper[4893]: I0121 06:57:02.739507 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:57:02 crc kubenswrapper[4893]: E0121 06:57:02.739850 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:03.23984357 +0000 UTC m=+164.470189472 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:02 crc kubenswrapper[4893]: I0121 06:57:02.869222 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:57:02 crc kubenswrapper[4893]: E0121 06:57:02.869627 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:03.369610561 +0000 UTC m=+164.599956463 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:02 crc kubenswrapper[4893]: I0121 06:57:02.971335 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:57:02 crc kubenswrapper[4893]: E0121 06:57:02.971692 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:03.471656209 +0000 UTC m=+164.702002171 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:02 crc kubenswrapper[4893]: I0121 06:57:02.975691 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-p9bnb"]
Jan 21 06:57:02 crc kubenswrapper[4893]: I0121 06:57:02.988518 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hmf6q"]
Jan 21 06:57:03 crc kubenswrapper[4893]: I0121 06:57:02.998303 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcjxb"]
Jan 21 06:57:03 crc kubenswrapper[4893]: I0121 06:57:03.009078 4893 patch_prober.go:28] interesting pod/router-default-5444994796-fhrs8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 06:57:03 crc kubenswrapper[4893]: [-]has-synced failed: reason withheld
Jan 21 06:57:03 crc kubenswrapper[4893]: [+]process-running ok
Jan 21 06:57:03 crc kubenswrapper[4893]: healthz check failed
Jan 21 06:57:03 crc kubenswrapper[4893]: I0121 06:57:03.009146 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fhrs8" podUID="96180be5-06e7-4b23-80ab-1cbf4e162e67" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 06:57:03 crc kubenswrapper[4893]: W0121 06:57:03.012941 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67a6e98a_88c0_4855_936c_09b7c6d33b40.slice/crio-2a61de0e8bf147c193edb922ba1bb98da646d6c14b61133f2099a1c6b6cd07dd WatchSource:0}: Error finding container 2a61de0e8bf147c193edb922ba1bb98da646d6c14b61133f2099a1c6b6cd07dd: Status 404 returned error can't find the container with id 2a61de0e8bf147c193edb922ba1bb98da646d6c14b61133f2099a1c6b6cd07dd
Jan 21 06:57:03 crc kubenswrapper[4893]: I0121 06:57:03.027943 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zgm8x"]
Jan 21 06:57:03 crc kubenswrapper[4893]: I0121 06:57:03.048448 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482965-hgsf2"]
Jan 21 06:57:03 crc kubenswrapper[4893]: I0121 06:57:03.143830 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:57:03 crc kubenswrapper[4893]: E0121 06:57:03.144305 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed.
No retries permitted until 2026-01-21 06:57:03.644283411 +0000 UTC m=+164.874629313 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:03 crc kubenswrapper[4893]: I0121 06:57:03.248339 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:57:03 crc kubenswrapper[4893]: E0121 06:57:03.248783 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:03.748750678 +0000 UTC m=+164.979096580 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:03 crc kubenswrapper[4893]: I0121 06:57:03.267819 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r9ps2" event={"ID":"4bcd60fb-e145-4182-9bf9-fff7920936a6","Type":"ContainerStarted","Data":"dcf884b0d5b525419d7b48a313594bf4728c94372da66435bfe473b81c8b9a58"} Jan 21 06:57:03 crc kubenswrapper[4893]: I0121 06:57:03.349827 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:57:03 crc kubenswrapper[4893]: E0121 06:57:03.350993 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:03.850970731 +0000 UTC m=+165.081316633 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:03 crc kubenswrapper[4893]: I0121 06:57:03.416852 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hmf6q" event={"ID":"67a6e98a-88c0-4855-936c-09b7c6d33b40","Type":"ContainerStarted","Data":"2a61de0e8bf147c193edb922ba1bb98da646d6c14b61133f2099a1c6b6cd07dd"} Jan 21 06:57:03 crc kubenswrapper[4893]: I0121 06:57:03.430071 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r9ps2" podStartSLOduration=142.430046719 podStartE2EDuration="2m22.430046719s" podCreationTimestamp="2026-01-21 06:54:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:57:03.425045328 +0000 UTC m=+164.655391230" watchObservedRunningTime="2026-01-21 06:57:03.430046719 +0000 UTC m=+164.660392621" Jan 21 06:57:03 crc kubenswrapper[4893]: I0121 06:57:03.441017 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n5fjj" event={"ID":"ae7cbcf6-fff8-49ac-9b9c-25cb136cbdd7","Type":"ContainerStarted","Data":"a042bade31bdb60fcef9d18bea2796264a182643eae3f53ba49462826f66ee58"} Jan 21 06:57:03 crc kubenswrapper[4893]: I0121 06:57:03.455328 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:57:03 crc kubenswrapper[4893]: E0121 06:57:03.455692 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:03.955664545 +0000 UTC m=+165.186010447 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:03 crc kubenswrapper[4893]: I0121 06:57:03.524938 4893 patch_prober.go:28] interesting pod/router-default-5444994796-fhrs8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 06:57:03 crc kubenswrapper[4893]: [-]has-synced failed: reason withheld Jan 21 06:57:03 crc kubenswrapper[4893]: [+]process-running ok Jan 21 06:57:03 crc kubenswrapper[4893]: healthz check failed Jan 21 06:57:03 crc kubenswrapper[4893]: I0121 06:57:03.525004 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fhrs8" podUID="96180be5-06e7-4b23-80ab-1cbf4e162e67" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 06:57:03 crc kubenswrapper[4893]: I0121 06:57:03.538314 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-p9bnb" event={"ID":"118d8602-b5ce-4a7c-bf0c-17d74ce7ebda","Type":"ContainerStarted","Data":"1e345abddaa4affc13279e32a56370182d83ac4e06f05d0a3bc75d70e6cdf509"} Jan 21 06:57:03 crc kubenswrapper[4893]: I0121 06:57:03.619871 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:57:03 crc kubenswrapper[4893]: E0121 06:57:03.620211 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:04.120195296 +0000 UTC m=+165.350541188 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:03 crc kubenswrapper[4893]: I0121 06:57:03.672944 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zgm8x" event={"ID":"c002ad61-0d90-47ff-8bc5-58826a3189d4","Type":"ContainerStarted","Data":"e2a0d81fd7fd589fcb6f03e9753b25340b8fd4e4ec5952c88cb50d14eef61a1b"} Jan 21 06:57:03 crc kubenswrapper[4893]: I0121 06:57:03.687074 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7vwdg" event={"ID":"ccb5181f-bb5a-4a54-8ab1-9201addd4861","Type":"ContainerStarted","Data":"635eb42fbb93bfaadae67711dde751a5dde2e0fc926cd2c1bf13f2ad477be3aa"} Jan 21 06:57:03 crc kubenswrapper[4893]: I0121 06:57:03.712459 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcjxb" event={"ID":"13709215-5a7f-4c5d-aa52-749e06e40842","Type":"ContainerStarted","Data":"765b4e9890cc1ac11dbcccb8a0d8bda89945b67531d042be0de94a3124b58e2b"} Jan 21 06:57:03 crc kubenswrapper[4893]: I0121 06:57:03.729371 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:57:03 crc kubenswrapper[4893]: E0121 06:57:03.730005 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:04.229987484 +0000 UTC m=+165.460333386 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:03 crc kubenswrapper[4893]: I0121 06:57:03.733639 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-lszzb" event={"ID":"be4fc165-16c3-442f-b61d-bec9bbeb9b0f","Type":"ContainerStarted","Data":"82bcba1fe95e530c1cee5e0ec081a767dd8362e3da5bc6ca9f92c1f2282bcf6b"} Jan 21 06:57:03 crc kubenswrapper[4893]: I0121 06:57:03.733843 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-lszzb" event={"ID":"be4fc165-16c3-442f-b61d-bec9bbeb9b0f","Type":"ContainerStarted","Data":"c56398e9f9b3b048419c25e29135a6d4ae67f98c5e3a1e49d22369727545000f"} Jan 21 06:57:03 crc kubenswrapper[4893]: I0121 06:57:03.736420 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482965-hgsf2" event={"ID":"ff046ea4-caba-480a-8242-eb099a1f136e","Type":"ContainerStarted","Data":"b2256e98c2b16096c9a7209ec9b025bc25561fbc5762f443e94bc120159c0cd4"} Jan 21 06:57:03 crc kubenswrapper[4893]: I0121 06:57:03.740621 4893 patch_prober.go:28] interesting pod/console-operator-58897d9998-b57tt container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 21 06:57:03 crc kubenswrapper[4893]: I0121 06:57:03.740694 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-b57tt" podUID="8149e5c6-d45e-408f-9e4e-4ead349e063d" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 21 06:57:03 crc kubenswrapper[4893]: I0121 06:57:03.752597 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7vwdg" podStartSLOduration=141.752580722 podStartE2EDuration="2m21.752580722s" podCreationTimestamp="2026-01-21 06:54:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:57:03.726565243 +0000 UTC m=+164.956911135" watchObservedRunningTime="2026-01-21 06:57:03.752580722 +0000 UTC m=+164.982926624" Jan 21 06:57:03 crc kubenswrapper[4893]: I0121 06:57:03.754778 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-lszzb" podStartSLOduration=141.754771372 podStartE2EDuration="2m21.754771372s" podCreationTimestamp="2026-01-21 06:54:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:57:03.752165728 +0000 UTC m=+164.982511640" watchObservedRunningTime="2026-01-21 06:57:03.754771372 +0000 UTC m=+164.985117274" Jan 21 06:57:03 crc kubenswrapper[4893]: I0121 06:57:03.802614 4893 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-d7dhc"] Jan 21 06:57:03 crc kubenswrapper[4893]: I0121 06:57:03.807973 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-5mpmr"] Jan 21 06:57:03 crc kubenswrapper[4893]: I0121 06:57:03.826831 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-t7bmk"] Jan 21 06:57:03 crc kubenswrapper[4893]: I0121 06:57:03.830568 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:57:03 crc kubenswrapper[4893]: E0121 06:57:03.833823 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:04.333799659 +0000 UTC m=+165.564145561 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:03 crc kubenswrapper[4893]: I0121 06:57:03.836983 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-rvfqv"] Jan 21 06:57:03 crc kubenswrapper[4893]: I0121 06:57:03.857810 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-225db"] Jan 21 06:57:03 crc kubenswrapper[4893]: I0121 06:57:03.889607 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-74trw"] Jan 21 06:57:03 crc kubenswrapper[4893]: I0121 06:57:03.930204 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-q7qn6"] Jan 21 06:57:03 crc kubenswrapper[4893]: I0121 06:57:03.937954 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:57:03 crc kubenswrapper[4893]: E0121 06:57:03.943379 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:04.443362619 +0000 UTC m=+165.673708521 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:03 crc kubenswrapper[4893]: I0121 06:57:03.966126 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rhprd"] Jan 21 06:57:03 crc kubenswrapper[4893]: I0121 06:57:03.984193 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x4pxc"] Jan 21 06:57:04 crc kubenswrapper[4893]: I0121 06:57:04.010714 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-nz9cw"] Jan 21 06:57:04 crc kubenswrapper[4893]: I0121 06:57:04.012823 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cl27x"] Jan 21 06:57:04 crc kubenswrapper[4893]: I0121 06:57:04.065576 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:57:04 crc kubenswrapper[4893]: E0121 06:57:04.065923 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:04.565905708 +0000 UTC m=+165.796251610 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:04 crc kubenswrapper[4893]: I0121 06:57:04.169361 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:57:04 crc kubenswrapper[4893]: E0121 06:57:04.169972 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:04.66995651 +0000 UTC m=+165.900302412 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:04 crc kubenswrapper[4893]: I0121 06:57:04.214527 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zpm9z"] Jan 21 06:57:04 crc kubenswrapper[4893]: I0121 06:57:04.259516 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-2k4nh" Jan 21 06:57:04 crc kubenswrapper[4893]: I0121 06:57:04.260360 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-2k4nh" Jan 21 06:57:04 crc kubenswrapper[4893]: I0121 06:57:04.272885 4893 patch_prober.go:28] interesting pod/console-f9d7485db-2k4nh container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 21 06:57:04 crc kubenswrapper[4893]: I0121 06:57:04.273100 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-2k4nh" podUID="198d5d30-97a4-4cc4-85be-4d930e84c2c6" containerName="console" probeResult="failure" output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 21 06:57:04 crc kubenswrapper[4893]: W0121 06:57:04.272965 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b2fc626_d06a_4f0c_ad8c_931c6019a06a.slice/crio-973ac6c355e955769dc00d34fc399f1485ae128722cf2a9306a6740bb72dce7c WatchSource:0}: Error finding container 973ac6c355e955769dc00d34fc399f1485ae128722cf2a9306a6740bb72dce7c: Status 404 returned error can't find the container with id 973ac6c355e955769dc00d34fc399f1485ae128722cf2a9306a6740bb72dce7c Jan 21 06:57:04 crc kubenswrapper[4893]: I0121 06:57:04.273988 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:57:04 crc kubenswrapper[4893]: E0121 06:57:04.275159 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:04.775137159 +0000 UTC m=+166.005483061 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:04 crc kubenswrapper[4893]: I0121 06:57:04.277857 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-7kx4q"] Jan 21 06:57:04 crc kubenswrapper[4893]: I0121 06:57:04.279316 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-624tj"] Jan 21 06:57:04 crc kubenswrapper[4893]: I0121 06:57:04.289435 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-xrz86"] Jan 21 06:57:04 crc kubenswrapper[4893]: I0121 06:57:04.302553 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6zl8m"] Jan 21 06:57:04 crc kubenswrapper[4893]: I0121 06:57:04.318130 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-8v69v"] Jan 21 06:57:04 crc kubenswrapper[4893]: I0121 06:57:04.345174 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q6lc9"] Jan 21 06:57:04 crc kubenswrapper[4893]: W0121 06:57:04.353789 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod793617a3_fd23_40b4_95f7_68e828f76816.slice/crio-d4a7f244d36898299bd3e4a7a4ba0da1befe728ab4d05648c0733555b8392abb WatchSource:0}: Error finding container d4a7f244d36898299bd3e4a7a4ba0da1befe728ab4d05648c0733555b8392abb: Status 404 returned error can't find the container with id d4a7f244d36898299bd3e4a7a4ba0da1befe728ab4d05648c0733555b8392abb Jan 21 06:57:04 crc kubenswrapper[4893]: I0121 06:57:04.360331 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-bcdvv"] Jan 21 06:57:04 crc kubenswrapper[4893]: I0121 06:57:04.368715 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-46khb"] Jan 21 06:57:04 crc kubenswrapper[4893]: I0121 06:57:04.382116 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:57:04 crc kubenswrapper[4893]: E0121 06:57:04.382834 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:04.882819329 +0000 UTC m=+166.113165231 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:04 crc kubenswrapper[4893]: I0121 06:57:04.385037 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-976q5"] Jan 21 06:57:04 crc kubenswrapper[4893]: I0121 06:57:04.395547 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-hmqwx"] Jan 21 06:57:04 crc kubenswrapper[4893]: W0121 06:57:04.446177 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37347f00_99de_4215_9d76_b5d4996b5cd4.slice/crio-7843f3bb853765d2b5d9c33b7ae9f2eaa75707f0d2df2e33836b95f39593e500 WatchSource:0}: Error finding container 7843f3bb853765d2b5d9c33b7ae9f2eaa75707f0d2df2e33836b95f39593e500: Status 404 returned error can't find the container with id 7843f3bb853765d2b5d9c33b7ae9f2eaa75707f0d2df2e33836b95f39593e500 Jan 21 06:57:04 crc kubenswrapper[4893]: I0121 06:57:04.465894 4893 patch_prober.go:28] interesting pod/router-default-5444994796-fhrs8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 06:57:04 crc kubenswrapper[4893]: [-]has-synced failed: reason withheld Jan 21 06:57:04 crc kubenswrapper[4893]: [+]process-running ok Jan 21 06:57:04 crc kubenswrapper[4893]: healthz check failed Jan 21 06:57:04 crc kubenswrapper[4893]: I0121 06:57:04.465956 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fhrs8" podUID="96180be5-06e7-4b23-80ab-1cbf4e162e67" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 06:57:04 crc kubenswrapper[4893]: I0121 06:57:04.483186 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:57:04 crc kubenswrapper[4893]: E0121 06:57:04.483589 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:04.983572425 +0000 UTC m=+166.213918327 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:04 crc kubenswrapper[4893]: I0121 06:57:04.588090 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-sn2tj" Jan 21 06:57:04 crc kubenswrapper[4893]: I0121 06:57:04.588115 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-sn2tj" Jan 21 06:57:04 crc kubenswrapper[4893]: I0121 06:57:04.589015 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:57:04 crc kubenswrapper[4893]: E0121 06:57:04.589292 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:05.089281442 +0000 UTC m=+166.319627344 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:04 crc kubenswrapper[4893]: I0121 06:57:04.679379 4893 csr.go:261] certificate signing request csr-ptj5k is approved, waiting to be issued Jan 21 06:57:04 crc kubenswrapper[4893]: I0121 06:57:04.692150 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:57:04 crc kubenswrapper[4893]: E0121 06:57:04.692526 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:05.192507408 +0000 UTC m=+166.422853310 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:04 crc kubenswrapper[4893]: I0121 06:57:04.706919 4893 csr.go:257] certificate signing request csr-ptj5k is issued Jan 21 06:57:04 crc kubenswrapper[4893]: I0121 06:57:04.795016 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:57:04 crc kubenswrapper[4893]: E0121 06:57:04.795381 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:05.295369292 +0000 UTC m=+166.525715194 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:04 crc kubenswrapper[4893]: I0121 06:57:04.855282 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-hmqwx" event={"ID":"b10f09b0-7978-4ebf-a5f6-c99737710b3f","Type":"ContainerStarted","Data":"77decbd55da34f4dbc661aadb3d89510822d27531bb09d5bc335b565644ad467"} Jan 21 06:57:04 crc kubenswrapper[4893]: I0121 06:57:04.858226 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n5fjj" event={"ID":"ae7cbcf6-fff8-49ac-9b9c-25cb136cbdd7","Type":"ContainerStarted","Data":"d98aeb4a78991ccadc2b4f69f11c798242337ed178412ff103a02a86724c878b"} Jan 21 06:57:05 crc kubenswrapper[4893]: I0121 06:57:05.096937 4893 patch_prober.go:28] interesting pod/apiserver-76f77b778f-sn2tj container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 21 06:57:05 crc kubenswrapper[4893]: [+]log ok Jan 21 06:57:05 crc kubenswrapper[4893]: [+]etcd ok Jan 21 06:57:05 crc kubenswrapper[4893]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 21 06:57:05 crc kubenswrapper[4893]: [+]poststarthook/generic-apiserver-start-informers ok Jan 21 06:57:05 crc kubenswrapper[4893]: [+]poststarthook/max-in-flight-filter ok Jan 21 06:57:05 crc kubenswrapper[4893]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 21 06:57:05 crc kubenswrapper[4893]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 21 06:57:05 crc kubenswrapper[4893]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 21 06:57:05 crc kubenswrapper[4893]: 
[-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 21 06:57:05 crc kubenswrapper[4893]: [+]poststarthook/project.openshift.io-projectcache ok Jan 21 06:57:05 crc kubenswrapper[4893]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 21 06:57:05 crc kubenswrapper[4893]: [+]poststarthook/openshift.io-startinformers ok Jan 21 06:57:05 crc kubenswrapper[4893]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 21 06:57:05 crc kubenswrapper[4893]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 21 06:57:05 crc kubenswrapper[4893]: livez check failed Jan 21 06:57:05 crc kubenswrapper[4893]: I0121 06:57:05.097082 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-sn2tj" podUID="cdd5c076-53ec-47bd-9cc3-df75e06b4942" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 06:57:05 crc kubenswrapper[4893]: I0121 06:57:05.104237 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:57:05 crc kubenswrapper[4893]: E0121 06:57:05.104893 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:05.604827623 +0000 UTC m=+166.835173535 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:05 crc kubenswrapper[4893]: I0121 06:57:05.170032 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n5fjj" podStartSLOduration=144.170013164 podStartE2EDuration="2m24.170013164s" podCreationTimestamp="2026-01-21 06:54:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:57:05.154131802 +0000 UTC m=+166.384477704" watchObservedRunningTime="2026-01-21 06:57:05.170013164 +0000 UTC m=+166.400359066" Jan 21 06:57:05 crc kubenswrapper[4893]: I0121 06:57:05.210454 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvjgx" Jan 21 06:57:05 crc kubenswrapper[4893]: I0121 06:57:05.211233 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:57:05 crc kubenswrapper[4893]: E0121 06:57:05.211733 4893 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:05.711714968 +0000 UTC m=+166.942060870 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:05 crc kubenswrapper[4893]: I0121 06:57:05.220287 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gvjgx" Jan 21 06:57:05 crc kubenswrapper[4893]: I0121 06:57:05.314975 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:57:05 crc kubenswrapper[4893]: E0121 06:57:05.318075 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:05.818040684 +0000 UTC m=+167.048386596 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:05 crc kubenswrapper[4893]: I0121 06:57:05.336246 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6" event={"ID":"ebd2435f-03d5-4495-aec1-4118d79aec19","Type":"ContainerStarted","Data":"0e43e8123d21672175055698b133a4ec9e117e20f4b72ac1f95cc56705aec2fa"} Jan 21 06:57:05 crc kubenswrapper[4893]: I0121 06:57:05.446595 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:57:05 crc kubenswrapper[4893]: E0121 06:57:05.447405 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:05.947363941 +0000 UTC m=+167.177709843 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:05 crc kubenswrapper[4893]: I0121 06:57:05.521076 4893 patch_prober.go:28] interesting pod/router-default-5444994796-fhrs8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 06:57:05 crc kubenswrapper[4893]: [-]has-synced failed: reason withheld Jan 21 06:57:05 crc kubenswrapper[4893]: [+]process-running ok Jan 21 06:57:05 crc kubenswrapper[4893]: healthz check failed Jan 21 06:57:05 crc kubenswrapper[4893]: I0121 06:57:05.521132 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fhrs8" podUID="96180be5-06e7-4b23-80ab-1cbf4e162e67" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 06:57:05 crc kubenswrapper[4893]: I0121 06:57:05.595023 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6zl8m" event={"ID":"79950cd5-1fde-4c05-8a15-8a1b2b745e28","Type":"ContainerStarted","Data":"c5a83bd9378de5b0fe9ae9d5d230ef66dbbaf391083f1ccf29b00b0547dbf7f9"} Jan 21 06:57:05 crc kubenswrapper[4893]: I0121 06:57:05.606634 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xrz86" event={"ID":"793617a3-fd23-40b4-95f7-68e828f76816","Type":"ContainerStarted","Data":"d4a7f244d36898299bd3e4a7a4ba0da1befe728ab4d05648c0733555b8392abb"} Jan 21 06:57:05 crc kubenswrapper[4893]: I0121 06:57:05.606997 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:57:05 crc kubenswrapper[4893]: E0121 06:57:05.607261 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:06.107241132 +0000 UTC m=+167.337587034 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:05 crc kubenswrapper[4893]: I0121 06:57:05.632078 4893 patch_prober.go:28] interesting pod/downloads-7954f5f757-rvfqv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body=
Jan 21 06:57:05 crc kubenswrapper[4893]: I0121 06:57:05.632120 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-rvfqv" podUID="5c435717-9f91-427d-ae9c-60db11c38d34" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused"
Jan 21 06:57:05 crc kubenswrapper[4893]: I0121 06:57:05.710717 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:57:05 crc kubenswrapper[4893]: E0121 06:57:05.710986 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:06.210975145 +0000 UTC m=+167.441321047 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:05 crc kubenswrapper[4893]: I0121 06:57:05.797890 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-21 06:52:04 +0000 UTC, rotation deadline is 2026-10-10 08:13:13.691556153 +0000 UTC
Jan 21 06:57:05 crc kubenswrapper[4893]: I0121 06:57:05.797966 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6289h16m7.893594346s for next certificate rotation
Jan 21 06:57:05 crc kubenswrapper[4893]: I0121 06:57:05.815421 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:57:05 crc kubenswrapper[4893]: E0121 06:57:05.815792 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:06.315778191 +0000 UTC m=+167.546124093 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:05 crc kubenswrapper[4893]: I0121 06:57:05.823786 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-t7bmk" event={"ID":"babcfbd6-7579-4d7a-9bbb-38b759d8b273","Type":"ContainerStarted","Data":"8314eef7617fdfbb25b3ea19f55b3040cbe46102cdbfb2618fcf60375bd86574"}
Jan 21 06:57:05 crc kubenswrapper[4893]: I0121 06:57:05.823828 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-t7bmk" event={"ID":"babcfbd6-7579-4d7a-9bbb-38b759d8b273","Type":"ContainerStarted","Data":"79dd2996fcd94a837378e5aaf94fe79b752c57040e643878b26290f4366e3648"}
Jan 21 06:57:05 crc kubenswrapper[4893]: I0121 06:57:05.823843 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-rvfqv" event={"ID":"5c435717-9f91-427d-ae9c-60db11c38d34","Type":"ContainerStarted","Data":"a98b7211a541d712927867b7791c92820deedd5de706426b58e5e4c69cf83e68"}
Jan 21 06:57:05 crc kubenswrapper[4893]: I0121 06:57:05.823860 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-rvfqv" event={"ID":"5c435717-9f91-427d-ae9c-60db11c38d34","Type":"ContainerStarted","Data":"c1823c7941c8e6ba1b0a835805d3325be4b6373eab873191f393e1fdcb260029"}
Jan 21 06:57:05 crc kubenswrapper[4893]: I0121 06:57:05.823871 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bcdvv" event={"ID":"49096e31-1633-4328-b7e2-a4e1d4391a5b","Type":"ContainerStarted","Data":"00cab8d5b53c410c9be0612a3f1ac68a3338d4dd829dbed52313fa8869abe8d5"}
Jan 21 06:57:05 crc kubenswrapper[4893]: I0121 06:57:05.823895 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-rvfqv"
Jan 21 06:57:05 crc kubenswrapper[4893]: I0121 06:57:05.847306 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zpm9z" event={"ID":"949c0965-b10c-4608-b2d0-effa8e19dff1","Type":"ContainerStarted","Data":"6218074b9ae03f354de4cfcc6749275a4677d2f3ef928bd1e2056d67485f327e"}
Jan 21 06:57:05 crc kubenswrapper[4893]: I0121 06:57:05.862863 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hmf6q" event={"ID":"67a6e98a-88c0-4855-936c-09b7c6d33b40","Type":"ContainerStarted","Data":"7f3094d80a0ffc5a812defac2806ab286548e9b8fc0a5337cba7e3fdb0535493"}
Jan 21 06:57:06 crc kubenswrapper[4893]: I0121 06:57:05.886437 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-5mpmr" event={"ID":"c58a6a64-ed06-4f09-b2a6-a70569a308d7","Type":"ContainerStarted","Data":"5fc60051f8dc7b1d4a496b231eaeacc12265cb690329cf3a692a2cc0ed2a7c6d"}
Jan 21 06:57:06 crc kubenswrapper[4893]: I0121 06:57:05.919707 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8-metrics-certs\") pod \"network-metrics-daemon-rc5gb\" (UID: \"e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8\") " pod="openshift-multus/network-metrics-daemon-rc5gb"
Jan 21 06:57:06 crc kubenswrapper[4893]: I0121 06:57:05.919786 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:57:06 crc kubenswrapper[4893]: E0121 06:57:05.926443 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:06.426410525 +0000 UTC m=+167.656756427 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:06 crc kubenswrapper[4893]: I0121 06:57:05.938595 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hmf6q" podStartSLOduration=143.938577707 podStartE2EDuration="2m23.938577707s" podCreationTimestamp="2026-01-21 06:54:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:57:05.938128263 +0000 UTC m=+167.168474165" watchObservedRunningTime="2026-01-21 06:57:05.938577707 +0000 UTC m=+167.168923599"
Jan 21 06:57:06 crc kubenswrapper[4893]: I0121 06:57:05.938979 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-rvfqv" podStartSLOduration=144.93897375 podStartE2EDuration="2m24.93897375s" podCreationTimestamp="2026-01-21 06:54:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:57:05.690151854 +0000 UTC m=+166.920497756" watchObservedRunningTime="2026-01-21 06:57:05.93897375 +0000 UTC m=+167.169319652"
Jan 21 06:57:06 crc kubenswrapper[4893]: I0121 06:57:05.942329 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8-metrics-certs\") pod \"network-metrics-daemon-rc5gb\" (UID: \"e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8\") " pod="openshift-multus/network-metrics-daemon-rc5gb"
Jan 21 06:57:06 crc kubenswrapper[4893]: I0121 06:57:06.014657 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rc5gb"
Jan 21 06:57:06 crc kubenswrapper[4893]: I0121 06:57:06.020965 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:57:06 crc kubenswrapper[4893]: E0121 06:57:06.021399 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:06.521381935 +0000 UTC m=+167.751727837 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:06 crc kubenswrapper[4893]: I0121 06:57:06.028054 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-74trw" event={"ID":"d2298316-1d7b-4a7a-9813-170541b0e9d3","Type":"ContainerStarted","Data":"750ca38c5293e2f992bcd4fc2f766458746b13338782a42cdba8e312c36c6001"}
Jan 21 06:57:06 crc kubenswrapper[4893]: I0121 06:57:06.028106 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-74trw" event={"ID":"d2298316-1d7b-4a7a-9813-170541b0e9d3","Type":"ContainerStarted","Data":"4deea9a47cd735dbd7311c34530becc84ebfed8d3d1db6a1d88b78583e0c3feb"}
Jan 21 06:57:06 crc kubenswrapper[4893]: I0121 06:57:06.034632 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-5mpmr" podStartSLOduration=144.034605011 podStartE2EDuration="2m24.034605011s" podCreationTimestamp="2026-01-21 06:54:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:57:06.03238512 +0000 UTC m=+167.262731022" watchObservedRunningTime="2026-01-21 06:57:06.034605011 +0000 UTC m=+167.264950913"
Jan 21 06:57:06 crc kubenswrapper[4893]: I0121 06:57:06.058480 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-nz9cw" event={"ID":"7b2fc626-d06a-4f0c-ad8c-931c6019a06a","Type":"ContainerStarted","Data":"973ac6c355e955769dc00d34fc399f1485ae128722cf2a9306a6740bb72dce7c"}
Jan 21 06:57:06 crc kubenswrapper[4893]: I0121 06:57:06.060451 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-225db" event={"ID":"3ab99a27-e16e-4f7b-a745-f478dd109a5c","Type":"ContainerStarted","Data":"f957701ef4012c1b8a3dc90d0207cf36714d73e84b09ace6d30bd062febcfdbf"}
Jan 21 06:57:06 crc kubenswrapper[4893]: I0121 06:57:06.141504 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:57:06 crc kubenswrapper[4893]: E0121 06:57:06.142877 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:06.6428597 +0000 UTC m=+167.873205672 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:06 crc kubenswrapper[4893]: I0121 06:57:06.148349 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zgm8x" event={"ID":"c002ad61-0d90-47ff-8bc5-58826a3189d4","Type":"ContainerStarted","Data":"f579674cec1895260810e4e507f0b03df88f89af1f4cc14596a793967c16dc98"}
Jan 21 06:57:06 crc kubenswrapper[4893]: I0121 06:57:06.244757 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:57:06 crc kubenswrapper[4893]: E0121 06:57:06.245778 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:06.745758145 +0000 UTC m=+167.976104047 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:06 crc kubenswrapper[4893]: I0121 06:57:06.262698 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-74trw" podStartSLOduration=145.26266065 podStartE2EDuration="2m25.26266065s" podCreationTimestamp="2026-01-21 06:54:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:57:06.155883109 +0000 UTC m=+167.386229031" watchObservedRunningTime="2026-01-21 06:57:06.26266065 +0000 UTC m=+167.493006552"
Jan 21 06:57:06 crc kubenswrapper[4893]: I0121 06:57:06.264228 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-976q5" event={"ID":"33025f21-a66c-4bb2-809e-8de12fe71694","Type":"ContainerStarted","Data":"5d5492cc14a72c7d8f8f299e3b650a5a6c8f09cdf16cacc3ef06b50605598fb9"}
Jan 21 06:57:06 crc kubenswrapper[4893]: I0121 06:57:06.270192 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-46khb" event={"ID":"37347f00-99de-4215-9d76-b5d4996b5cd4","Type":"ContainerStarted","Data":"7843f3bb853765d2b5d9c33b7ae9f2eaa75707f0d2df2e33836b95f39593e500"}
Jan 21 06:57:06 crc kubenswrapper[4893]: I0121 06:57:06.278151 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q6lc9" event={"ID":"2d466c6f-7f88-4a34-8e57-73b83db3e871","Type":"ContainerStarted","Data":"f3953c2d4536eaddedae11cdb004ab542c22c4c81e0c5fb273166887c65e7dc0"}
Jan 21 06:57:06 crc kubenswrapper[4893]: I0121 06:57:06.292153 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-d7dhc" event={"ID":"6b9f4d87-4d3b-47aa-a5cc-64167a6a0f30","Type":"ContainerStarted","Data":"9558ed25d6b6aedb5638c54ba7de3f82b2e04e972957abf6e9d08bb6d86643f9"}
Jan 21 06:57:06 crc kubenswrapper[4893]: I0121 06:57:06.293115 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-8v69v" event={"ID":"08150f3e-0cfa-4c7d-b9af-0e2d288a7737","Type":"ContainerStarted","Data":"42e46b6940a23597bd5472a026f93ac15492f40e16618b48ba53c9412d674a07"}
Jan 21 06:57:06 crc kubenswrapper[4893]: I0121 06:57:06.293955 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-cl27x" event={"ID":"54db3d0a-b7a6-43db-a4a1-a9f363d0de87","Type":"ContainerStarted","Data":"6851a4c386fa8c5ff9da7acca3bb6898e9999fb758e95f3949eff594964226c2"}
Jan 21 06:57:06 crc kubenswrapper[4893]: I0121 06:57:06.294753 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rhprd" event={"ID":"d3add700-459a-4629-a5b1-efe434327719","Type":"ContainerStarted","Data":"b7b0ce21bfcf87d82d14281b21349300d830d1c232859cbb36288c3dee4c38df"}
Jan 21 06:57:06 crc kubenswrapper[4893]: I0121 06:57:06.295568 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-7kx4q" event={"ID":"d0748253-379d-4453-84cc-e9e8a9298217","Type":"ContainerStarted","Data":"cf1ec6af6c52ec49833efa6ca8aafe05b38cb20fbd3d75cb02569626b8b9fd96"}
Jan 21 06:57:06 crc kubenswrapper[4893]: I0121 06:57:06.296218 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-624tj" event={"ID":"c1c13fcc-9ed6-4129-afc2-4f9d53716929","Type":"ContainerStarted","Data":"a7354b0b43840f5772516c524627b783e597637511e8074240aab1d84f38a3c0"}
Jan 21 06:57:06 crc kubenswrapper[4893]: I0121 06:57:06.297232 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482965-hgsf2" event={"ID":"ff046ea4-caba-480a-8242-eb099a1f136e","Type":"ContainerStarted","Data":"80ee5b060c65bd0ed034f8bd385b55c48a441360bde3e6494a12853c1a275ff2"}
Jan 21 06:57:06 crc kubenswrapper[4893]: I0121 06:57:06.299272 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-p9bnb" event={"ID":"118d8602-b5ce-4a7c-bf0c-17d74ce7ebda","Type":"ContainerStarted","Data":"0230751d4aaa408e79397d4cbee8c335657165bf3acfb8fd959b9c3bc58281c0"}
Jan 21 06:57:06 crc kubenswrapper[4893]: I0121 06:57:06.300639 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcjxb" event={"ID":"13709215-5a7f-4c5d-aa52-749e06e40842","Type":"ContainerStarted","Data":"8eba2a657eaa9e0015ccec02ff967ce52621430898cc762cf6314ccc7b9b82b9"}
Jan 21 06:57:06 crc kubenswrapper[4893]: I0121 06:57:06.302738 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x4pxc" event={"ID":"04581422-2f1a-4d3c-9e82-8f80435f6ece","Type":"ContainerStarted","Data":"b2aaf608523b51734b010ffa6b2e4aab98ec8fa935369e17c4447a474f48352e"}
Jan 21 06:57:06 crc kubenswrapper[4893]: I0121 06:57:06.349059 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:57:06 crc kubenswrapper[4893]: E0121 06:57:06.350404 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:06.850381556 +0000 UTC m=+168.080727458 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:06 crc kubenswrapper[4893]: I0121 06:57:06.368042 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zgm8x" podStartSLOduration=144.368020025 podStartE2EDuration="2m24.368020025s" podCreationTimestamp="2026-01-21 06:54:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:57:06.263208627 +0000 UTC m=+167.493554529" watchObservedRunningTime="2026-01-21 06:57:06.368020025 +0000 UTC m=+167.598365927"
Jan 21 06:57:06 crc kubenswrapper[4893]: I0121 06:57:06.396012 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29482965-hgsf2" podStartSLOduration=145.395985916 podStartE2EDuration="2m25.395985916s" podCreationTimestamp="2026-01-21 06:54:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:57:06.370139883 +0000 UTC m=+167.600485785" watchObservedRunningTime="2026-01-21 06:57:06.395985916 +0000 UTC m=+167.626331818"
Jan 21 06:57:06 crc kubenswrapper[4893]: I0121 06:57:06.416790 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x4pxc" podStartSLOduration=145.416762095 podStartE2EDuration="2m25.416762095s" podCreationTimestamp="2026-01-21 06:54:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:57:06.397171444 +0000 UTC m=+167.627517346" watchObservedRunningTime="2026-01-21 06:57:06.416762095 +0000 UTC m=+167.647107997"
Jan 21 06:57:06 crc kubenswrapper[4893]: I0121 06:57:06.420494 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-p9bnb" podStartSLOduration=144.420473505 podStartE2EDuration="2m24.420473505s" podCreationTimestamp="2026-01-21 06:54:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:57:06.415782854 +0000 UTC m=+167.646128756" watchObservedRunningTime="2026-01-21 06:57:06.420473505 +0000 UTC m=+167.650819417"
Jan 21 06:57:06 crc kubenswrapper[4893]: I0121 06:57:06.454391 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:57:06 crc kubenswrapper[4893]: E0121 06:57:06.455066 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:06.955047189 +0000 UTC m=+168.185393091 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:06 crc kubenswrapper[4893]: I0121 06:57:06.460720 4893 patch_prober.go:28] interesting pod/router-default-5444994796-fhrs8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 06:57:06 crc kubenswrapper[4893]: [-]has-synced failed: reason withheld
Jan 21 06:57:06 crc kubenswrapper[4893]: [+]process-running ok
Jan 21 06:57:06 crc kubenswrapper[4893]: healthz check failed
Jan 21 06:57:06 crc kubenswrapper[4893]: I0121 06:57:06.460769 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fhrs8" podUID="96180be5-06e7-4b23-80ab-1cbf4e162e67" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 06:57:06 crc kubenswrapper[4893]: I0121 06:57:06.461127 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcjxb" podStartSLOduration=145.461116284 podStartE2EDuration="2m25.461116284s" podCreationTimestamp="2026-01-21 06:54:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:57:06.460115032 +0000 UTC m=+167.690460934" watchObservedRunningTime="2026-01-21 06:57:06.461116284 +0000 UTC m=+167.691462186"
Jan 21 06:57:06 crc kubenswrapper[4893]: I0121 06:57:06.566623 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:57:06 crc kubenswrapper[4893]: E0121 06:57:06.567438 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:07.067252344 +0000 UTC m=+168.297598246 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:06 crc kubenswrapper[4893]: I0121 06:57:06.603365 4893 patch_prober.go:28] interesting pod/downloads-7954f5f757-rvfqv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body=
Jan 21 06:57:06 crc kubenswrapper[4893]: I0121 06:57:06.603426 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-rvfqv" podUID="5c435717-9f91-427d-ae9c-60db11c38d34" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused"
Jan 21 06:57:06 crc kubenswrapper[4893]: I0121 06:57:06.603985 4893 patch_prober.go:28] interesting pod/downloads-7954f5f757-rvfqv container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body=
Jan 21 06:57:06 crc kubenswrapper[4893]: I0121 06:57:06.604012 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-rvfqv" podUID="5c435717-9f91-427d-ae9c-60db11c38d34" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused"
Jan 21 06:57:06 crc kubenswrapper[4893]: I0121 06:57:06.706961 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:57:06 crc kubenswrapper[4893]: E0121 06:57:06.707240 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:07.207217574 +0000 UTC m=+168.437563476 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:06 crc kubenswrapper[4893]: I0121 06:57:06.808559 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:57:06 crc kubenswrapper[4893]: E0121 06:57:06.809404 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:07.309377766 +0000 UTC m=+168.539723668 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:06 crc kubenswrapper[4893]: I0121 06:57:06.962235 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-b57tt"
Jan 21 06:57:06 crc kubenswrapper[4893]: I0121 06:57:06.963015 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:57:06 crc kubenswrapper[4893]: E0121 06:57:06.963290 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:07.463272384 +0000 UTC m=+168.693618286 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:06 crc kubenswrapper[4893]: I0121 06:57:06.963540 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:57:06 crc kubenswrapper[4893]: E0121 06:57:06.964165 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:07.464149153 +0000 UTC m=+168.694495065 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:07 crc kubenswrapper[4893]: I0121 06:57:07.126481 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:57:07 crc kubenswrapper[4893]: E0121 06:57:07.126875 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:07.626853925 +0000 UTC m=+168.857199827 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:07 crc kubenswrapper[4893]: I0121 06:57:07.289523 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:57:07 crc kubenswrapper[4893]: E0121 06:57:07.299783 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:07.790398455 +0000 UTC m=+169.020744357 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:07 crc kubenswrapper[4893]: I0121 06:57:07.405957 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:57:07 crc kubenswrapper[4893]: E0121 06:57:07.406571 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:07.906525317 +0000 UTC m=+169.136871219 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:07 crc kubenswrapper[4893]: I0121 06:57:07.407287 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:57:07 crc kubenswrapper[4893]: E0121 06:57:07.407732 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:07.907719195 +0000 UTC m=+169.138065097 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:07 crc kubenswrapper[4893]: I0121 06:57:07.456747 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-fhrs8"
Jan 21 06:57:07 crc kubenswrapper[4893]: I0121 06:57:07.498181 4893 patch_prober.go:28] interesting pod/router-default-5444994796-fhrs8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 06:57:07 crc kubenswrapper[4893]: [-]has-synced failed: reason withheld
Jan 21 06:57:07 crc kubenswrapper[4893]: [+]process-running ok
Jan 21 06:57:07 crc kubenswrapper[4893]: healthz check failed
Jan 21 06:57:07 crc kubenswrapper[4893]: I0121 06:57:07.498259 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fhrs8" podUID="96180be5-06e7-4b23-80ab-1cbf4e162e67" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 06:57:07 crc kubenswrapper[4893]: I0121 06:57:07.551073 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:57:07 crc kubenswrapper[4893]: E0121 06:57:07.608581 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:08.108526146 +0000 UTC m=+169.338872058 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:07 crc kubenswrapper[4893]: I0121 06:57:07.655449 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:57:07 crc kubenswrapper[4893]: E0121 06:57:07.656226 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:08.156209492 +0000 UTC m=+169.386555394 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:07 crc kubenswrapper[4893]: I0121 06:57:07.770870 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:57:07 crc kubenswrapper[4893]: E0121 06:57:07.771526 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:08.271503647 +0000 UTC m=+169.501849549 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:07 crc kubenswrapper[4893]: I0121 06:57:07.795783 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-5mpmr" event={"ID":"c58a6a64-ed06-4f09-b2a6-a70569a308d7","Type":"ContainerStarted","Data":"e4f2c60cc71abb22ff25c15e94426479b01c88e445b601d422f62657c9030210"}
Jan 21 06:57:07 crc kubenswrapper[4893]: I0121 06:57:07.817239 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-624tj" event={"ID":"c1c13fcc-9ed6-4129-afc2-4f9d53716929","Type":"ContainerStarted","Data":"3c246c75d86ef9a2ede23d0e8b3ae985412890b8c39415b0aed9ee467f38f526"}
Jan 21 06:57:07 crc kubenswrapper[4893]: I0121 06:57:07.824366 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-225db" event={"ID":"3ab99a27-e16e-4f7b-a745-f478dd109a5c","Type":"ContainerStarted","Data":"af530582999e71efd0dbe80766b8e11a81f72746ff81a42014ecb3d5537b98ab"}
Jan 21 06:57:07 crc kubenswrapper[4893]: I0121 06:57:07.830496 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-8v69v" event={"ID":"08150f3e-0cfa-4c7d-b9af-0e2d288a7737","Type":"ContainerStarted","Data":"f919e94beee280e21db77cee53d47dd4e9aa9fbadbfb73716772152c1617f50d"}
Jan 21 06:57:07 crc kubenswrapper[4893]: I0121 06:57:07.840149 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x4pxc" event={"ID":"04581422-2f1a-4d3c-9e82-8f80435f6ece","Type":"ContainerStarted","Data":"7f9aae0564e49b611a363e4e84b8be4bb962a0ded5ec91c9955defe1df9464e1"}
Jan 21 06:57:07 crc kubenswrapper[4893]: I0121 06:57:07.858383 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6zl8m" event={"ID":"79950cd5-1fde-4c05-8a15-8a1b2b745e28","Type":"ContainerStarted","Data":"1b9733fadb69525ef465edf5e19becebf5483698fa0ee1067afe62fbdb835f91"}
Jan 21 06:57:07 crc kubenswrapper[4893]: I0121 06:57:07.858783 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6zl8m"
Jan 21 06:57:07 crc kubenswrapper[4893]: I0121 06:57:07.861132 4893 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-6zl8m container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body=
Jan 21 06:57:07 crc kubenswrapper[4893]: I0121 06:57:07.861257 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6zl8m" podUID="79950cd5-1fde-4c05-8a15-8a1b2b745e28" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused"
Jan 21 06:57:07 crc kubenswrapper[4893]: I0121 06:57:07.873965 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:57:07 crc kubenswrapper[4893]: E0121 06:57:07.875772 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:08.375754916 +0000 UTC m=+169.606100818 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:07 crc kubenswrapper[4893]: I0121 06:57:07.882931 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rhprd" event={"ID":"d3add700-459a-4629-a5b1-efe434327719","Type":"ContainerStarted","Data":"2a0a325fe5951449524e4b7ef6a10904ed2a6fe4fbdb87a991a7f76f4d511697"}
Jan 21 06:57:07 crc kubenswrapper[4893]: I0121 06:57:07.883926 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rhprd"
Jan 21 06:57:07 crc kubenswrapper[4893]: I0121 06:57:07.886402 4893 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-rhprd container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body=
Jan 21 06:57:07 crc kubenswrapper[4893]: I0121 06:57:07.886475 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rhprd" podUID="d3add700-459a-4629-a5b1-efe434327719" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused"
Jan 21 06:57:07 crc kubenswrapper[4893]: I0121 06:57:07.908010 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-225db" podStartSLOduration=146.907972104 podStartE2EDuration="2m26.907972104s" podCreationTimestamp="2026-01-21 06:54:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:57:07.899598344 +0000 UTC m=+169.129944246" watchObservedRunningTime="2026-01-21 06:57:07.907972104 +0000 UTC m=+169.138318016"
Jan 21 06:57:07 crc kubenswrapper[4893]: I0121 06:57:07.908959 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-d7dhc" event={"ID":"6b9f4d87-4d3b-47aa-a5cc-64167a6a0f30","Type":"ContainerStarted","Data":"ebbe1fcab6637be862191dec2d3774a7e8e07fe456511deb0502bc7c3d7ceeb7"}
Jan 21 06:57:07 crc kubenswrapper[4893]: I0121 06:57:07.918847 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6" event={"ID":"ebd2435f-03d5-4495-aec1-4118d79aec19","Type":"ContainerStarted","Data":"34cbb8aac74ba397dd10684e693c1248620aa48316b2cbf05203729160336fa0"}
Jan 21 06:57:07 crc kubenswrapper[4893]: I0121 06:57:07.919570 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6"
Jan 21 06:57:07 crc kubenswrapper[4893]: I0121 06:57:07.920692 4893 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-q7qn6 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" start-of-body=
Jan 21 06:57:07 crc kubenswrapper[4893]: I0121 06:57:07.920757 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6" podUID="ebd2435f-03d5-4495-aec1-4118d79aec19" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused"
Jan 21 06:57:07 crc kubenswrapper[4893]: I0121 06:57:07.932846 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-nz9cw" event={"ID":"7b2fc626-d06a-4f0c-ad8c-931c6019a06a","Type":"ContainerStarted","Data":"7f8d046809cd11b8fde27fa81632d720d7af96eb9ddf7f336f71037277e6f5af"}
Jan 21 06:57:07 crc kubenswrapper[4893]: I0121 06:57:07.935512 4893 patch_prober.go:28] interesting pod/downloads-7954f5f757-rvfqv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body=
Jan 21 06:57:07 crc kubenswrapper[4893]: I0121 06:57:07.935562 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-rvfqv" podUID="5c435717-9f91-427d-ae9c-60db11c38d34" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused"
Jan 21 06:57:07 crc kubenswrapper[4893]: I0121 06:57:07.966012 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6zl8m" podStartSLOduration=145.965990594 podStartE2EDuration="2m25.965990594s" podCreationTimestamp="2026-01-21 06:54:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:57:07.953090818 +0000 UTC m=+169.183436720" watchObservedRunningTime="2026-01-21 06:57:07.965990594 +0000 UTC m=+169.196336496"
Jan 21 06:57:07 crc kubenswrapper[4893]: I0121 06:57:07.967760 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-rc5gb"]
Jan 21 06:57:07 crc kubenswrapper[4893]: I0121 06:57:07.977827 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:57:07 crc kubenswrapper[4893]: E0121 06:57:07.979688 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:08.479638374 +0000 UTC m=+169.709984286 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:07 crc kubenswrapper[4893]: I0121 06:57:07.986059 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rhprd" podStartSLOduration=145.98603978 podStartE2EDuration="2m25.98603978s" podCreationTimestamp="2026-01-21 06:54:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:57:07.984556922 +0000 UTC m=+169.214902824" watchObservedRunningTime="2026-01-21 06:57:07.98603978 +0000 UTC m=+169.216385682"
Jan 21 06:57:08 crc kubenswrapper[4893]: I0121 06:57:08.126252 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:57:08 crc kubenswrapper[4893]: E0121 06:57:08.126932 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:08.626914899 +0000 UTC m=+169.857260801 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:08 crc kubenswrapper[4893]: I0121 06:57:08.302992 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:57:08 crc kubenswrapper[4893]: E0121 06:57:08.304051 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:08.804030426 +0000 UTC m=+170.034376328 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:08 crc kubenswrapper[4893]: I0121 06:57:08.415374 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:57:08 crc kubenswrapper[4893]: E0121 06:57:08.415962 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:08.915944622 +0000 UTC m=+170.146290524 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:08 crc kubenswrapper[4893]: I0121 06:57:08.522767 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:57:08 crc kubenswrapper[4893]: E0121 06:57:08.523168 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:09.023148206 +0000 UTC m=+170.253494108 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:08 crc kubenswrapper[4893]: I0121 06:57:08.554016 4893 patch_prober.go:28] interesting pod/router-default-5444994796-fhrs8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 06:57:08 crc kubenswrapper[4893]: [-]has-synced failed: reason withheld
Jan 21 06:57:08 crc kubenswrapper[4893]: [+]process-running ok
Jan 21 06:57:08 crc kubenswrapper[4893]: healthz check failed
Jan 21 06:57:08 crc kubenswrapper[4893]: I0121 06:57:08.554065 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fhrs8" podUID="96180be5-06e7-4b23-80ab-1cbf4e162e67" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 06:57:08 crc kubenswrapper[4893]: I0121 06:57:08.623906 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:57:08 crc kubenswrapper[4893]: E0121 06:57:08.624290 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:09.124275854 +0000 UTC m=+170.354621756 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:08 crc kubenswrapper[4893]: I0121 06:57:08.736323 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:57:08 crc kubenswrapper[4893]: E0121 06:57:08.736508 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:09.23648103 +0000 UTC m=+170.466826932 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:08 crc kubenswrapper[4893]: I0121 06:57:08.736844 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:57:08 crc kubenswrapper[4893]: E0121 06:57:08.737385 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:09.237360068 +0000 UTC m=+170.467705970 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:08 crc kubenswrapper[4893]: I0121 06:57:08.892826 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:57:08 crc kubenswrapper[4893]: E0121 06:57:08.893054 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:09.393011694 +0000 UTC m=+170.623357616 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:08 crc kubenswrapper[4893]: I0121 06:57:08.893408 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:57:08 crc kubenswrapper[4893]: E0121 06:57:08.893856 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:09.39384433 +0000 UTC m=+170.624190222 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:08 crc kubenswrapper[4893]: I0121 06:57:08.939205 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-976q5" event={"ID":"33025f21-a66c-4bb2-809e-8de12fe71694","Type":"ContainerStarted","Data":"47541ba27e19cd15df844abdc6a3cb51d57fe14c88136f9b8c234e3b13146714"}
Jan 21 06:57:08 crc kubenswrapper[4893]: I0121 06:57:08.941610 4893 generic.go:334] "Generic (PLEG): container finished" podID="7b2fc626-d06a-4f0c-ad8c-931c6019a06a" containerID="7f8d046809cd11b8fde27fa81632d720d7af96eb9ddf7f336f71037277e6f5af" exitCode=0
Jan 21 06:57:08 crc kubenswrapper[4893]: I0121 06:57:08.941664 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-nz9cw" event={"ID":"7b2fc626-d06a-4f0c-ad8c-931c6019a06a","Type":"ContainerDied","Data":"7f8d046809cd11b8fde27fa81632d720d7af96eb9ddf7f336f71037277e6f5af"}
Jan 21 06:57:08 crc kubenswrapper[4893]: I0121 06:57:08.943933 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-hmqwx" event={"ID":"b10f09b0-7978-4ebf-a5f6-c99737710b3f","Type":"ContainerStarted","Data":"d710c2f87a8a5134f2ca9674e7a256f34e9d7ab994fa49388438cc7ba2c00f0e"}
Jan 21 06:57:08 crc kubenswrapper[4893]: I0121 06:57:08.945131 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xrz86" event={"ID":"793617a3-fd23-40b4-95f7-68e828f76816","Type":"ContainerStarted","Data":"845b80c55104747001560803a0e1ed000e63f0f16f001fa6fd646755307b80e8"}
Jan 21 06:57:08 crc kubenswrapper[4893]: I0121 06:57:08.946299 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-46khb"
event={"ID":"37347f00-99de-4215-9d76-b5d4996b5cd4","Type":"ContainerStarted","Data":"b928c12648d67cd8c0fee7c23a2dbbdb7c22bd88c4c00f6428f556b8aa49c0c6"} Jan 21 06:57:08 crc kubenswrapper[4893]: I0121 06:57:08.948572 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-d7dhc" event={"ID":"6b9f4d87-4d3b-47aa-a5cc-64167a6a0f30","Type":"ContainerStarted","Data":"c1a1a35b4cdc6085b6529f60abd269dd9d97dde51bf770c065e84f0f8d6a255c"} Jan 21 06:57:08 crc kubenswrapper[4893]: I0121 06:57:08.950318 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-rc5gb" event={"ID":"e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8","Type":"ContainerStarted","Data":"0ca6e7847d1f17ce92b820ef8be1d1abf9d878ae5a97b919f6b87733e4599b14"} Jan 21 06:57:08 crc kubenswrapper[4893]: I0121 06:57:08.951495 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bcdvv" event={"ID":"49096e31-1633-4328-b7e2-a4e1d4391a5b","Type":"ContainerStarted","Data":"6c7d6ec7df2faebe7373869027adb1d41e69e479dd4ffeb2a5117ca516664937"} Jan 21 06:57:08 crc kubenswrapper[4893]: I0121 06:57:08.953880 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zpm9z" event={"ID":"949c0965-b10c-4608-b2d0-effa8e19dff1","Type":"ContainerStarted","Data":"d74fb0142bc8d85c3b02f0be90e39d72f74253abb9817a92886abb026a719385"} Jan 21 06:57:08 crc kubenswrapper[4893]: I0121 06:57:08.956089 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q6lc9" event={"ID":"2d466c6f-7f88-4a34-8e57-73b83db3e871","Type":"ContainerStarted","Data":"c461d015c3c98babba4352e9a814015a96552ea6bae253475faa726a8c7ed05a"} Jan 21 06:57:08 crc kubenswrapper[4893]: I0121 06:57:08.958620 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-cl27x" event={"ID":"54db3d0a-b7a6-43db-a4a1-a9f363d0de87","Type":"ContainerStarted","Data":"797be1e7d434907bdf5b0face87887daff41dce6295f1af6aef28bfc968b3622"} Jan 21 06:57:08 crc kubenswrapper[4893]: I0121 06:57:08.958753 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-cl27x" Jan 21 06:57:08 crc kubenswrapper[4893]: I0121 06:57:08.959954 4893 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-cl27x container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Jan 21 06:57:08 crc kubenswrapper[4893]: I0121 06:57:08.960007 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-cl27x" podUID="54db3d0a-b7a6-43db-a4a1-a9f363d0de87" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" Jan 21 06:57:08 crc kubenswrapper[4893]: I0121 06:57:08.968722 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-7kx4q" event={"ID":"d0748253-379d-4453-84cc-e9e8a9298217","Type":"ContainerStarted","Data":"5ec2113dd164b97412112308315d6bdd9603275b6c7794ac73772dfd8c7a6b86"} Jan 21 06:57:08 crc kubenswrapper[4893]: I0121 06:57:08.972389 4893 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-t7bmk" event={"ID":"babcfbd6-7579-4d7a-9bbb-38b759d8b273","Type":"ContainerStarted","Data":"85343f83d1c9ab5180c8e9ce37028430475491e369c60848f333d507d182002a"} Jan 21 06:57:08 crc kubenswrapper[4893]: I0121 06:57:08.979848 4893 patch_prober.go:28] interesting pod/downloads-7954f5f757-rvfqv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Jan 21 06:57:08 crc kubenswrapper[4893]: I0121 06:57:08.979903 4893 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-rhprd container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Jan 21 06:57:08 crc kubenswrapper[4893]: I0121 06:57:08.979852 4893 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-q7qn6 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" start-of-body= Jan 21 06:57:08 crc kubenswrapper[4893]: I0121 06:57:08.979947 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6" podUID="ebd2435f-03d5-4495-aec1-4118d79aec19" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" Jan 21 06:57:08 crc kubenswrapper[4893]: I0121 06:57:08.979859 4893 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-6zl8m container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Jan 21 06:57:08 crc kubenswrapper[4893]: I0121 06:57:08.979962 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rhprd" podUID="d3add700-459a-4629-a5b1-efe434327719" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Jan 21 06:57:08 crc kubenswrapper[4893]: I0121 06:57:08.979907 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-rvfqv" podUID="5c435717-9f91-427d-ae9c-60db11c38d34" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Jan 21 06:57:08 crc kubenswrapper[4893]: I0121 06:57:08.979981 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6zl8m" podUID="79950cd5-1fde-4c05-8a15-8a1b2b745e28" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Jan 21 06:57:08 crc kubenswrapper[4893]: I0121 06:57:08.994098 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:57:08 crc kubenswrapper[4893]: E0121 06:57:08.994319 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:09.494289557 +0000 UTC m=+170.724635459 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:08 crc kubenswrapper[4893]: I0121 06:57:08.994382 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:57:08 crc kubenswrapper[4893]: E0121 06:57:08.994694 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:09.49468264 +0000 UTC m=+170.725028542 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:09 crc kubenswrapper[4893]: I0121 06:57:09.046403 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6" podStartSLOduration=148.046381205 podStartE2EDuration="2m28.046381205s" podCreationTimestamp="2026-01-21 06:54:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:57:08.511966976 +0000 UTC m=+169.742312888" watchObservedRunningTime="2026-01-21 06:57:09.046381205 +0000 UTC m=+170.276727107" Jan 21 06:57:09 crc kubenswrapper[4893]: I0121 06:57:09.169852 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:57:09 crc kubenswrapper[4893]: E0121 06:57:09.170057 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-21 06:57:09.67002649 +0000 UTC m=+170.900372392 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:09 crc kubenswrapper[4893]: I0121 06:57:09.170386 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:57:09 crc kubenswrapper[4893]: E0121 06:57:09.171534 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:09.671523288 +0000 UTC m=+170.901869250 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:09 crc kubenswrapper[4893]: I0121 06:57:09.175192 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xrz86" podStartSLOduration=147.175173375 podStartE2EDuration="2m27.175173375s" podCreationTimestamp="2026-01-21 06:54:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:57:09.050709115 +0000 UTC m=+170.281055017" watchObservedRunningTime="2026-01-21 06:57:09.175173375 +0000 UTC m=+170.405519277" Jan 21 06:57:09 crc kubenswrapper[4893]: I0121 06:57:09.239439 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-cl27x" podStartSLOduration=148.239410795 podStartE2EDuration="2m28.239410795s" podCreationTimestamp="2026-01-21 06:54:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:57:09.177962295 +0000 UTC m=+170.408308197" watchObservedRunningTime="2026-01-21 06:57:09.239410795 +0000 UTC m=+170.469756717" Jan 21 06:57:09 crc kubenswrapper[4893]: I0121 06:57:09.277666 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:57:09 crc kubenswrapper[4893]: E0121 06:57:09.278100 4893 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:09.778026229 +0000 UTC m=+171.008372141 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:09 crc kubenswrapper[4893]: I0121 06:57:09.291611 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:57:09 crc kubenswrapper[4893]: E0121 06:57:09.292733 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:09.792713143 +0000 UTC m=+171.023059035 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:09 crc kubenswrapper[4893]: I0121 06:57:09.302194 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-t7bmk" podStartSLOduration=147.302167077 podStartE2EDuration="2m27.302167077s" podCreationTimestamp="2026-01-21 06:54:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:57:09.242592248 +0000 UTC m=+170.472938140" watchObservedRunningTime="2026-01-21 06:57:09.302167077 +0000 UTC m=+170.532512979" Jan 21 06:57:09 crc kubenswrapper[4893]: I0121 06:57:09.475845 4893 patch_prober.go:28] interesting pod/router-default-5444994796-fhrs8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 06:57:09 crc kubenswrapper[4893]: [-]has-synced failed: reason withheld Jan 21 06:57:09 crc kubenswrapper[4893]: [+]process-running ok Jan 21 06:57:09 crc kubenswrapper[4893]: healthz check failed Jan 21 06:57:09 crc kubenswrapper[4893]: I0121 06:57:09.475924 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fhrs8" podUID="96180be5-06e7-4b23-80ab-1cbf4e162e67" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 06:57:09 crc kubenswrapper[4893]: I0121 06:57:09.476501 4893 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:57:09 crc kubenswrapper[4893]: E0121 06:57:09.476969 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:09.976945068 +0000 UTC m=+171.207290990 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:09 crc kubenswrapper[4893]: I0121 06:57:09.624508 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:57:09 crc kubenswrapper[4893]: E0121 06:57:09.625095 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:10.125070891 +0000 UTC m=+171.355416793 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:09 crc kubenswrapper[4893]: I0121 06:57:09.741753 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-7kx4q" podStartSLOduration=15.74173354 podStartE2EDuration="15.74173354s" podCreationTimestamp="2026-01-21 06:56:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:57:09.303607304 +0000 UTC m=+170.533953216" watchObservedRunningTime="2026-01-21 06:57:09.74173354 +0000 UTC m=+170.972079442" Jan 21 06:57:09 crc kubenswrapper[4893]: I0121 06:57:09.745518 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:57:09 crc kubenswrapper[4893]: E0121 06:57:09.745994 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:10.245946695 +0000 UTC m=+171.476292587 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:09 crc kubenswrapper[4893]: I0121 06:57:09.848634 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:57:09 crc kubenswrapper[4893]: E0121 06:57:09.849410 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:10.349393969 +0000 UTC m=+171.579739871 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:09 crc kubenswrapper[4893]: I0121 06:57:09.897487 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-sn2tj" Jan 21 06:57:09 crc kubenswrapper[4893]: I0121 06:57:09.913001 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-sn2tj" Jan 21 06:57:10 crc kubenswrapper[4893]: I0121 06:57:09.996402 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:57:10 crc kubenswrapper[4893]: E0121 06:57:09.996775 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:10.496756747 +0000 UTC m=+171.727102649 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:10 crc kubenswrapper[4893]: I0121 06:57:10.102969 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:57:10 crc kubenswrapper[4893]: E0121 06:57:10.104123 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:10.604111356 +0000 UTC m=+171.834457258 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:10 crc kubenswrapper[4893]: I0121 06:57:10.203842 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-624tj" event={"ID":"c1c13fcc-9ed6-4129-afc2-4f9d53716929","Type":"ContainerStarted","Data":"e79faebf682c6547db2d8fa28962ce449800cbc076fce340788e9a92bb07d43d"} Jan 21 06:57:10 crc kubenswrapper[4893]: I0121 06:57:10.204774 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:57:10 crc kubenswrapper[4893]: E0121 06:57:10.205250 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:10.705233614 +0000 UTC m=+171.935579516 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:10 crc kubenswrapper[4893]: I0121 06:57:10.343566 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:57:10 crc kubenswrapper[4893]: E0121 06:57:10.343935 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:10.843922081 +0000 UTC m=+172.074267983 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:10 crc kubenswrapper[4893]: I0121 06:57:10.351084 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bcdvv" event={"ID":"49096e31-1633-4328-b7e2-a4e1d4391a5b","Type":"ContainerStarted","Data":"fffe298f43e1f43a99a8edb89c25eb0445a050e9da4dd32c5a3dffd0cae06f62"} Jan 21 06:57:10 crc kubenswrapper[4893]: I0121 06:57:10.363013 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-nz9cw" event={"ID":"7b2fc626-d06a-4f0c-ad8c-931c6019a06a","Type":"ContainerStarted","Data":"647fa918d513e315194013baf4273a5ad47d4ad89c4d46cb58b879fcf25456db"} Jan 21 06:57:10 crc kubenswrapper[4893]: I0121 06:57:10.364892 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-nz9cw" Jan 21 06:57:10 crc kubenswrapper[4893]: I0121 06:57:10.379130 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-rc5gb" event={"ID":"e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8","Type":"ContainerStarted","Data":"8bed82642eb24a10b47cbf77336f925fdb904f3f4c2459ee669afa6cd54bdb74"} Jan 21 06:57:10 crc kubenswrapper[4893]: I0121 06:57:10.384845 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-8v69v" event={"ID":"08150f3e-0cfa-4c7d-b9af-0e2d288a7737","Type":"ContainerStarted","Data":"fb00bf6fe560cb14ca6a1881985bd136d5e12e19ce0c5070561c75298babcb89"} Jan 21 06:57:10 crc kubenswrapper[4893]: I0121 06:57:10.387077 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-zpm9z" Jan 21 06:57:10 crc kubenswrapper[4893]: I0121 06:57:10.387104 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q6lc9" Jan 21 06:57:10 crc kubenswrapper[4893]: I0121 06:57:10.387155 4893 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-cl27x container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Jan 21 06:57:10 crc kubenswrapper[4893]: I0121 06:57:10.387180 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-t7bmk" Jan 21 06:57:10 crc kubenswrapper[4893]: I0121 06:57:10.387183 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-cl27x" podUID="54db3d0a-b7a6-43db-a4a1-a9f363d0de87" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" Jan 21 06:57:10 crc kubenswrapper[4893]: I0121 06:57:10.388151 4893 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-rhprd container/olm-operator 
namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Jan 21 06:57:10 crc kubenswrapper[4893]: I0121 06:57:10.388182 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rhprd" podUID="d3add700-459a-4629-a5b1-efe434327719" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Jan 21 06:57:10 crc kubenswrapper[4893]: I0121 06:57:10.398390 4893 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-q6lc9 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:5443/healthz\": dial tcp 10.217.0.30:5443: connect: connection refused" start-of-body= Jan 21 06:57:10 crc kubenswrapper[4893]: I0121 06:57:10.398462 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q6lc9" podUID="2d466c6f-7f88-4a34-8e57-73b83db3e871" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.30:5443/healthz\": dial tcp 10.217.0.30:5443: connect: connection refused" Jan 21 06:57:10 crc kubenswrapper[4893]: I0121 06:57:10.398568 4893 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zpm9z container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 21 06:57:10 crc kubenswrapper[4893]: I0121 06:57:10.398585 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zpm9z" podUID="949c0965-b10c-4608-b2d0-effa8e19dff1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 21 06:57:10 crc kubenswrapper[4893]: I0121 06:57:10.446220 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:57:10 crc kubenswrapper[4893]: E0121 06:57:10.448200 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:10.948179133 +0000 UTC m=+172.178525035 (durationBeforeRetry 500ms). 
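The burst of readiness failures in the records above is the kubelet prober getting "connection refused" from containers that have started but are not listening yet. To spot-check one of these endpoints by hand, the sketch below approximates an HTTP probe: a 1s timeout (the kubelet default) and success on any 2xx/3xx status. The skip-verify transport stands in for how an https probe is made without validating the serving certificate, and both URLs are copied from the probe records above:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probe performs a single GET roughly the way a kubelet HTTP probe does:
// any 2xx/3xx status counts as success, anything else (or a transport
// error) counts as failure.
func probe(url string) error {
	client := &http.Client{
		Timeout: 1 * time.Second, // kubelet's default probe timeout
		Transport: &http.Transport{
			// Assumption for this sketch: skip serving-cert validation,
			// as an https probe effectively does.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "connect: connection refused", as logged above
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		return nil
	}
	return fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
}

func main() {
	// Endpoints taken from the probe-failure records above.
	for _, u := range []string{
		"https://10.217.0.40:8443/healthz", // olm-operator
		"http://10.217.0.25:8080/healthz",  // marketplace-operator
	} {
		fmt.Println(u, "->", probe(u))
	}
}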
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:10 crc kubenswrapper[4893]: I0121 06:57:10.545017 4893 patch_prober.go:28] interesting pod/router-default-5444994796-fhrs8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 06:57:10 crc kubenswrapper[4893]: [-]has-synced failed: reason withheld Jan 21 06:57:10 crc kubenswrapper[4893]: [+]process-running ok Jan 21 06:57:10 crc kubenswrapper[4893]: healthz check failed Jan 21 06:57:10 crc kubenswrapper[4893]: I0121 06:57:10.545140 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fhrs8" podUID="96180be5-06e7-4b23-80ab-1cbf4e162e67" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 06:57:10 crc kubenswrapper[4893]: I0121 06:57:10.672350 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:57:10 crc kubenswrapper[4893]: E0121 06:57:10.679504 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:11.179489556 +0000 UTC m=+172.409835458 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:10 crc kubenswrapper[4893]: I0121 06:57:10.719146 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-nz9cw" podStartSLOduration=149.719129093 podStartE2EDuration="2m29.719129093s" podCreationTimestamp="2026-01-21 06:54:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:57:10.718615176 +0000 UTC m=+171.948961078" watchObservedRunningTime="2026-01-21 06:57:10.719129093 +0000 UTC m=+171.949474995" Jan 21 06:57:10 crc kubenswrapper[4893]: I0121 06:57:10.806803 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:57:10 crc kubenswrapper[4893]: E0121 06:57:10.807095 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:11.307079487 +0000 UTC m=+172.537425389 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:11 crc kubenswrapper[4893]: I0121 06:57:11.014726 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:57:11 crc kubenswrapper[4893]: E0121 06:57:11.015479 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:11.515466588 +0000 UTC m=+172.745812490 (durationBeforeRetry 500ms). 
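The podStartSLOduration figures logged by pod_startup_latency_tracker are, when the pull timestamps are the zero time as here (no image pull counted), simply the end-to-end time from podCreationTimestamp to the pod being observed running, which is why podStartSLOduration and podStartE2EDuration match in each record. A quick sanity check of the openshift-config-operator number above, not the tracker's exact code path:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the pod_startup_latency_tracker record above.
	created, _ := time.Parse(time.RFC3339, "2026-01-21T06:54:41Z")
	running, _ := time.Parse(time.RFC3339Nano, "2026-01-21T06:57:10.718615176Z")

	// Prints ~149.719, agreeing with the logged 149.719129093s to within a
	// millisecond (the tracker samples its own clock when it logs).
	fmt.Println(running.Sub(created).Seconds())
}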
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:11 crc kubenswrapper[4893]: I0121 06:57:11.116648 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:57:11 crc kubenswrapper[4893]: E0121 06:57:11.117193 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:11.617167199 +0000 UTC m=+172.847513101 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:11 crc kubenswrapper[4893]: I0121 06:57:11.223249 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:57:11 crc kubenswrapper[4893]: E0121 06:57:11.223591 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:11.723579117 +0000 UTC m=+172.953925019 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:11 crc kubenswrapper[4893]: I0121 06:57:11.324117 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:57:11 crc kubenswrapper[4893]: E0121 06:57:11.324518 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:11.824499019 +0000 UTC m=+173.054844921 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:11 crc kubenswrapper[4893]: I0121 06:57:11.385146 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-624tj" podStartSLOduration=150.385119032 podStartE2EDuration="2m30.385119032s" podCreationTimestamp="2026-01-21 06:54:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:57:11.127648487 +0000 UTC m=+172.357994389" watchObservedRunningTime="2026-01-21 06:57:11.385119032 +0000 UTC m=+172.615464934" Jan 21 06:57:11 crc kubenswrapper[4893]: I0121 06:57:11.387149 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 21 06:57:11 crc kubenswrapper[4893]: I0121 06:57:11.387991 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 06:57:11 crc kubenswrapper[4893]: I0121 06:57:11.404930 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 21 06:57:11 crc kubenswrapper[4893]: I0121 06:57:11.405243 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 21 06:57:11 crc kubenswrapper[4893]: I0121 06:57:11.407229 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-976q5" event={"ID":"33025f21-a66c-4bb2-809e-8de12fe71694","Type":"ContainerStarted","Data":"7fcf39d9e78e6916e5b20b20b90b9f081999c61aa52c8b030d5f33788d1ef4eb"} Jan 21 06:57:11 crc kubenswrapper[4893]: I0121 06:57:11.407450 4893 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zpm9z container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 21 06:57:11 crc kubenswrapper[4893]: I0121 06:57:11.407514 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zpm9z" podUID="949c0965-b10c-4608-b2d0-effa8e19dff1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 21 06:57:11 crc kubenswrapper[4893]: I0121 06:57:11.407963 4893 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-q6lc9 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:5443/healthz\": dial tcp 10.217.0.30:5443: connect: connection refused" start-of-body= Jan 21 06:57:11 crc kubenswrapper[4893]: I0121 06:57:11.408006 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q6lc9" podUID="2d466c6f-7f88-4a34-8e57-73b83db3e871" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.30:5443/healthz\": dial tcp 10.217.0.30:5443: connect: connection refused" Jan 21 06:57:11 crc kubenswrapper[4893]: I0121 06:57:11.416155 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-zpm9z" podStartSLOduration=149.416132192 podStartE2EDuration="2m29.416132192s" podCreationTimestamp="2026-01-21 06:54:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:57:11.413498797 +0000 UTC m=+172.643844719" watchObservedRunningTime="2026-01-21 06:57:11.416132192 +0000 UTC m=+172.646478094" Jan 21 06:57:11 crc kubenswrapper[4893]: I0121 06:57:11.426243 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:57:11 crc kubenswrapper[4893]: I0121 06:57:11.426285 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 21 
Jan 21 06:57:11 crc kubenswrapper[4893]: E0121 06:57:11.426740 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:11.926722173 +0000 UTC m=+173.157068075 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:11 crc kubenswrapper[4893]: I0121 06:57:11.436415 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-d7dhc" podStartSLOduration=149.436396995 podStartE2EDuration="2m29.436396995s" podCreationTimestamp="2026-01-21 06:54:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:57:11.432042244 +0000 UTC m=+172.662388146" watchObservedRunningTime="2026-01-21 06:57:11.436396995 +0000 UTC m=+172.666742897"
Jan 21 06:57:11 crc kubenswrapper[4893]: I0121 06:57:11.459221 4893 patch_prober.go:28] interesting pod/router-default-5444994796-fhrs8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 06:57:11 crc kubenswrapper[4893]: [-]has-synced failed: reason withheld
Jan 21 06:57:11 crc kubenswrapper[4893]: [+]process-running ok
Jan 21 06:57:11 crc kubenswrapper[4893]: healthz check failed
Jan 21 06:57:11 crc kubenswrapper[4893]: I0121 06:57:11.459283 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fhrs8" podUID="96180be5-06e7-4b23-80ab-1cbf4e162e67" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 06:57:11 crc kubenswrapper[4893]: I0121 06:57:11.527191 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:57:11 crc kubenswrapper[4893]: I0121 06:57:11.527398 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b1f6fc9f-df01-4880-865b-38a593baaded-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b1f6fc9f-df01-4880-865b-38a593baaded\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 21 06:57:11 crc kubenswrapper[4893]: I0121 06:57:11.527565 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1f6fc9f-df01-4880-865b-38a593baaded-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b1f6fc9f-df01-4880-865b-38a593baaded\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 21 06:57:11 crc kubenswrapper[4893]: E0121 06:57:11.528605 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:12.028589685 +0000 UTC m=+173.258935587 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:11 crc kubenswrapper[4893]: I0121 06:57:11.645644 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:57:11 crc kubenswrapper[4893]: I0121 06:57:11.645719 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1f6fc9f-df01-4880-865b-38a593baaded-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b1f6fc9f-df01-4880-865b-38a593baaded\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 21 06:57:11 crc kubenswrapper[4893]: I0121 06:57:11.645827 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b1f6fc9f-df01-4880-865b-38a593baaded-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b1f6fc9f-df01-4880-865b-38a593baaded\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 21 06:57:11 crc kubenswrapper[4893]: I0121 06:57:11.645948 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b1f6fc9f-df01-4880-865b-38a593baaded-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b1f6fc9f-df01-4880-865b-38a593baaded\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 21 06:57:11 crc kubenswrapper[4893]: I0121 06:57:11.648210 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-8v69v" podStartSLOduration=150.648190009 podStartE2EDuration="2m30.648190009s" podCreationTimestamp="2026-01-21 06:54:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:57:11.473948775 +0000 UTC m=+172.704294687" watchObservedRunningTime="2026-01-21 06:57:11.648190009 +0000 UTC m=+172.878535931"
Jan 21 06:57:11 crc kubenswrapper[4893]: E0121 06:57:11.654222 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:12.154199193 +0000 UTC m=+173.384545095 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
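Every MountVolume.MountDevice and UnmountVolume.TearDown failure in this stretch has the same root cause: the kubelet's in-memory registry of CSI plugins does not yet contain kubevirt.io.hostpath-provisioner, because the hostpath-provisioner plugin pod has not come up and registered itself. Node-level plugins register by placing a registration socket in the kubelet's plugin-registry directory, which the kubelet watches. A small diagnostic sketch (the directory is the standard kubelet layout, assumed for this node; the socket name in the comment is illustrative):

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// The kubelet discovers CSI plugins via registration sockets dropped
	// into this directory. An empty listing here while the plugin pod is
	// still starting matches the "not found in the list of registered CSI
	// drivers" errors in the surrounding log.
	entries, err := os.ReadDir("/var/lib/kubelet/plugins_registry")
	if err != nil {
		panic(err)
	}
	for _, e := range entries {
		fmt.Println(e.Name()) // e.g. kubevirt.io.hostpath-provisioner-reg.sock
	}
}
```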
Jan 21 06:57:11 crc kubenswrapper[4893]: I0121 06:57:11.789460 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:57:11 crc kubenswrapper[4893]: E0121 06:57:11.789773 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:12.289758471 +0000 UTC m=+173.520104373 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:11 crc kubenswrapper[4893]: I0121 06:57:11.832346 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q6lc9" podStartSLOduration=149.832332352 podStartE2EDuration="2m29.832332352s" podCreationTimestamp="2026-01-21 06:54:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:57:11.65350172 +0000 UTC m=+172.883847622" watchObservedRunningTime="2026-01-21 06:57:11.832332352 +0000 UTC m=+173.062678244"
Jan 21 06:57:11 crc kubenswrapper[4893]: I0121 06:57:11.832991 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1f6fc9f-df01-4880-865b-38a593baaded-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b1f6fc9f-df01-4880-865b-38a593baaded\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 21 06:57:11 crc kubenswrapper[4893]: I0121 06:57:11.911874 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:57:11 crc kubenswrapper[4893]: E0121 06:57:11.912232 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:12.412217456 +0000 UTC m=+173.642563358 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:11 crc kubenswrapper[4893]: I0121 06:57:11.927267 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bcdvv" podStartSLOduration=149.92723938 podStartE2EDuration="2m29.92723938s" podCreationTimestamp="2026-01-21 06:54:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:57:11.832193598 +0000 UTC m=+173.062539500" watchObservedRunningTime="2026-01-21 06:57:11.92723938 +0000 UTC m=+173.157585282"
Jan 21 06:57:11 crc kubenswrapper[4893]: I0121 06:57:11.928456 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 21 06:57:11 crc kubenswrapper[4893]: I0121 06:57:11.928688 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 21 06:57:11 crc kubenswrapper[4893]: I0121 06:57:11.929273 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 21 06:57:11 crc kubenswrapper[4893]: I0121 06:57:11.946126 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Jan 21 06:57:12 crc kubenswrapper[4893]: I0121 06:57:12.113386 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:57:12 crc kubenswrapper[4893]: E0121 06:57:12.113828 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:12.613809902 +0000 UTC m=+173.844155804 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:12 crc kubenswrapper[4893]: I0121 06:57:12.218992 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b56a2761-7750-4e82-bc22-0cf39fed894a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"b56a2761-7750-4e82-bc22-0cf39fed894a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 21 06:57:12 crc kubenswrapper[4893]: I0121 06:57:12.219112 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:57:12 crc kubenswrapper[4893]: E0121 06:57:12.220143 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:12.720110707 +0000 UTC m=+173.950456609 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:12 crc kubenswrapper[4893]: I0121 06:57:12.223511 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Jan 21 06:57:12 crc kubenswrapper[4893]: I0121 06:57:12.253892 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-976q5" podStartSLOduration=150.253838534 podStartE2EDuration="2m30.253838534s" podCreationTimestamp="2026-01-21 06:54:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:57:12.251015493 +0000 UTC m=+173.481361395" watchObservedRunningTime="2026-01-21 06:57:12.253838534 +0000 UTC m=+173.484184466"
Jan 21 06:57:12 crc kubenswrapper[4893]: I0121 06:57:12.283109 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b56a2761-7750-4e82-bc22-0cf39fed894a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"b56a2761-7750-4e82-bc22-0cf39fed894a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
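Each failed volume operation arms a retry timer, logged as "No retries permitted until <now + durationBeforeRetry>". Most entries here sit at the 500ms base delay, and the entry at 06:57:12.779200 below shows it doubled to 1s, consistent with per-operation exponential backoff. A sketch of that general scheme follows; the 2x factor matches the 500ms to 1s step visible in the log, while the cap and the reset behavior are assumptions, not values read from this kubelet:

```go
package main

import (
	"fmt"
	"time"
)

// durationBeforeRetry reconstructs the delays printed by
// nestedpendingoperations.go: 500ms after the first failure, doubling on
// each subsequent failure of the same operation, up to an assumed cap.
func durationBeforeRetry(failures int) time.Duration {
	const maxDelay = 2*time.Minute + 2*time.Second // assumption
	d := 500 * time.Millisecond
	for i := 1; i < failures; i++ {
		d *= 2
		if d >= maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	for n := 1; n <= 4; n++ {
		fmt.Printf("failure %d -> durationBeforeRetry %v\n", n, durationBeforeRetry(n))
	}
}
```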
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:57:12 crc kubenswrapper[4893]: I0121 06:57:12.778577 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b56a2761-7750-4e82-bc22-0cf39fed894a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"b56a2761-7750-4e82-bc22-0cf39fed894a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 06:57:12 crc kubenswrapper[4893]: I0121 06:57:12.778632 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b56a2761-7750-4e82-bc22-0cf39fed894a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"b56a2761-7750-4e82-bc22-0cf39fed894a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 06:57:12 crc kubenswrapper[4893]: I0121 06:57:12.778971 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 21 06:57:12 crc kubenswrapper[4893]: E0121 06:57:12.779200 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:13.779184462 +0000 UTC m=+175.009530364 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:12 crc kubenswrapper[4893]: I0121 06:57:12.779242 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b56a2761-7750-4e82-bc22-0cf39fed894a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"b56a2761-7750-4e82-bc22-0cf39fed894a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 06:57:12 crc kubenswrapper[4893]: I0121 06:57:12.779448 4893 patch_prober.go:28] interesting pod/router-default-5444994796-fhrs8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 06:57:12 crc kubenswrapper[4893]: [-]has-synced failed: reason withheld Jan 21 06:57:12 crc kubenswrapper[4893]: [+]process-running ok Jan 21 06:57:12 crc kubenswrapper[4893]: healthz check failed Jan 21 06:57:12 crc kubenswrapper[4893]: I0121 06:57:12.779548 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fhrs8" podUID="96180be5-06e7-4b23-80ab-1cbf4e162e67" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 06:57:13 crc kubenswrapper[4893]: I0121 06:57:13.087741 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:57:13 crc kubenswrapper[4893]: E0121 06:57:13.088036 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:13.588021812 +0000 UTC m=+174.818367714 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:13 crc kubenswrapper[4893]: I0121 06:57:13.204746 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:57:13 crc kubenswrapper[4893]: E0121 06:57:13.228464 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:13.728441217 +0000 UTC m=+174.958787119 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:13 crc kubenswrapper[4893]: I0121 06:57:13.280493 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b56a2761-7750-4e82-bc22-0cf39fed894a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"b56a2761-7750-4e82-bc22-0cf39fed894a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 06:57:13 crc kubenswrapper[4893]: I0121 06:57:13.503998 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 06:57:13 crc kubenswrapper[4893]: I0121 06:57:13.504547 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:57:13 crc kubenswrapper[4893]: E0121 06:57:13.504933 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:14.004900785 +0000 UTC m=+175.235246687 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:13 crc kubenswrapper[4893]: I0121 06:57:13.509949 4893 patch_prober.go:28] interesting pod/router-default-5444994796-fhrs8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 06:57:13 crc kubenswrapper[4893]: [-]has-synced failed: reason withheld Jan 21 06:57:13 crc kubenswrapper[4893]: [+]process-running ok Jan 21 06:57:13 crc kubenswrapper[4893]: healthz check failed Jan 21 06:57:13 crc kubenswrapper[4893]: I0121 06:57:13.509998 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fhrs8" podUID="96180be5-06e7-4b23-80ab-1cbf4e162e67" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 06:57:13 crc kubenswrapper[4893]: I0121 06:57:13.635692 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:57:13 crc kubenswrapper[4893]: E0121 06:57:13.636592 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:14.136574917 +0000 UTC m=+175.366920819 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:13 crc kubenswrapper[4893]: I0121 06:57:13.744366 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:57:13 crc kubenswrapper[4893]: E0121 06:57:13.745717 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:14.245691643 +0000 UTC m=+175.476037545 (durationBeforeRetry 500ms). 
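The router's startup probe fails differently from the "connection refused" cases: the endpoint is up and answers with a structured healthz report, one [+] or [-] line per check, a "healthz check failed" trailer, and an HTTP 500 status. A compact Go sketch that reproduces the shape of that report (a simplified illustration of the common healthz style, not the router's actual code):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// renderHealthz builds a "[+]name ok / [-]name failed" report and an
// overall status code, mirroring the probe bodies in the log above.
func renderHealthz(checks map[string]error) (status int, body string) {
	var names []string
	for n := range checks {
		names = append(names, n)
	}
	sort.Strings(names)

	var b strings.Builder
	status = 200
	for _, n := range names {
		if err := checks[n]; err != nil {
			fmt.Fprintf(&b, "[-]%s failed: reason withheld\n", n)
			status = 500
		} else {
			fmt.Fprintf(&b, "[+]%s ok\n", n)
		}
	}
	if status != 200 {
		b.WriteString("healthz check failed\n")
	}
	return status, b.String()
}

func main() {
	_, body := renderHealthz(map[string]error{
		"backend-http":    fmt.Errorf("no healthy backends"), // reasons assumed
		"has-synced":      fmt.Errorf("not synced"),
		"process-running": nil,
	})
	fmt.Print(body)
}
```

In the entries that follow, the has-synced check flips to "[+]has-synced ok" by 06:57:14, leaving only backend-http failing, and at 06:57:15.550040 the startup probe finally reports "started".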
Jan 21 06:57:13 crc kubenswrapper[4893]: E0121 06:57:13.745717 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:14.245691643 +0000 UTC m=+175.476037545 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:13 crc kubenswrapper[4893]: I0121 06:57:13.846859 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:57:13 crc kubenswrapper[4893]: E0121 06:57:13.847286 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:14.347270146 +0000 UTC m=+175.577616058 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:14 crc kubenswrapper[4893]: I0121 06:57:14.183143 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:57:14 crc kubenswrapper[4893]: E0121 06:57:14.183803 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:14.683783259 +0000 UTC m=+175.914129161 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:14 crc kubenswrapper[4893]: I0121 06:57:14.293216 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:57:14 crc kubenswrapper[4893]: E0121 06:57:14.293692 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:14.79365653 +0000 UTC m=+176.024002432 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:14 crc kubenswrapper[4893]: I0121 06:57:14.523288 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:57:14 crc kubenswrapper[4893]: E0121 06:57:14.523744 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:15.023728063 +0000 UTC m=+176.254073965 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:14 crc kubenswrapper[4893]: I0121 06:57:14.523921 4893 patch_prober.go:28] interesting pod/console-f9d7485db-2k4nh container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body=
Jan 21 06:57:14 crc kubenswrapper[4893]: I0121 06:57:14.523995 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-2k4nh" podUID="198d5d30-97a4-4cc4-85be-4d930e84c2c6" containerName="console" probeResult="failure" output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused"
Jan 21 06:57:14 crc kubenswrapper[4893]: I0121 06:57:14.656493 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:57:14 crc kubenswrapper[4893]: I0121 06:57:14.542483 4893 patch_prober.go:28] interesting pod/router-default-5444994796-fhrs8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 06:57:14 crc kubenswrapper[4893]: [+]has-synced ok
Jan 21 06:57:14 crc kubenswrapper[4893]: [+]process-running ok
Jan 21 06:57:14 crc kubenswrapper[4893]: healthz check failed
Jan 21 06:57:14 crc kubenswrapper[4893]: I0121 06:57:14.656741 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fhrs8" podUID="96180be5-06e7-4b23-80ab-1cbf4e162e67" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 06:57:14 crc kubenswrapper[4893]: E0121 06:57:14.657031 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:15.157015257 +0000 UTC m=+176.387361159 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:14 crc kubenswrapper[4893]: I0121 06:57:14.659094 4893 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-nz9cw container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body=
Jan 21 06:57:14 crc kubenswrapper[4893]: I0121 06:57:14.659135 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-nz9cw" podUID="7b2fc626-d06a-4f0c-ad8c-931c6019a06a" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused"
Jan 21 06:57:14 crc kubenswrapper[4893]: I0121 06:57:14.659224 4893 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-nz9cw container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body=
Jan 21 06:57:14 crc kubenswrapper[4893]: I0121 06:57:14.659241 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-nz9cw" podUID="7b2fc626-d06a-4f0c-ad8c-931c6019a06a" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused"
Jan 21 06:57:14 crc kubenswrapper[4893]: I0121 06:57:14.678938 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-hmqwx" event={"ID":"b10f09b0-7978-4ebf-a5f6-c99737710b3f","Type":"ContainerStarted","Data":"6ebebe0cda995475f441993b1442bc73de0bb1d7fd0430cc06e6ffbe3ef6f862"}
Jan 21 06:57:14 crc kubenswrapper[4893]: I0121 06:57:14.679928 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-hmqwx"
Jan 21 06:57:14 crc kubenswrapper[4893]: I0121 06:57:14.685503 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-rc5gb" event={"ID":"e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8","Type":"ContainerStarted","Data":"44d7247a914ef73ee735b64a15d9a1b8a71aa28363eea11e73000e8fa60a4dd6"}
Jan 21 06:57:14 crc kubenswrapper[4893]: I0121 06:57:14.760398 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
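openshift-config-operator fails both its liveness and its readiness probe against the same /healthz endpoint above. The two probe types often share a handler but differ in consequence: readiness failures only remove the pod from service endpoints, while sustained liveness failures make the kubelet restart the container. A sketch of the two definitions using the upstream API types (the periods and thresholds are assumptions, not values read from this cluster):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// One handler, matching the endpoint in the log (10.217.0.20:8443/healthz
	// resolves to container port 8443 in the pod).
	healthz := corev1.ProbeHandler{
		HTTPGet: &corev1.HTTPGetAction{
			Path:   "/healthz",
			Port:   intstr.FromInt(8443),
			Scheme: corev1.URISchemeHTTPS,
		},
	}
	// Same check, different consequences on failure.
	readiness := corev1.Probe{ProbeHandler: healthz, PeriodSeconds: 10, FailureThreshold: 3}
	liveness := corev1.Probe{ProbeHandler: healthz, PeriodSeconds: 10, FailureThreshold: 3}
	fmt.Printf("readiness: %+v\nliveness: %+v\n", readiness, liveness)
}
```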
Jan 21 06:57:14 crc kubenswrapper[4893]: E0121 06:57:14.760897 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:15.260874814 +0000 UTC m=+176.491220716 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:14 crc kubenswrapper[4893]: I0121 06:57:14.862215 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:57:14 crc kubenswrapper[4893]: E0121 06:57:14.863387 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:15.363368327 +0000 UTC m=+176.593714279 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:15 crc kubenswrapper[4893]: I0121 06:57:15.021157 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:57:15 crc kubenswrapper[4893]: E0121 06:57:15.021460 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:15.52143437 +0000 UTC m=+176.751780272 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:15 crc kubenswrapper[4893]: I0121 06:57:15.021771 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:57:15 crc kubenswrapper[4893]: E0121 06:57:15.022119 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:15.522105881 +0000 UTC m=+176.752451783 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:15 crc kubenswrapper[4893]: I0121 06:57:15.129526 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:57:15 crc kubenswrapper[4893]: E0121 06:57:15.130426 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:15.630405511 +0000 UTC m=+176.860751403 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:15 crc kubenswrapper[4893]: I0121 06:57:15.146972 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-hmqwx" podStartSLOduration=22.146943624 podStartE2EDuration="22.146943624s" podCreationTimestamp="2026-01-21 06:56:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:57:14.877488422 +0000 UTC m=+176.107834324" watchObservedRunningTime="2026-01-21 06:57:15.146943624 +0000 UTC m=+176.377289526"
Jan 21 06:57:15 crc kubenswrapper[4893]: I0121 06:57:15.237551 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:57:15 crc kubenswrapper[4893]: E0121 06:57:15.499122 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:15.999094022 +0000 UTC m=+177.229439924 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:15 crc kubenswrapper[4893]: I0121 06:57:15.499629 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
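The dns-default-hmqwx startup-duration entry above can be checked by hand: with both image-pull timestamps at the zero value (no pull was needed), podStartSLOduration equals the end-to-end duration, i.e. watchObservedRunningTime minus podCreationTimestamp. A few lines of Go confirming the logged 22.146943624s from the two timestamps in that entry:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Values copied from the pod_startup_latency_tracker entry above.
	created, _ := time.Parse(time.RFC3339Nano, "2026-01-21T06:56:53Z")
	running, _ := time.Parse(time.RFC3339Nano, "2026-01-21T06:57:15.146943624Z")
	fmt.Println(running.Sub(created)) // prints 22.146943624s
}
```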
Jan 21 06:57:15 crc kubenswrapper[4893]: E0121 06:57:15.500163 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:16.000138776 +0000 UTC m=+177.230484688 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:15 crc kubenswrapper[4893]: I0121 06:57:15.550040 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-fhrs8"
Jan 21 06:57:15 crc kubenswrapper[4893]: I0121 06:57:15.607572 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:57:15 crc kubenswrapper[4893]: E0121 06:57:15.607952 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:16.107939275 +0000 UTC m=+177.338285177 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:15 crc kubenswrapper[4893]: I0121 06:57:15.608385 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-rc5gb" podStartSLOduration=154.608366299 podStartE2EDuration="2m34.608366299s" podCreationTimestamp="2026-01-21 06:54:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:57:15.148471983 +0000 UTC m=+176.378817895" watchObservedRunningTime="2026-01-21 06:57:15.608366299 +0000 UTC m=+176.838712201"
Jan 21 06:57:15 crc kubenswrapper[4893]: I0121 06:57:15.660759 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-fhrs8"
Jan 21 06:57:15 crc kubenswrapper[4893]: I0121 06:57:15.708233 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:57:15 crc kubenswrapper[4893]: E0121 06:57:15.709622 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:16.209606197 +0000 UTC m=+177.439952099 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:15 crc kubenswrapper[4893]: I0121 06:57:15.767215 4893 generic.go:334] "Generic (PLEG): container finished" podID="ff046ea4-caba-480a-8242-eb099a1f136e" containerID="80ee5b060c65bd0ed034f8bd385b55c48a441360bde3e6494a12853c1a275ff2" exitCode=0
Jan 21 06:57:15 crc kubenswrapper[4893]: I0121 06:57:15.767573 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482965-hgsf2" event={"ID":"ff046ea4-caba-480a-8242-eb099a1f136e","Type":"ContainerDied","Data":"80ee5b060c65bd0ed034f8bd385b55c48a441360bde3e6494a12853c1a275ff2"}
Jan 21 06:57:15 crc kubenswrapper[4893]: I0121 06:57:15.809768 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-46khb" event={"ID":"37347f00-99de-4215-9d76-b5d4996b5cd4","Type":"ContainerStarted","Data":"a4c52a098dda626f556831d3b60ded56411b4cefd03f429a0846110b66fa800b"}
Jan 21 06:57:15 crc kubenswrapper[4893]: I0121 06:57:15.810786 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:57:15 crc kubenswrapper[4893]: E0121 06:57:15.811141 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:16.311128725 +0000 UTC m=+177.541474617 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:15 crc kubenswrapper[4893]: I0121 06:57:15.912362 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:57:15 crc kubenswrapper[4893]: E0121 06:57:15.913096 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:16.413080676 +0000 UTC m=+177.643426568 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:16 crc kubenswrapper[4893]: I0121 06:57:16.014149 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:57:16 crc kubenswrapper[4893]: E0121 06:57:16.014609 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:16.514593904 +0000 UTC m=+177.744939806 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:16 crc kubenswrapper[4893]: I0121 06:57:16.115879 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 06:57:16 crc kubenswrapper[4893]: E0121 06:57:16.116292 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:16.616269497 +0000 UTC m=+177.846615399 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:16 crc kubenswrapper[4893]: I0121 06:57:16.217857 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:57:16 crc kubenswrapper[4893]: E0121 06:57:16.218365 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:16.718343876 +0000 UTC m=+177.948689778 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 06:57:16 crc kubenswrapper[4893]: I0121 06:57:16.319195 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
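The ContainerStarted event for hostpath-provisioner/csi-hostpathplugin-46khb at 06:57:15.809768 above is the beginning of the end of this retry loop: once that plugin finishes registering, kubevirt.io.hostpath-provisioner appears in the kubelet's driver list and the pending mount for image-registry-697d97f7c8-tz8g4 can proceed. A client-go sketch that waits for the registration to become visible through the API (the kubeconfig path is an assumption; the node name "crc" matches this host; the interval and timeout are arbitrary):

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // path assumed
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// The CSINode object mirrors the drivers registered on the node, so
	// polling it shows when the kubelet's "registered CSI drivers" list
	// has picked up the hostpath provisioner.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			n, err := cs.StorageV1().CSINodes().Get(ctx, "crc", metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling through transient errors
			}
			for _, d := range n.Spec.Drivers {
				if d.Name == "kubevirt.io.hostpath-provisioner" {
					return true, nil
				}
			}
			return false, nil
		})
	fmt.Println("driver registered:", err == nil)
}
```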
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:16 crc kubenswrapper[4893]: I0121 06:57:16.319516 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:57:16 crc kubenswrapper[4893]: E0121 06:57:16.319948 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:16.81993231 +0000 UTC m=+178.050278222 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:16 crc kubenswrapper[4893]: I0121 06:57:16.374496 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 21 06:57:16 crc kubenswrapper[4893]: I0121 06:57:16.393466 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 21 06:57:16 crc kubenswrapper[4893]: W0121 06:57:16.400073 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podb1f6fc9f_df01_4880_865b_38a593baaded.slice/crio-4f512ca1dd7af3bdf0f73fc03a3d7c253b79431253ee55e82199ea205ef4317a WatchSource:0}: Error finding container 4f512ca1dd7af3bdf0f73fc03a3d7c253b79431253ee55e82199ea205ef4317a: Status 404 returned error can't find the container with id 4f512ca1dd7af3bdf0f73fc03a3d7c253b79431253ee55e82199ea205ef4317a Jan 21 06:57:16 crc kubenswrapper[4893]: I0121 06:57:16.421115 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:57:16 crc kubenswrapper[4893]: E0121 06:57:16.421814 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:16.921795182 +0000 UTC m=+178.152141084 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:16 crc kubenswrapper[4893]: I0121 06:57:16.522287 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:57:16 crc kubenswrapper[4893]: E0121 06:57:16.522660 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:17.022647241 +0000 UTC m=+178.252993143 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:16 crc kubenswrapper[4893]: I0121 06:57:16.559012 4893 patch_prober.go:28] interesting pod/downloads-7954f5f757-rvfqv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Jan 21 06:57:16 crc kubenswrapper[4893]: I0121 06:57:16.559019 4893 patch_prober.go:28] interesting pod/downloads-7954f5f757-rvfqv container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Jan 21 06:57:16 crc kubenswrapper[4893]: I0121 06:57:16.559060 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-rvfqv" podUID="5c435717-9f91-427d-ae9c-60db11c38d34" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Jan 21 06:57:16 crc kubenswrapper[4893]: I0121 06:57:16.559075 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-rvfqv" podUID="5c435717-9f91-427d-ae9c-60db11c38d34" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Jan 21 06:57:16 crc kubenswrapper[4893]: I0121 06:57:16.597728 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cl27x"] Jan 21 06:57:16 crc kubenswrapper[4893]: I0121 06:57:16.598516 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-cl27x" 
podUID="54db3d0a-b7a6-43db-a4a1-a9f363d0de87" containerName="controller-manager" containerID="cri-o://797be1e7d434907bdf5b0face87887daff41dce6295f1af6aef28bfc968b3622" gracePeriod=30 Jan 21 06:57:16 crc kubenswrapper[4893]: I0121 06:57:16.613862 4893 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-cl27x container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Jan 21 06:57:16 crc kubenswrapper[4893]: I0121 06:57:16.613912 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-cl27x" podUID="54db3d0a-b7a6-43db-a4a1-a9f363d0de87" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" Jan 21 06:57:16 crc kubenswrapper[4893]: I0121 06:57:16.625745 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:57:16 crc kubenswrapper[4893]: E0121 06:57:16.626097 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:17.126080233 +0000 UTC m=+178.356426135 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:16 crc kubenswrapper[4893]: I0121 06:57:16.750537 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:57:16 crc kubenswrapper[4893]: E0121 06:57:16.750941 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:17.250926976 +0000 UTC m=+178.481272878 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:16 crc kubenswrapper[4893]: I0121 06:57:16.852136 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:57:16 crc kubenswrapper[4893]: E0121 06:57:16.852473 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:17.352453267 +0000 UTC m=+178.582799169 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:16 crc kubenswrapper[4893]: I0121 06:57:16.864536 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zggq2"] Jan 21 06:57:16 crc kubenswrapper[4893]: I0121 06:57:16.864890 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zggq2" podUID="52dc45a6-094c-4330-b824-0e46bd30416b" containerName="route-controller-manager" containerID="cri-o://dc3abd552ad32a1e9edf02b66ed29ec09a93edede5d608e16babeea15545928d" gracePeriod=30 Jan 21 06:57:16 crc kubenswrapper[4893]: I0121 06:57:16.889816 4893 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zpm9z container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 21 06:57:16 crc kubenswrapper[4893]: I0121 06:57:16.889852 4893 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zpm9z container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 21 06:57:16 crc kubenswrapper[4893]: I0121 06:57:16.889865 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zpm9z" podUID="949c0965-b10c-4608-b2d0-effa8e19dff1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 21 06:57:16 crc kubenswrapper[4893]: I0121 06:57:16.889890 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b56a2761-7750-4e82-bc22-0cf39fed894a","Type":"ContainerStarted","Data":"d8f35dec837fa46ad4ad6cc7da2123abc8300af337413bf65e33b7ab4f5b839e"} Jan 21 06:57:16 crc kubenswrapper[4893]: I0121 06:57:16.889910 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-zpm9z" podUID="949c0965-b10c-4608-b2d0-effa8e19dff1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 21 06:57:16 crc kubenswrapper[4893]: I0121 06:57:16.907614 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-879f6c89f-cl27x_54db3d0a-b7a6-43db-a4a1-a9f363d0de87/controller-manager/0.log" Jan 21 06:57:16 crc kubenswrapper[4893]: I0121 06:57:16.907671 4893 generic.go:334] "Generic (PLEG): container finished" podID="54db3d0a-b7a6-43db-a4a1-a9f363d0de87" containerID="797be1e7d434907bdf5b0face87887daff41dce6295f1af6aef28bfc968b3622" exitCode=2 Jan 21 06:57:16 crc kubenswrapper[4893]: I0121 06:57:16.907742 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-cl27x" event={"ID":"54db3d0a-b7a6-43db-a4a1-a9f363d0de87","Type":"ContainerDied","Data":"797be1e7d434907bdf5b0face87887daff41dce6295f1af6aef28bfc968b3622"} Jan 21 06:57:16 crc kubenswrapper[4893]: I0121 06:57:16.908636 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b1f6fc9f-df01-4880-865b-38a593baaded","Type":"ContainerStarted","Data":"4f512ca1dd7af3bdf0f73fc03a3d7c253b79431253ee55e82199ea205ef4317a"} Jan 21 06:57:16 crc kubenswrapper[4893]: I0121 06:57:16.931151 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-46khb" event={"ID":"37347f00-99de-4215-9d76-b5d4996b5cd4","Type":"ContainerStarted","Data":"bd1f64c6a834973a618dea2839e900535370aa31c2e56dbf44f3aed74093a8cd"} Jan 21 06:57:16 crc kubenswrapper[4893]: I0121 06:57:16.953648 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:57:16 crc kubenswrapper[4893]: E0121 06:57:16.963059 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:17.46302756 +0000 UTC m=+178.693373462 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:17 crc kubenswrapper[4893]: I0121 06:57:17.056256 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:57:17 crc kubenswrapper[4893]: E0121 06:57:17.056922 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:17.556905425 +0000 UTC m=+178.787251317 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:17 crc kubenswrapper[4893]: I0121 06:57:17.098227 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6" Jan 21 06:57:17 crc kubenswrapper[4893]: I0121 06:57:17.159655 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:57:17 crc kubenswrapper[4893]: E0121 06:57:17.160068 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:17.660054429 +0000 UTC m=+178.890400331 (durationBeforeRetry 500ms). 
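The "Killing container with a grace period" entries (gracePeriod=30 for both controller-manager and route-controller-manager) and the ContainerDied events that follow (exitCode=2, then exitCode=0) reflect the usual termination contract: SIGTERM, wait out the grace period, SIGKILL if needed. The kubelet delegates the real work to CRI-O over the CRI API; this local-process sketch only illustrates the contract itself.

```go
// SIGTERM-then-SIGKILL sketch of graceful termination semantics.
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

func killWithGrace(cmd *exec.Cmd, grace time.Duration) {
	_ = cmd.Process.Signal(syscall.SIGTERM)
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		fmt.Println("exited within grace period:", err) // exit code surfaces here
	case <-time.After(grace):
		_ = cmd.Process.Kill()
		fmt.Println("grace period elapsed; sent SIGKILL")
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		fmt.Println(err)
		return
	}
	killWithGrace(cmd, 2*time.Second) // 2s stands in for the 30s in the log
}
```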
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:17 crc kubenswrapper[4893]: I0121 06:57:17.327592 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:57:17 crc kubenswrapper[4893]: E0121 06:57:17.327771 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:17.827741662 +0000 UTC m=+179.058087574 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:17 crc kubenswrapper[4893]: I0121 06:57:17.328134 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:57:17 crc kubenswrapper[4893]: E0121 06:57:17.330391 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:17.830377057 +0000 UTC m=+179.060722959 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:17 crc kubenswrapper[4893]: I0121 06:57:17.332246 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6zl8m" Jan 21 06:57:17 crc kubenswrapper[4893]: I0121 06:57:17.414220 4893 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-cl27x container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Jan 21 06:57:17 crc kubenswrapper[4893]: I0121 06:57:17.414287 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-cl27x" podUID="54db3d0a-b7a6-43db-a4a1-a9f363d0de87" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" Jan 21 06:57:17 crc kubenswrapper[4893]: I0121 06:57:17.432304 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:57:17 crc kubenswrapper[4893]: E0121 06:57:17.432702 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:17.932674323 +0000 UTC m=+179.163020225 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:17 crc kubenswrapper[4893]: I0121 06:57:17.538489 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:57:17 crc kubenswrapper[4893]: E0121 06:57:17.538947 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:18.038922636 +0000 UTC m=+179.269268608 (durationBeforeRetry 500ms). 
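The "Generic (PLEG): container finished" and "SyncLoop (PLEG): event for pod" lines above come from the pod lifecycle event generator: it relists container state from the runtime and feeds typed events into the kubelet sync loop. A simplified stand-in for that event shape (the real types live in the kubelet's pleg package):

```go
// Simplified PLEG-style events; field names are illustrative stand-ins.
package main

import "fmt"

type eventType string

const (
	containerStarted eventType = "ContainerStarted"
	containerDied    eventType = "ContainerDied"
)

type plegEvent struct {
	podID    string
	typ      eventType
	data     string // container ID
	exitCode int    // meaningful only for ContainerDied
}

func handle(e plegEvent) {
	switch e.typ {
	case containerStarted:
		fmt.Printf("SyncLoop (PLEG): pod %s started container %s\n", e.podID, e.data)
	case containerDied:
		fmt.Printf("SyncLoop (PLEG): pod %s container %s finished, exitCode=%d\n",
			e.podID, e.data, e.exitCode)
	}
}

func main() {
	handle(plegEvent{podID: "54db3d0a", typ: containerDied, data: "797be1e7", exitCode: 2})
	handle(plegEvent{podID: "b1f6fc9f", typ: containerStarted, data: "4f512ca1"})
}
```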
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:17 crc kubenswrapper[4893]: I0121 06:57:17.614973 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rhprd" Jan 21 06:57:17 crc kubenswrapper[4893]: I0121 06:57:17.645533 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:57:17 crc kubenswrapper[4893]: E0121 06:57:17.646308 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:18.146287306 +0000 UTC m=+179.376633208 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:17 crc kubenswrapper[4893]: I0121 06:57:17.748787 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:57:17 crc kubenswrapper[4893]: E0121 06:57:17.749178 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:18.249163571 +0000 UTC m=+179.479509473 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:17 crc kubenswrapper[4893]: I0121 06:57:17.760189 4893 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 21 06:57:17 crc kubenswrapper[4893]: I0121 06:57:17.828430 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482965-hgsf2" Jan 21 06:57:17 crc kubenswrapper[4893]: I0121 06:57:17.855626 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qldvk\" (UniqueName: \"kubernetes.io/projected/ff046ea4-caba-480a-8242-eb099a1f136e-kube-api-access-qldvk\") pod \"ff046ea4-caba-480a-8242-eb099a1f136e\" (UID: \"ff046ea4-caba-480a-8242-eb099a1f136e\") " Jan 21 06:57:17 crc kubenswrapper[4893]: I0121 06:57:17.855844 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:57:17 crc kubenswrapper[4893]: I0121 06:57:17.855913 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ff046ea4-caba-480a-8242-eb099a1f136e-config-volume\") pod \"ff046ea4-caba-480a-8242-eb099a1f136e\" (UID: \"ff046ea4-caba-480a-8242-eb099a1f136e\") " Jan 21 06:57:17 crc kubenswrapper[4893]: I0121 06:57:17.855977 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ff046ea4-caba-480a-8242-eb099a1f136e-secret-volume\") pod \"ff046ea4-caba-480a-8242-eb099a1f136e\" (UID: \"ff046ea4-caba-480a-8242-eb099a1f136e\") " Jan 21 06:57:17 crc kubenswrapper[4893]: E0121 06:57:17.856810 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:18.356779988 +0000 UTC m=+179.587125890 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:17 crc kubenswrapper[4893]: I0121 06:57:17.857323 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff046ea4-caba-480a-8242-eb099a1f136e-config-volume" (OuterVolumeSpecName: "config-volume") pod "ff046ea4-caba-480a-8242-eb099a1f136e" (UID: "ff046ea4-caba-480a-8242-eb099a1f136e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:57:17 crc kubenswrapper[4893]: I0121 06:57:17.870899 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q6lc9" Jan 21 06:57:17 crc kubenswrapper[4893]: I0121 06:57:17.881932 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff046ea4-caba-480a-8242-eb099a1f136e-kube-api-access-qldvk" (OuterVolumeSpecName: "kube-api-access-qldvk") pod "ff046ea4-caba-480a-8242-eb099a1f136e" (UID: "ff046ea4-caba-480a-8242-eb099a1f136e"). InnerVolumeSpecName "kube-api-access-qldvk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:57:17 crc kubenswrapper[4893]: I0121 06:57:17.882117 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff046ea4-caba-480a-8242-eb099a1f136e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ff046ea4-caba-480a-8242-eb099a1f136e" (UID: "ff046ea4-caba-480a-8242-eb099a1f136e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:57:17 crc kubenswrapper[4893]: I0121 06:57:17.908863 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-879f6c89f-cl27x_54db3d0a-b7a6-43db-a4a1-a9f363d0de87/controller-manager/0.log" Jan 21 06:57:17 crc kubenswrapper[4893]: I0121 06:57:17.908947 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-cl27x" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:17.944048 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b1f6fc9f-df01-4880-865b-38a593baaded","Type":"ContainerStarted","Data":"cd3e5e343db06aeea12ad7ea5df78632a5f24df06da0ea20e8bb1943c508f8af"} Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:17.945993 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b56a2761-7750-4e82-bc22-0cf39fed894a","Type":"ContainerStarted","Data":"0c578b0e3b1ea053d355794aed5ff31f439979fe50eb217b50d447030269487a"} Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:17.947768 4893 generic.go:334] "Generic (PLEG): container finished" podID="52dc45a6-094c-4330-b824-0e46bd30416b" containerID="dc3abd552ad32a1e9edf02b66ed29ec09a93edede5d608e16babeea15545928d" exitCode=0 Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:17.947821 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zggq2" event={"ID":"52dc45a6-094c-4330-b824-0e46bd30416b","Type":"ContainerDied","Data":"dc3abd552ad32a1e9edf02b66ed29ec09a93edede5d608e16babeea15545928d"} Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:17.957575 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/54db3d0a-b7a6-43db-a4a1-a9f363d0de87-client-ca\") pod \"54db3d0a-b7a6-43db-a4a1-a9f363d0de87\" (UID: \"54db3d0a-b7a6-43db-a4a1-a9f363d0de87\") " Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:17.957644 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482965-hgsf2" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:17.966871 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482965-hgsf2" event={"ID":"ff046ea4-caba-480a-8242-eb099a1f136e","Type":"ContainerDied","Data":"b2256e98c2b16096c9a7209ec9b025bc25561fbc5762f443e94bc120159c0cd4"} Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:17.966933 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2256e98c2b16096c9a7209ec9b025bc25561fbc5762f443e94bc120159c0cd4" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:17.957648 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-976w8\" (UniqueName: \"kubernetes.io/projected/54db3d0a-b7a6-43db-a4a1-a9f363d0de87-kube-api-access-976w8\") pod \"54db3d0a-b7a6-43db-a4a1-a9f363d0de87\" (UID: \"54db3d0a-b7a6-43db-a4a1-a9f363d0de87\") " Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:17.967007 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/54db3d0a-b7a6-43db-a4a1-a9f363d0de87-proxy-ca-bundles\") pod \"54db3d0a-b7a6-43db-a4a1-a9f363d0de87\" (UID: \"54db3d0a-b7a6-43db-a4a1-a9f363d0de87\") " Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:17.967039 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54db3d0a-b7a6-43db-a4a1-a9f363d0de87-config\") pod \"54db3d0a-b7a6-43db-a4a1-a9f363d0de87\" (UID: \"54db3d0a-b7a6-43db-a4a1-a9f363d0de87\") " Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:17.967065 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54db3d0a-b7a6-43db-a4a1-a9f363d0de87-serving-cert\") pod \"54db3d0a-b7a6-43db-a4a1-a9f363d0de87\" (UID: \"54db3d0a-b7a6-43db-a4a1-a9f363d0de87\") " Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:17.967279 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:17.967408 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qldvk\" (UniqueName: \"kubernetes.io/projected/ff046ea4-caba-480a-8242-eb099a1f136e-kube-api-access-qldvk\") on node \"crc\" DevicePath \"\"" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:17.967422 4893 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ff046ea4-caba-480a-8242-eb099a1f136e-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:17.967443 4893 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ff046ea4-caba-480a-8242-eb099a1f136e-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:17.968596 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54db3d0a-b7a6-43db-a4a1-a9f363d0de87-proxy-ca-bundles" 
(OuterVolumeSpecName: "proxy-ca-bundles") pod "54db3d0a-b7a6-43db-a4a1-a9f363d0de87" (UID: "54db3d0a-b7a6-43db-a4a1-a9f363d0de87"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:57:18 crc kubenswrapper[4893]: E0121 06:57:17.968932 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:18.468920482 +0000 UTC m=+179.699266384 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:17.969143 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54db3d0a-b7a6-43db-a4a1-a9f363d0de87-config" (OuterVolumeSpecName: "config") pod "54db3d0a-b7a6-43db-a4a1-a9f363d0de87" (UID: "54db3d0a-b7a6-43db-a4a1-a9f363d0de87"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:17.969179 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54db3d0a-b7a6-43db-a4a1-a9f363d0de87-client-ca" (OuterVolumeSpecName: "client-ca") pod "54db3d0a-b7a6-43db-a4a1-a9f363d0de87" (UID: "54db3d0a-b7a6-43db-a4a1-a9f363d0de87"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.081708 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-879f6c89f-cl27x_54db3d0a-b7a6-43db-a4a1-a9f363d0de87/controller-manager/0.log" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.081757 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.081779 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-cl27x" event={"ID":"54db3d0a-b7a6-43db-a4a1-a9f363d0de87","Type":"ContainerDied","Data":"6851a4c386fa8c5ff9da7acca3bb6898e9999fb758e95f3949eff594964226c2"} Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.081843 4893 scope.go:117] "RemoveContainer" containerID="797be1e7d434907bdf5b0face87887daff41dce6295f1af6aef28bfc968b3622" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.081963 4893 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/54db3d0a-b7a6-43db-a4a1-a9f363d0de87-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.081978 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54db3d0a-b7a6-43db-a4a1-a9f363d0de87-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.081987 4893 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/54db3d0a-b7a6-43db-a4a1-a9f363d0de87-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.081957 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-nz9cw" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.082008 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-cl27x" Jan 21 06:57:18 crc kubenswrapper[4893]: E0121 06:57:18.082263 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:18.582104069 +0000 UTC m=+179.812450011 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.122315 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54db3d0a-b7a6-43db-a4a1-a9f363d0de87-kube-api-access-976w8" (OuterVolumeSpecName: "kube-api-access-976w8") pod "54db3d0a-b7a6-43db-a4a1-a9f363d0de87" (UID: "54db3d0a-b7a6-43db-a4a1-a9f363d0de87"). InnerVolumeSpecName "kube-api-access-976w8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.122992 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54db3d0a-b7a6-43db-a4a1-a9f363d0de87-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "54db3d0a-b7a6-43db-a4a1-a9f363d0de87" (UID: "54db3d0a-b7a6-43db-a4a1-a9f363d0de87"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.173593 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kjxh2"] Jan 21 06:57:18 crc kubenswrapper[4893]: E0121 06:57:18.173872 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54db3d0a-b7a6-43db-a4a1-a9f363d0de87" containerName="controller-manager" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.173886 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="54db3d0a-b7a6-43db-a4a1-a9f363d0de87" containerName="controller-manager" Jan 21 06:57:18 crc kubenswrapper[4893]: E0121 06:57:18.173899 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff046ea4-caba-480a-8242-eb099a1f136e" containerName="collect-profiles" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.173905 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff046ea4-caba-480a-8242-eb099a1f136e" containerName="collect-profiles" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.174002 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="54db3d0a-b7a6-43db-a4a1-a9f363d0de87" containerName="controller-manager" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.174014 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff046ea4-caba-480a-8242-eb099a1f136e" containerName="collect-profiles" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.181331 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kjxh2" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.183209 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.183317 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-976w8\" (UniqueName: \"kubernetes.io/projected/54db3d0a-b7a6-43db-a4a1-a9f363d0de87-kube-api-access-976w8\") on node \"crc\" DevicePath \"\"" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.183331 4893 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54db3d0a-b7a6-43db-a4a1-a9f363d0de87-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:57:18 crc kubenswrapper[4893]: E0121 06:57:18.183671 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:18.683655951 +0000 UTC m=+179.914001853 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.183754 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zggq2" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.184931 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.222796 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kjxh2"] Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.287220 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.287320 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52dc45a6-094c-4330-b824-0e46bd30416b-config\") pod \"52dc45a6-094c-4330-b824-0e46bd30416b\" (UID: \"52dc45a6-094c-4330-b824-0e46bd30416b\") " Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.287362 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/52dc45a6-094c-4330-b824-0e46bd30416b-client-ca\") pod \"52dc45a6-094c-4330-b824-0e46bd30416b\" (UID: \"52dc45a6-094c-4330-b824-0e46bd30416b\") " Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.287395 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52dc45a6-094c-4330-b824-0e46bd30416b-serving-cert\") pod \"52dc45a6-094c-4330-b824-0e46bd30416b\" (UID: \"52dc45a6-094c-4330-b824-0e46bd30416b\") " Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.287413 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r749q\" (UniqueName: \"kubernetes.io/projected/52dc45a6-094c-4330-b824-0e46bd30416b-kube-api-access-r749q\") pod \"52dc45a6-094c-4330-b824-0e46bd30416b\" (UID: \"52dc45a6-094c-4330-b824-0e46bd30416b\") " Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.287593 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmqkx\" (UniqueName: \"kubernetes.io/projected/f92d48d9-4ed9-42bb-b811-a8f43bbac2cd-kube-api-access-cmqkx\") pod \"community-operators-kjxh2\" (UID: \"f92d48d9-4ed9-42bb-b811-a8f43bbac2cd\") " pod="openshift-marketplace/community-operators-kjxh2" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.287626 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f92d48d9-4ed9-42bb-b811-a8f43bbac2cd-utilities\") pod \"community-operators-kjxh2\" (UID: \"f92d48d9-4ed9-42bb-b811-a8f43bbac2cd\") " pod="openshift-marketplace/community-operators-kjxh2" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.287695 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f92d48d9-4ed9-42bb-b811-a8f43bbac2cd-catalog-content\") pod \"community-operators-kjxh2\" (UID: \"f92d48d9-4ed9-42bb-b811-a8f43bbac2cd\") " pod="openshift-marketplace/community-operators-kjxh2" Jan 21 06:57:18 crc kubenswrapper[4893]: 
E0121 06:57:18.288643 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:18.788625223 +0000 UTC m=+180.018971115 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.289306 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52dc45a6-094c-4330-b824-0e46bd30416b-config" (OuterVolumeSpecName: "config") pod "52dc45a6-094c-4330-b824-0e46bd30416b" (UID: "52dc45a6-094c-4330-b824-0e46bd30416b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.289744 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52dc45a6-094c-4330-b824-0e46bd30416b-client-ca" (OuterVolumeSpecName: "client-ca") pod "52dc45a6-094c-4330-b824-0e46bd30416b" (UID: "52dc45a6-094c-4330-b824-0e46bd30416b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.313408 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52dc45a6-094c-4330-b824-0e46bd30416b-kube-api-access-r749q" (OuterVolumeSpecName: "kube-api-access-r749q") pod "52dc45a6-094c-4330-b824-0e46bd30416b" (UID: "52dc45a6-094c-4330-b824-0e46bd30416b"). InnerVolumeSpecName "kube-api-access-r749q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.313494 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52dc45a6-094c-4330-b824-0e46bd30416b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "52dc45a6-094c-4330-b824-0e46bd30416b" (UID: "52dc45a6-094c-4330-b824-0e46bd30416b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.329377 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-t86sj"] Jan 21 06:57:18 crc kubenswrapper[4893]: E0121 06:57:18.343962 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52dc45a6-094c-4330-b824-0e46bd30416b" containerName="route-controller-manager" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.344225 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="52dc45a6-094c-4330-b824-0e46bd30416b" containerName="route-controller-manager" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.344395 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="52dc45a6-094c-4330-b824-0e46bd30416b" containerName="route-controller-manager" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.345240 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t86sj"] Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.345499 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t86sj" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.367885 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=7.367866106 podStartE2EDuration="7.367866106s" podCreationTimestamp="2026-01-21 06:57:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:57:18.36611665 +0000 UTC m=+179.596462552" watchObservedRunningTime="2026-01-21 06:57:18.367866106 +0000 UTC m=+179.598212008" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.388537 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmqkx\" (UniqueName: \"kubernetes.io/projected/f92d48d9-4ed9-42bb-b811-a8f43bbac2cd-kube-api-access-cmqkx\") pod \"community-operators-kjxh2\" (UID: \"f92d48d9-4ed9-42bb-b811-a8f43bbac2cd\") " pod="openshift-marketplace/community-operators-kjxh2" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.390958 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be44c297-715e-45f6-b165-244c39484f15-utilities\") pod \"community-operators-t86sj\" (UID: \"be44c297-715e-45f6-b165-244c39484f15\") " pod="openshift-marketplace/community-operators-t86sj" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.391228 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.391352 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f92d48d9-4ed9-42bb-b811-a8f43bbac2cd-utilities\") pod \"community-operators-kjxh2\" (UID: \"f92d48d9-4ed9-42bb-b811-a8f43bbac2cd\") " pod="openshift-marketplace/community-operators-kjxh2" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.391467 4893 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmzrm\" (UniqueName: \"kubernetes.io/projected/be44c297-715e-45f6-b165-244c39484f15-kube-api-access-hmzrm\") pod \"community-operators-t86sj\" (UID: \"be44c297-715e-45f6-b165-244c39484f15\") " pod="openshift-marketplace/community-operators-t86sj" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.391564 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be44c297-715e-45f6-b165-244c39484f15-catalog-content\") pod \"community-operators-t86sj\" (UID: \"be44c297-715e-45f6-b165-244c39484f15\") " pod="openshift-marketplace/community-operators-t86sj" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.391645 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f92d48d9-4ed9-42bb-b811-a8f43bbac2cd-catalog-content\") pod \"community-operators-kjxh2\" (UID: \"f92d48d9-4ed9-42bb-b811-a8f43bbac2cd\") " pod="openshift-marketplace/community-operators-kjxh2" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.392057 4893 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/52dc45a6-094c-4330-b824-0e46bd30416b-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.392469 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r749q\" (UniqueName: \"kubernetes.io/projected/52dc45a6-094c-4330-b824-0e46bd30416b-kube-api-access-r749q\") on node \"crc\" DevicePath \"\"" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.392546 4893 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52dc45a6-094c-4330-b824-0e46bd30416b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.392622 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52dc45a6-094c-4330-b824-0e46bd30416b-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.393702 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f92d48d9-4ed9-42bb-b811-a8f43bbac2cd-utilities\") pod \"community-operators-kjxh2\" (UID: \"f92d48d9-4ed9-42bb-b811-a8f43bbac2cd\") " pod="openshift-marketplace/community-operators-kjxh2" Jan 21 06:57:18 crc kubenswrapper[4893]: E0121 06:57:18.394054 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:57:18.894036109 +0000 UTC m=+180.124382011 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tz8g4" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.394274 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f92d48d9-4ed9-42bb-b811-a8f43bbac2cd-catalog-content\") pod \"community-operators-kjxh2\" (UID: \"f92d48d9-4ed9-42bb-b811-a8f43bbac2cd\") " pod="openshift-marketplace/community-operators-kjxh2" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.417452 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmqkx\" (UniqueName: \"kubernetes.io/projected/f92d48d9-4ed9-42bb-b811-a8f43bbac2cd-kube-api-access-cmqkx\") pod \"community-operators-kjxh2\" (UID: \"f92d48d9-4ed9-42bb-b811-a8f43bbac2cd\") " pod="openshift-marketplace/community-operators-kjxh2" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.470153 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-hmqwx" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.473445 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nq4z8"] Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.474735 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nq4z8" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.477817 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=8.477795888 podStartE2EDuration="8.477795888s" podCreationTimestamp="2026-01-21 06:57:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:57:18.469263653 +0000 UTC m=+179.699609555" watchObservedRunningTime="2026-01-21 06:57:18.477795888 +0000 UTC m=+179.708141800" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.479459 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.495121 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.495321 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmzrm\" (UniqueName: \"kubernetes.io/projected/be44c297-715e-45f6-b165-244c39484f15-kube-api-access-hmzrm\") pod \"community-operators-t86sj\" (UID: \"be44c297-715e-45f6-b165-244c39484f15\") " pod="openshift-marketplace/community-operators-t86sj" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.495359 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/be44c297-715e-45f6-b165-244c39484f15-catalog-content\") pod \"community-operators-t86sj\" (UID: \"be44c297-715e-45f6-b165-244c39484f15\") " pod="openshift-marketplace/community-operators-t86sj" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.495391 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/582d8449-096d-4bfa-9dcc-9ef0b8661d50-catalog-content\") pod \"certified-operators-nq4z8\" (UID: \"582d8449-096d-4bfa-9dcc-9ef0b8661d50\") " pod="openshift-marketplace/certified-operators-nq4z8" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.495414 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzz84\" (UniqueName: \"kubernetes.io/projected/582d8449-096d-4bfa-9dcc-9ef0b8661d50-kube-api-access-xzz84\") pod \"certified-operators-nq4z8\" (UID: \"582d8449-096d-4bfa-9dcc-9ef0b8661d50\") " pod="openshift-marketplace/certified-operators-nq4z8" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.495433 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/582d8449-096d-4bfa-9dcc-9ef0b8661d50-utilities\") pod \"certified-operators-nq4z8\" (UID: \"582d8449-096d-4bfa-9dcc-9ef0b8661d50\") " pod="openshift-marketplace/certified-operators-nq4z8" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.495521 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be44c297-715e-45f6-b165-244c39484f15-utilities\") pod \"community-operators-t86sj\" (UID: \"be44c297-715e-45f6-b165-244c39484f15\") " pod="openshift-marketplace/community-operators-t86sj" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.496143 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nq4z8"] Jan 21 06:57:18 crc kubenswrapper[4893]: E0121 06:57:18.496269 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:57:18.996252513 +0000 UTC m=+180.226598415 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.496991 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be44c297-715e-45f6-b165-244c39484f15-catalog-content\") pod \"community-operators-t86sj\" (UID: \"be44c297-715e-45f6-b165-244c39484f15\") " pod="openshift-marketplace/community-operators-t86sj" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.497381 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be44c297-715e-45f6-b165-244c39484f15-utilities\") pod \"community-operators-t86sj\" (UID: \"be44c297-715e-45f6-b165-244c39484f15\") " pod="openshift-marketplace/community-operators-t86sj" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.518814 4893 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-21T06:57:17.760227497Z","Handler":null,"Name":""} Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.519042 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kjxh2" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.534796 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmzrm\" (UniqueName: \"kubernetes.io/projected/be44c297-715e-45f6-b165-244c39484f15-kube-api-access-hmzrm\") pod \"community-operators-t86sj\" (UID: \"be44c297-715e-45f6-b165-244c39484f15\") " pod="openshift-marketplace/community-operators-t86sj" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.550037 4893 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.550120 4893 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.590882 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cl27x"] Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.598733 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/582d8449-096d-4bfa-9dcc-9ef0b8661d50-catalog-content\") pod \"certified-operators-nq4z8\" (UID: \"582d8449-096d-4bfa-9dcc-9ef0b8661d50\") " pod="openshift-marketplace/certified-operators-nq4z8" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.598778 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzz84\" (UniqueName: \"kubernetes.io/projected/582d8449-096d-4bfa-9dcc-9ef0b8661d50-kube-api-access-xzz84\") pod \"certified-operators-nq4z8\" (UID: \"582d8449-096d-4bfa-9dcc-9ef0b8661d50\") " 
pod="openshift-marketplace/certified-operators-nq4z8" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.598799 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/582d8449-096d-4bfa-9dcc-9ef0b8661d50-utilities\") pod \"certified-operators-nq4z8\" (UID: \"582d8449-096d-4bfa-9dcc-9ef0b8661d50\") " pod="openshift-marketplace/certified-operators-nq4z8" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.607194 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/582d8449-096d-4bfa-9dcc-9ef0b8661d50-catalog-content\") pod \"certified-operators-nq4z8\" (UID: \"582d8449-096d-4bfa-9dcc-9ef0b8661d50\") " pod="openshift-marketplace/certified-operators-nq4z8" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.607534 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/582d8449-096d-4bfa-9dcc-9ef0b8661d50-utilities\") pod \"certified-operators-nq4z8\" (UID: \"582d8449-096d-4bfa-9dcc-9ef0b8661d50\") " pod="openshift-marketplace/certified-operators-nq4z8" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.610447 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.611672 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cl27x"] Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.643031 4893 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.643068 4893 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.650241 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzz84\" (UniqueName: \"kubernetes.io/projected/582d8449-096d-4bfa-9dcc-9ef0b8661d50-kube-api-access-xzz84\") pod \"certified-operators-nq4z8\" (UID: \"582d8449-096d-4bfa-9dcc-9ef0b8661d50\") " pod="openshift-marketplace/certified-operators-nq4z8" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.666517 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8bgpm"] Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.667797 4893 util.go:30] "No sandbox for pod can be found. 
Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.691084 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8bgpm"]
Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.694921 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t86sj"
Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.719586 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws7jj\" (UniqueName: \"kubernetes.io/projected/2514419f-4c60-442d-bbc7-0c9b8c765cc4-kube-api-access-ws7jj\") pod \"certified-operators-8bgpm\" (UID: \"2514419f-4c60-442d-bbc7-0c9b8c765cc4\") " pod="openshift-marketplace/certified-operators-8bgpm"
Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.719638 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2514419f-4c60-442d-bbc7-0c9b8c765cc4-utilities\") pod \"certified-operators-8bgpm\" (UID: \"2514419f-4c60-442d-bbc7-0c9b8c765cc4\") " pod="openshift-marketplace/certified-operators-8bgpm"
Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.719683 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2514419f-4c60-442d-bbc7-0c9b8c765cc4-catalog-content\") pod \"certified-operators-8bgpm\" (UID: \"2514419f-4c60-442d-bbc7-0c9b8c765cc4\") " pod="openshift-marketplace/certified-operators-8bgpm"
Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.787372 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tz8g4\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.832132 4893 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/certified-operators-nq4z8" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.834493 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.834713 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ws7jj\" (UniqueName: \"kubernetes.io/projected/2514419f-4c60-442d-bbc7-0c9b8c765cc4-kube-api-access-ws7jj\") pod \"certified-operators-8bgpm\" (UID: \"2514419f-4c60-442d-bbc7-0c9b8c765cc4\") " pod="openshift-marketplace/certified-operators-8bgpm" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.834762 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2514419f-4c60-442d-bbc7-0c9b8c765cc4-utilities\") pod \"certified-operators-8bgpm\" (UID: \"2514419f-4c60-442d-bbc7-0c9b8c765cc4\") " pod="openshift-marketplace/certified-operators-8bgpm" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.834805 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2514419f-4c60-442d-bbc7-0c9b8c765cc4-catalog-content\") pod \"certified-operators-8bgpm\" (UID: \"2514419f-4c60-442d-bbc7-0c9b8c765cc4\") " pod="openshift-marketplace/certified-operators-8bgpm" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.835431 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2514419f-4c60-442d-bbc7-0c9b8c765cc4-catalog-content\") pod \"certified-operators-8bgpm\" (UID: \"2514419f-4c60-442d-bbc7-0c9b8c765cc4\") " pod="openshift-marketplace/certified-operators-8bgpm" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.836193 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2514419f-4c60-442d-bbc7-0c9b8c765cc4-utilities\") pod \"certified-operators-8bgpm\" (UID: \"2514419f-4c60-442d-bbc7-0c9b8c765cc4\") " pod="openshift-marketplace/certified-operators-8bgpm" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.867296 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ws7jj\" (UniqueName: \"kubernetes.io/projected/2514419f-4c60-442d-bbc7-0c9b8c765cc4-kube-api-access-ws7jj\") pod \"certified-operators-8bgpm\" (UID: \"2514419f-4c60-442d-bbc7-0c9b8c765cc4\") " pod="openshift-marketplace/certified-operators-8bgpm" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.911142 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.997620 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kjxh2"] Jan 21 06:57:18 crc kubenswrapper[4893]: I0121 06:57:18.998277 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 21 06:57:19 crc kubenswrapper[4893]: I0121 06:57:19.002809 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8bgpm" Jan 21 06:57:19 crc kubenswrapper[4893]: I0121 06:57:19.131838 4893 generic.go:334] "Generic (PLEG): container finished" podID="b1f6fc9f-df01-4880-865b-38a593baaded" containerID="cd3e5e343db06aeea12ad7ea5df78632a5f24df06da0ea20e8bb1943c508f8af" exitCode=0 Jan 21 06:57:19 crc kubenswrapper[4893]: I0121 06:57:19.131929 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b1f6fc9f-df01-4880-865b-38a593baaded","Type":"ContainerDied","Data":"cd3e5e343db06aeea12ad7ea5df78632a5f24df06da0ea20e8bb1943c508f8af"} Jan 21 06:57:19 crc kubenswrapper[4893]: I0121 06:57:19.133070 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kjxh2" event={"ID":"f92d48d9-4ed9-42bb-b811-a8f43bbac2cd","Type":"ContainerStarted","Data":"977d7d579a1ff452b5a84436b7681f553b481ae57c16b7a9628faeb86c883a09"} Jan 21 06:57:19 crc kubenswrapper[4893]: I0121 06:57:19.154755 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-46khb" event={"ID":"37347f00-99de-4215-9d76-b5d4996b5cd4","Type":"ContainerStarted","Data":"c591f5c05370bbdfe819559187ee687dac2c49c8c6c0194cfec2ccde58664d3e"} Jan 21 06:57:19 crc kubenswrapper[4893]: I0121 06:57:19.215937 4893 generic.go:334] "Generic (PLEG): container finished" podID="b56a2761-7750-4e82-bc22-0cf39fed894a" containerID="0c578b0e3b1ea053d355794aed5ff31f439979fe50eb217b50d447030269487a" exitCode=0 Jan 21 06:57:19 crc kubenswrapper[4893]: I0121 06:57:19.216227 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b56a2761-7750-4e82-bc22-0cf39fed894a","Type":"ContainerDied","Data":"0c578b0e3b1ea053d355794aed5ff31f439979fe50eb217b50d447030269487a"} Jan 21 06:57:19 crc kubenswrapper[4893]: I0121 06:57:19.263400 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-46khb" podStartSLOduration=25.263374381 podStartE2EDuration="25.263374381s" podCreationTimestamp="2026-01-21 06:56:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:57:19.228364803 +0000 UTC m=+180.458710735" watchObservedRunningTime="2026-01-21 06:57:19.263374381 +0000 UTC m=+180.493720293" Jan 21 06:57:19 crc kubenswrapper[4893]: I0121 06:57:19.321092 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zggq2" 
event={"ID":"52dc45a6-094c-4330-b824-0e46bd30416b","Type":"ContainerDied","Data":"3330fa915537f2ae7ff945055be7e25078e09fbd49787e405de92b5c277cf6b0"} Jan 21 06:57:19 crc kubenswrapper[4893]: I0121 06:57:19.321170 4893 scope.go:117] "RemoveContainer" containerID="dc3abd552ad32a1e9edf02b66ed29ec09a93edede5d608e16babeea15545928d" Jan 21 06:57:19 crc kubenswrapper[4893]: I0121 06:57:19.321277 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zggq2" Jan 21 06:57:19 crc kubenswrapper[4893]: I0121 06:57:19.368977 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zggq2"] Jan 21 06:57:19 crc kubenswrapper[4893]: I0121 06:57:19.379714 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t86sj"] Jan 21 06:57:19 crc kubenswrapper[4893]: I0121 06:57:19.383666 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zggq2"] Jan 21 06:57:19 crc kubenswrapper[4893]: W0121 06:57:19.384992 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe44c297_715e_45f6_b165_244c39484f15.slice/crio-0a1082100ae5b224e283306d3631031ec6a52b5812d7d59a658cfe843a49374e WatchSource:0}: Error finding container 0a1082100ae5b224e283306d3631031ec6a52b5812d7d59a658cfe843a49374e: Status 404 returned error can't find the container with id 0a1082100ae5b224e283306d3631031ec6a52b5812d7d59a658cfe843a49374e Jan 21 06:57:19 crc kubenswrapper[4893]: I0121 06:57:19.570460 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nq4z8"] Jan 21 06:57:19 crc kubenswrapper[4893]: I0121 06:57:19.595597 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52dc45a6-094c-4330-b824-0e46bd30416b" path="/var/lib/kubelet/pods/52dc45a6-094c-4330-b824-0e46bd30416b/volumes" Jan 21 06:57:19 crc kubenswrapper[4893]: I0121 06:57:19.596570 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54db3d0a-b7a6-43db-a4a1-a9f363d0de87" path="/var/lib/kubelet/pods/54db3d0a-b7a6-43db-a4a1-a9f363d0de87/volumes" Jan 21 06:57:19 crc kubenswrapper[4893]: I0121 06:57:19.597287 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 21 06:57:19 crc kubenswrapper[4893]: I0121 06:57:19.727345 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-tz8g4"] Jan 21 06:57:19 crc kubenswrapper[4893]: I0121 06:57:19.835789 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-9b7bb658d-258gb"] Jan 21 06:57:19 crc kubenswrapper[4893]: I0121 06:57:19.836587 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-9b7bb658d-258gb" Jan 21 06:57:19 crc kubenswrapper[4893]: I0121 06:57:19.838475 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86cbf9db7-dsxh8"] Jan 21 06:57:19 crc kubenswrapper[4893]: I0121 06:57:19.839996 4893 util.go:30] "No sandbox for pod can be found. 
Jan 21 06:57:19 crc kubenswrapper[4893]: I0121 06:57:19.847629 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 21 06:57:19 crc kubenswrapper[4893]: I0121 06:57:19.847959 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 21 06:57:19 crc kubenswrapper[4893]: I0121 06:57:19.848502 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 21 06:57:19 crc kubenswrapper[4893]: I0121 06:57:19.848636 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 21 06:57:19 crc kubenswrapper[4893]: I0121 06:57:19.862667 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 21 06:57:19 crc kubenswrapper[4893]: I0121 06:57:19.863304 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 21 06:57:19 crc kubenswrapper[4893]: I0121 06:57:19.864875 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 21 06:57:19 crc kubenswrapper[4893]: I0121 06:57:19.866459 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-9b7bb658d-258gb"]
Jan 21 06:57:19 crc kubenswrapper[4893]: I0121 06:57:19.869828 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 21 06:57:19 crc kubenswrapper[4893]: I0121 06:57:19.870250 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 21 06:57:19 crc kubenswrapper[4893]: I0121 06:57:19.870364 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 21 06:57:19 crc kubenswrapper[4893]: I0121 06:57:19.870480 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 21 06:57:19 crc kubenswrapper[4893]: I0121 06:57:19.871025 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 21 06:57:19 crc kubenswrapper[4893]: I0121 06:57:19.871160 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 21 06:57:19 crc kubenswrapper[4893]: I0121 06:57:19.898726 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86cbf9db7-dsxh8"]
Jan 21 06:57:19 crc kubenswrapper[4893]: I0121 06:57:19.961774 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/234ee8a0-1a20-46d3-bec7-b0830c8e23ed-config\") pod \"controller-manager-9b7bb658d-258gb\" (UID: \"234ee8a0-1a20-46d3-bec7-b0830c8e23ed\") " pod="openshift-controller-manager/controller-manager-9b7bb658d-258gb"
Jan 21 06:57:19 crc kubenswrapper[4893]: I0121 06:57:19.961838 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName:
\"kubernetes.io/configmap/234ee8a0-1a20-46d3-bec7-b0830c8e23ed-client-ca\") pod \"controller-manager-9b7bb658d-258gb\" (UID: \"234ee8a0-1a20-46d3-bec7-b0830c8e23ed\") " pod="openshift-controller-manager/controller-manager-9b7bb658d-258gb" Jan 21 06:57:19 crc kubenswrapper[4893]: I0121 06:57:19.961904 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/234ee8a0-1a20-46d3-bec7-b0830c8e23ed-proxy-ca-bundles\") pod \"controller-manager-9b7bb658d-258gb\" (UID: \"234ee8a0-1a20-46d3-bec7-b0830c8e23ed\") " pod="openshift-controller-manager/controller-manager-9b7bb658d-258gb" Jan 21 06:57:19 crc kubenswrapper[4893]: I0121 06:57:19.961960 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aabf174c-2750-48af-8e68-1fa2f9f63965-client-ca\") pod \"route-controller-manager-86cbf9db7-dsxh8\" (UID: \"aabf174c-2750-48af-8e68-1fa2f9f63965\") " pod="openshift-route-controller-manager/route-controller-manager-86cbf9db7-dsxh8" Jan 21 06:57:19 crc kubenswrapper[4893]: I0121 06:57:19.961984 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9pln\" (UniqueName: \"kubernetes.io/projected/234ee8a0-1a20-46d3-bec7-b0830c8e23ed-kube-api-access-j9pln\") pod \"controller-manager-9b7bb658d-258gb\" (UID: \"234ee8a0-1a20-46d3-bec7-b0830c8e23ed\") " pod="openshift-controller-manager/controller-manager-9b7bb658d-258gb" Jan 21 06:57:19 crc kubenswrapper[4893]: I0121 06:57:19.962000 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk4z6\" (UniqueName: \"kubernetes.io/projected/aabf174c-2750-48af-8e68-1fa2f9f63965-kube-api-access-tk4z6\") pod \"route-controller-manager-86cbf9db7-dsxh8\" (UID: \"aabf174c-2750-48af-8e68-1fa2f9f63965\") " pod="openshift-route-controller-manager/route-controller-manager-86cbf9db7-dsxh8" Jan 21 06:57:19 crc kubenswrapper[4893]: I0121 06:57:19.962020 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aabf174c-2750-48af-8e68-1fa2f9f63965-serving-cert\") pod \"route-controller-manager-86cbf9db7-dsxh8\" (UID: \"aabf174c-2750-48af-8e68-1fa2f9f63965\") " pod="openshift-route-controller-manager/route-controller-manager-86cbf9db7-dsxh8" Jan 21 06:57:19 crc kubenswrapper[4893]: I0121 06:57:19.962093 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aabf174c-2750-48af-8e68-1fa2f9f63965-config\") pod \"route-controller-manager-86cbf9db7-dsxh8\" (UID: \"aabf174c-2750-48af-8e68-1fa2f9f63965\") " pod="openshift-route-controller-manager/route-controller-manager-86cbf9db7-dsxh8" Jan 21 06:57:19 crc kubenswrapper[4893]: I0121 06:57:19.962120 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/234ee8a0-1a20-46d3-bec7-b0830c8e23ed-serving-cert\") pod \"controller-manager-9b7bb658d-258gb\" (UID: \"234ee8a0-1a20-46d3-bec7-b0830c8e23ed\") " pod="openshift-controller-manager/controller-manager-9b7bb658d-258gb" Jan 21 06:57:19 crc kubenswrapper[4893]: I0121 06:57:19.973423 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8bgpm"] Jan 
21 06:57:19 crc kubenswrapper[4893]: W0121 06:57:19.992439 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2514419f_4c60_442d_bbc7_0c9b8c765cc4.slice/crio-05b7e9594d77e484701f6552c95a1887450c91d0bc6dc8c7c49f49107bf7e3d2 WatchSource:0}: Error finding container 05b7e9594d77e484701f6552c95a1887450c91d0bc6dc8c7c49f49107bf7e3d2: Status 404 returned error can't find the container with id 05b7e9594d77e484701f6552c95a1887450c91d0bc6dc8c7c49f49107bf7e3d2 Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.063204 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/234ee8a0-1a20-46d3-bec7-b0830c8e23ed-client-ca\") pod \"controller-manager-9b7bb658d-258gb\" (UID: \"234ee8a0-1a20-46d3-bec7-b0830c8e23ed\") " pod="openshift-controller-manager/controller-manager-9b7bb658d-258gb" Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.063979 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/234ee8a0-1a20-46d3-bec7-b0830c8e23ed-proxy-ca-bundles\") pod \"controller-manager-9b7bb658d-258gb\" (UID: \"234ee8a0-1a20-46d3-bec7-b0830c8e23ed\") " pod="openshift-controller-manager/controller-manager-9b7bb658d-258gb" Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.064499 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aabf174c-2750-48af-8e68-1fa2f9f63965-client-ca\") pod \"route-controller-manager-86cbf9db7-dsxh8\" (UID: \"aabf174c-2750-48af-8e68-1fa2f9f63965\") " pod="openshift-route-controller-manager/route-controller-manager-86cbf9db7-dsxh8" Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.064610 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9pln\" (UniqueName: \"kubernetes.io/projected/234ee8a0-1a20-46d3-bec7-b0830c8e23ed-kube-api-access-j9pln\") pod \"controller-manager-9b7bb658d-258gb\" (UID: \"234ee8a0-1a20-46d3-bec7-b0830c8e23ed\") " pod="openshift-controller-manager/controller-manager-9b7bb658d-258gb" Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.064762 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tk4z6\" (UniqueName: \"kubernetes.io/projected/aabf174c-2750-48af-8e68-1fa2f9f63965-kube-api-access-tk4z6\") pod \"route-controller-manager-86cbf9db7-dsxh8\" (UID: \"aabf174c-2750-48af-8e68-1fa2f9f63965\") " pod="openshift-route-controller-manager/route-controller-manager-86cbf9db7-dsxh8" Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.066110 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/234ee8a0-1a20-46d3-bec7-b0830c8e23ed-client-ca\") pod \"controller-manager-9b7bb658d-258gb\" (UID: \"234ee8a0-1a20-46d3-bec7-b0830c8e23ed\") " pod="openshift-controller-manager/controller-manager-9b7bb658d-258gb" Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.067233 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aabf174c-2750-48af-8e68-1fa2f9f63965-client-ca\") pod \"route-controller-manager-86cbf9db7-dsxh8\" (UID: \"aabf174c-2750-48af-8e68-1fa2f9f63965\") " pod="openshift-route-controller-manager/route-controller-manager-86cbf9db7-dsxh8" Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 
06:57:20.067515 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/234ee8a0-1a20-46d3-bec7-b0830c8e23ed-proxy-ca-bundles\") pod \"controller-manager-9b7bb658d-258gb\" (UID: \"234ee8a0-1a20-46d3-bec7-b0830c8e23ed\") " pod="openshift-controller-manager/controller-manager-9b7bb658d-258gb" Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.072104 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ztll7"] Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.072909 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aabf174c-2750-48af-8e68-1fa2f9f63965-serving-cert\") pod \"route-controller-manager-86cbf9db7-dsxh8\" (UID: \"aabf174c-2750-48af-8e68-1fa2f9f63965\") " pod="openshift-route-controller-manager/route-controller-manager-86cbf9db7-dsxh8" Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.073020 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aabf174c-2750-48af-8e68-1fa2f9f63965-config\") pod \"route-controller-manager-86cbf9db7-dsxh8\" (UID: \"aabf174c-2750-48af-8e68-1fa2f9f63965\") " pod="openshift-route-controller-manager/route-controller-manager-86cbf9db7-dsxh8" Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.073057 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/234ee8a0-1a20-46d3-bec7-b0830c8e23ed-serving-cert\") pod \"controller-manager-9b7bb658d-258gb\" (UID: \"234ee8a0-1a20-46d3-bec7-b0830c8e23ed\") " pod="openshift-controller-manager/controller-manager-9b7bb658d-258gb" Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.075404 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ztll7"
Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.075950 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/234ee8a0-1a20-46d3-bec7-b0830c8e23ed-config\") pod \"controller-manager-9b7bb658d-258gb\" (UID: \"234ee8a0-1a20-46d3-bec7-b0830c8e23ed\") " pod="openshift-controller-manager/controller-manager-9b7bb658d-258gb"
Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.077801 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aabf174c-2750-48af-8e68-1fa2f9f63965-config\") pod \"route-controller-manager-86cbf9db7-dsxh8\" (UID: \"aabf174c-2750-48af-8e68-1fa2f9f63965\") " pod="openshift-route-controller-manager/route-controller-manager-86cbf9db7-dsxh8"
Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.077910 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/234ee8a0-1a20-46d3-bec7-b0830c8e23ed-config\") pod \"controller-manager-9b7bb658d-258gb\" (UID: \"234ee8a0-1a20-46d3-bec7-b0830c8e23ed\") " pod="openshift-controller-manager/controller-manager-9b7bb658d-258gb"
Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.080297 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.082831 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/234ee8a0-1a20-46d3-bec7-b0830c8e23ed-serving-cert\") pod \"controller-manager-9b7bb658d-258gb\" (UID: \"234ee8a0-1a20-46d3-bec7-b0830c8e23ed\") " pod="openshift-controller-manager/controller-manager-9b7bb658d-258gb"
Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.083003 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aabf174c-2750-48af-8e68-1fa2f9f63965-serving-cert\") pod \"route-controller-manager-86cbf9db7-dsxh8\" (UID: \"aabf174c-2750-48af-8e68-1fa2f9f63965\") " pod="openshift-route-controller-manager/route-controller-manager-86cbf9db7-dsxh8"
Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.089177 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ztll7"]
Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.090468 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tk4z6\" (UniqueName: \"kubernetes.io/projected/aabf174c-2750-48af-8e68-1fa2f9f63965-kube-api-access-tk4z6\") pod \"route-controller-manager-86cbf9db7-dsxh8\" (UID: \"aabf174c-2750-48af-8e68-1fa2f9f63965\") " pod="openshift-route-controller-manager/route-controller-manager-86cbf9db7-dsxh8"
Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.109432 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9pln\" (UniqueName: \"kubernetes.io/projected/234ee8a0-1a20-46d3-bec7-b0830c8e23ed-kube-api-access-j9pln\") pod \"controller-manager-9b7bb658d-258gb\" (UID: \"234ee8a0-1a20-46d3-bec7-b0830c8e23ed\") " pod="openshift-controller-manager/controller-manager-9b7bb658d-258gb"
Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.177908 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc8m2\" (UniqueName: \"kubernetes.io/projected/78a7ed86-0417-446d-aeaa-b71f6beb71ec-kube-api-access-wc8m2\") pod \"redhat-marketplace-ztll7\" (UID: \"78a7ed86-0417-446d-aeaa-b71f6beb71ec\") " pod="openshift-marketplace/redhat-marketplace-ztll7"
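These entries trace the volume manager's reconciler pipeline for each newly admitted pod: reflector.go:368 pre-populates the configmap/secret caches, reconciler_common.go:245 (VerifyControllerAttachedVolume) marks the volume attached so mounting can proceed, reconciler_common.go:218 (MountVolume started) launches the mount, and operation_generator.go:637 reports MountVolume.SetUp succeeded when the per-pod mount finishes. A drastically simplified sketch of that desired-vs-actual loop (illustrative only; the real logic lives in the kubelet's volumemanager package and also handles attach and device staging):

    package main

    import "fmt"

    type volume struct{ name, pod string }

    // reconcile mounts what is desired but not yet mounted and unmounts what
    // is mounted but no longer desired, mirroring the log lines above.
    func reconcile(desired, actual map[string]volume) {
    	for key, v := range desired {
    		if _, ok := actual[key]; !ok {
    			fmt.Printf("MountVolume started for volume %q pod %q\n", v.name, v.pod)
    			actual[key] = v // assume the operation succeeds
    			fmt.Printf("MountVolume.SetUp succeeded for volume %q pod %q\n", v.name, v.pod)
    		}
    	}
    	for key, v := range actual {
    		if _, ok := desired[key]; !ok {
    			fmt.Printf("UnmountVolume started for volume %q pod %q\n", v.name, v.pod)
    			delete(actual, key)
    		}
    	}
    }

    func main() {
    	desired := map[string]volume{
    		"configmap/client-ca": {"client-ca", "controller-manager-9b7bb658d-258gb"},
    	}
    	reconcile(desired, map[string]volume{})
    }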
\"kubernetes.io/projected/78a7ed86-0417-446d-aeaa-b71f6beb71ec-kube-api-access-wc8m2\") pod \"redhat-marketplace-ztll7\" (UID: \"78a7ed86-0417-446d-aeaa-b71f6beb71ec\") " pod="openshift-marketplace/redhat-marketplace-ztll7" Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.177972 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78a7ed86-0417-446d-aeaa-b71f6beb71ec-utilities\") pod \"redhat-marketplace-ztll7\" (UID: \"78a7ed86-0417-446d-aeaa-b71f6beb71ec\") " pod="openshift-marketplace/redhat-marketplace-ztll7" Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.178016 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78a7ed86-0417-446d-aeaa-b71f6beb71ec-catalog-content\") pod \"redhat-marketplace-ztll7\" (UID: \"78a7ed86-0417-446d-aeaa-b71f6beb71ec\") " pod="openshift-marketplace/redhat-marketplace-ztll7" Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.316487 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86cbf9db7-dsxh8" Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.317096 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78a7ed86-0417-446d-aeaa-b71f6beb71ec-utilities\") pod \"redhat-marketplace-ztll7\" (UID: \"78a7ed86-0417-446d-aeaa-b71f6beb71ec\") " pod="openshift-marketplace/redhat-marketplace-ztll7" Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.317223 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-9b7bb658d-258gb" Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.317231 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78a7ed86-0417-446d-aeaa-b71f6beb71ec-catalog-content\") pod \"redhat-marketplace-ztll7\" (UID: \"78a7ed86-0417-446d-aeaa-b71f6beb71ec\") " pod="openshift-marketplace/redhat-marketplace-ztll7" Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.317420 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wc8m2\" (UniqueName: \"kubernetes.io/projected/78a7ed86-0417-446d-aeaa-b71f6beb71ec-kube-api-access-wc8m2\") pod \"redhat-marketplace-ztll7\" (UID: \"78a7ed86-0417-446d-aeaa-b71f6beb71ec\") " pod="openshift-marketplace/redhat-marketplace-ztll7" Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.317816 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78a7ed86-0417-446d-aeaa-b71f6beb71ec-utilities\") pod \"redhat-marketplace-ztll7\" (UID: \"78a7ed86-0417-446d-aeaa-b71f6beb71ec\") " pod="openshift-marketplace/redhat-marketplace-ztll7" Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.317897 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78a7ed86-0417-446d-aeaa-b71f6beb71ec-catalog-content\") pod \"redhat-marketplace-ztll7\" (UID: \"78a7ed86-0417-446d-aeaa-b71f6beb71ec\") " pod="openshift-marketplace/redhat-marketplace-ztll7" Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.335632 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-wc8m2\" (UniqueName: \"kubernetes.io/projected/78a7ed86-0417-446d-aeaa-b71f6beb71ec-kube-api-access-wc8m2\") pod \"redhat-marketplace-ztll7\" (UID: \"78a7ed86-0417-446d-aeaa-b71f6beb71ec\") " pod="openshift-marketplace/redhat-marketplace-ztll7" Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.337493 4893 generic.go:334] "Generic (PLEG): container finished" podID="2514419f-4c60-442d-bbc7-0c9b8c765cc4" containerID="8d80f0b8b93601dfaee94e6e8ec28d09716c651d623887fd56734948638c4e9c" exitCode=0 Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.338293 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8bgpm" event={"ID":"2514419f-4c60-442d-bbc7-0c9b8c765cc4","Type":"ContainerDied","Data":"8d80f0b8b93601dfaee94e6e8ec28d09716c651d623887fd56734948638c4e9c"} Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.338345 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8bgpm" event={"ID":"2514419f-4c60-442d-bbc7-0c9b8c765cc4","Type":"ContainerStarted","Data":"05b7e9594d77e484701f6552c95a1887450c91d0bc6dc8c7c49f49107bf7e3d2"} Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.341044 4893 generic.go:334] "Generic (PLEG): container finished" podID="f92d48d9-4ed9-42bb-b811-a8f43bbac2cd" containerID="30a46dd98e139e7a99693b109572aba46ee6d867ba054f323d7681ed8520af76" exitCode=0 Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.341094 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kjxh2" event={"ID":"f92d48d9-4ed9-42bb-b811-a8f43bbac2cd","Type":"ContainerDied","Data":"30a46dd98e139e7a99693b109572aba46ee6d867ba054f323d7681ed8520af76"} Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.342723 4893 generic.go:334] "Generic (PLEG): container finished" podID="582d8449-096d-4bfa-9dcc-9ef0b8661d50" containerID="cf46f4de868de68a2d8b155a8b106089df858786a3965c8c0448d954a7fb352b" exitCode=0 Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.342807 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nq4z8" event={"ID":"582d8449-096d-4bfa-9dcc-9ef0b8661d50","Type":"ContainerDied","Data":"cf46f4de868de68a2d8b155a8b106089df858786a3965c8c0448d954a7fb352b"} Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.342855 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nq4z8" event={"ID":"582d8449-096d-4bfa-9dcc-9ef0b8661d50","Type":"ContainerStarted","Data":"6eb4bb67ca6cefb84b37e59e556fc8149ec426a398efa3d5b3a2b590083169ec"} Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.360398 4893 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.361305 4893 generic.go:334] "Generic (PLEG): container finished" podID="be44c297-715e-45f6-b165-244c39484f15" containerID="6e90638c7e2fe84da46c56cec7926888efc3ac30d454ea845a2742b8420340da" exitCode=0 Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.361342 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t86sj" event={"ID":"be44c297-715e-45f6-b165-244c39484f15","Type":"ContainerDied","Data":"6e90638c7e2fe84da46c56cec7926888efc3ac30d454ea845a2742b8420340da"} Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.361396 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t86sj" 
event={"ID":"be44c297-715e-45f6-b165-244c39484f15","Type":"ContainerStarted","Data":"0a1082100ae5b224e283306d3631031ec6a52b5812d7d59a658cfe843a49374e"} Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.379447 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" event={"ID":"9b746e69-b4ab-4cba-8b09-7556ffc5cad9","Type":"ContainerStarted","Data":"31d0b4854664057301dbc351c8c92cb70030fff767801d2e5854c65cb929f25c"} Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.379490 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" event={"ID":"9b746e69-b4ab-4cba-8b09-7556ffc5cad9","Type":"ContainerStarted","Data":"ded4e50bb566b719ab3da7fe0fc4a081cd6a7921c312b644f9eb3647ed243dfb"} Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.479634 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" podStartSLOduration=159.479613209 podStartE2EDuration="2m39.479613209s" podCreationTimestamp="2026-01-21 06:54:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:57:20.477627665 +0000 UTC m=+181.707973577" watchObservedRunningTime="2026-01-21 06:57:20.479613209 +0000 UTC m=+181.709959121" Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.481452 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dpt49"] Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.503823 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ztll7" Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.516075 4893 util.go:30] "No sandbox for pod can be found. 
Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.523248 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dpt49"]
Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.639384 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06-catalog-content\") pod \"redhat-marketplace-dpt49\" (UID: \"58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06\") " pod="openshift-marketplace/redhat-marketplace-dpt49"
Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.639754 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hks2h\" (UniqueName: \"kubernetes.io/projected/58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06-kube-api-access-hks2h\") pod \"redhat-marketplace-dpt49\" (UID: \"58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06\") " pod="openshift-marketplace/redhat-marketplace-dpt49"
Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.639804 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06-utilities\") pod \"redhat-marketplace-dpt49\" (UID: \"58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06\") " pod="openshift-marketplace/redhat-marketplace-dpt49"
Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.741282 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06-catalog-content\") pod \"redhat-marketplace-dpt49\" (UID: \"58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06\") " pod="openshift-marketplace/redhat-marketplace-dpt49"
Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.741344 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hks2h\" (UniqueName: \"kubernetes.io/projected/58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06-kube-api-access-hks2h\") pod \"redhat-marketplace-dpt49\" (UID: \"58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06\") " pod="openshift-marketplace/redhat-marketplace-dpt49"
Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.741379 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06-utilities\") pod \"redhat-marketplace-dpt49\" (UID: \"58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06\") " pod="openshift-marketplace/redhat-marketplace-dpt49"
Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.742078 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06-catalog-content\") pod \"redhat-marketplace-dpt49\" (UID: \"58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06\") " pod="openshift-marketplace/redhat-marketplace-dpt49"
Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.742739 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06-utilities\") pod \"redhat-marketplace-dpt49\" (UID: \"58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06\") " pod="openshift-marketplace/redhat-marketplace-dpt49"
Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.773449 4893 operation_generator.go:637] "MountVolume.SetUp
succeeded for volume \"kube-api-access-hks2h\" (UniqueName: \"kubernetes.io/projected/58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06-kube-api-access-hks2h\") pod \"redhat-marketplace-dpt49\" (UID: \"58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06\") " pod="openshift-marketplace/redhat-marketplace-dpt49"
Jan 21 06:57:20 crc kubenswrapper[4893]: I0121 06:57:20.867064 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dpt49"
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.010579 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86cbf9db7-dsxh8"]
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.022445 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.047019 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b56a2761-7750-4e82-bc22-0cf39fed894a-kubelet-dir\") pod \"b56a2761-7750-4e82-bc22-0cf39fed894a\" (UID: \"b56a2761-7750-4e82-bc22-0cf39fed894a\") "
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.047121 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b56a2761-7750-4e82-bc22-0cf39fed894a-kube-api-access\") pod \"b56a2761-7750-4e82-bc22-0cf39fed894a\" (UID: \"b56a2761-7750-4e82-bc22-0cf39fed894a\") "
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.051286 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b56a2761-7750-4e82-bc22-0cf39fed894a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b56a2761-7750-4e82-bc22-0cf39fed894a" (UID: "b56a2761-7750-4e82-bc22-0cf39fed894a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.054441 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b56a2761-7750-4e82-bc22-0cf39fed894a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b56a2761-7750-4e82-bc22-0cf39fed894a" (UID: "b56a2761-7750-4e82-bc22-0cf39fed894a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.063461 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 21 06:57:21 crc kubenswrapper[4893]: W0121 06:57:21.079397 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaabf174c_2750_48af_8e68_1fa2f9f63965.slice/crio-44c329a4df4662874128b7cff27c5964aaf966842bf17cee55d8ec361a8b95e4 WatchSource:0}: Error finding container 44c329a4df4662874128b7cff27c5964aaf966842bf17cee55d8ec361a8b95e4: Status 404 returned error can't find the container with id 44c329a4df4662874128b7cff27c5964aaf966842bf17cee55d8ec361a8b95e4
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.083643 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ztll7"]
Jan 21 06:57:21 crc kubenswrapper[4893]: W0121 06:57:21.139800 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78a7ed86_0417_446d_aeaa_b71f6beb71ec.slice/crio-fcc37f8f0bb1485fdcae40afb01d0bc4bf2825933d35a803d55537499c3435e4 WatchSource:0}: Error finding container fcc37f8f0bb1485fdcae40afb01d0bc4bf2825933d35a803d55537499c3435e4: Status 404 returned error can't find the container with id fcc37f8f0bb1485fdcae40afb01d0bc4bf2825933d35a803d55537499c3435e4
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.148107 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b1f6fc9f-df01-4880-865b-38a593baaded-kubelet-dir\") pod \"b1f6fc9f-df01-4880-865b-38a593baaded\" (UID: \"b1f6fc9f-df01-4880-865b-38a593baaded\") "
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.149406 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1f6fc9f-df01-4880-865b-38a593baaded-kube-api-access\") pod \"b1f6fc9f-df01-4880-865b-38a593baaded\" (UID: \"b1f6fc9f-df01-4880-865b-38a593baaded\") "
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.148266 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1f6fc9f-df01-4880-865b-38a593baaded-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b1f6fc9f-df01-4880-865b-38a593baaded" (UID: "b1f6fc9f-df01-4880-865b-38a593baaded"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.151705 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b56a2761-7750-4e82-bc22-0cf39fed894a-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.151740 4893 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b1f6fc9f-df01-4880-865b-38a593baaded-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.151753 4893 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b56a2761-7750-4e82-bc22-0cf39fed894a-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.156098 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1f6fc9f-df01-4880-865b-38a593baaded-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b1f6fc9f-df01-4880-865b-38a593baaded" (UID: "b1f6fc9f-df01-4880-865b-38a593baaded"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.189335 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dpt49"]
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.211570 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-9b7bb658d-258gb"]
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.253614 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1f6fc9f-df01-4880-865b-38a593baaded-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 21 06:57:21 crc kubenswrapper[4893]: W0121 06:57:21.254063 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod234ee8a0_1a20_46d3_bec7_b0830c8e23ed.slice/crio-dc5cd12ac873b1d041037e9ab822f13a4fcf7885c9683bfdd003b757546cc7eb WatchSource:0}: Error finding container dc5cd12ac873b1d041037e9ab822f13a4fcf7885c9683bfdd003b757546cc7eb: Status 404 returned error can't find the container with id dc5cd12ac873b1d041037e9ab822f13a4fcf7885c9683bfdd003b757546cc7eb
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.273401 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gv7xc"]
Jan 21 06:57:21 crc kubenswrapper[4893]: E0121 06:57:21.273627 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b56a2761-7750-4e82-bc22-0cf39fed894a" containerName="pruner"
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.273644 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="b56a2761-7750-4e82-bc22-0cf39fed894a" containerName="pruner"
Jan 21 06:57:21 crc kubenswrapper[4893]: E0121 06:57:21.273672 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1f6fc9f-df01-4880-865b-38a593baaded" containerName="pruner"
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.273691 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1f6fc9f-df01-4880-865b-38a593baaded" containerName="pruner"
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.273809 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1f6fc9f-df01-4880-865b-38a593baaded" containerName="pruner"
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.273827 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="b56a2761-7750-4e82-bc22-0cf39fed894a" containerName="pruner"
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.274522 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gv7xc"
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.288443 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.300836 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gv7xc"]
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.391390 4893 generic.go:334] "Generic (PLEG): container finished" podID="78a7ed86-0417-446d-aeaa-b71f6beb71ec" containerID="09678879fa0ed71f86539b331a3331c338eb032f4fe2bd34f45ca75d13a63cec" exitCode=0
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.391461 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ztll7" event={"ID":"78a7ed86-0417-446d-aeaa-b71f6beb71ec","Type":"ContainerDied","Data":"09678879fa0ed71f86539b331a3331c338eb032f4fe2bd34f45ca75d13a63cec"}
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.391500 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ztll7" event={"ID":"78a7ed86-0417-446d-aeaa-b71f6beb71ec","Type":"ContainerStarted","Data":"fcc37f8f0bb1485fdcae40afb01d0bc4bf2825933d35a803d55537499c3435e4"}
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.395724 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86cbf9db7-dsxh8" event={"ID":"aabf174c-2750-48af-8e68-1fa2f9f63965","Type":"ContainerStarted","Data":"11496f4aaf50214ac72ed944d36076b7e945de8304fcb51d87f3e1210f561139"}
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.395766 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86cbf9db7-dsxh8" event={"ID":"aabf174c-2750-48af-8e68-1fa2f9f63965","Type":"ContainerStarted","Data":"44c329a4df4662874128b7cff27c5964aaf966842bf17cee55d8ec361a8b95e4"}
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.397074 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-86cbf9db7-dsxh8"
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.397645 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b1f6fc9f-df01-4880-865b-38a593baaded","Type":"ContainerDied","Data":"4f512ca1dd7af3bdf0f73fc03a3d7c253b79431253ee55e82199ea205ef4317a"}
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.397665 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f512ca1dd7af3bdf0f73fc03a3d7c253b79431253ee55e82199ea205ef4317a"
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.397693 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.400804 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dpt49" event={"ID":"58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06","Type":"ContainerStarted","Data":"ec40c58ec72ad1743bbfebb40d225141d438df25e9b3e0409a9ada86c347a67f"}
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.416710 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b56a2761-7750-4e82-bc22-0cf39fed894a","Type":"ContainerDied","Data":"d8f35dec837fa46ad4ad6cc7da2123abc8300af337413bf65e33b7ab4f5b839e"}
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.416793 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8f35dec837fa46ad4ad6cc7da2123abc8300af337413bf65e33b7ab4f5b839e"
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.416875 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.421962 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9b7bb658d-258gb" event={"ID":"234ee8a0-1a20-46d3-bec7-b0830c8e23ed","Type":"ContainerStarted","Data":"dc5cd12ac873b1d041037e9ab822f13a4fcf7885c9683bfdd003b757546cc7eb"}
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.422000 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.456016 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15ac06c3-345b-4ced-8c19-2edf0c831b70-utilities\") pod \"redhat-operators-gv7xc\" (UID: \"15ac06c3-345b-4ced-8c19-2edf0c831b70\") " pod="openshift-marketplace/redhat-operators-gv7xc"
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.456150 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxfnk\" (UniqueName: \"kubernetes.io/projected/15ac06c3-345b-4ced-8c19-2edf0c831b70-kube-api-access-wxfnk\") pod \"redhat-operators-gv7xc\" (UID: \"15ac06c3-345b-4ced-8c19-2edf0c831b70\") " pod="openshift-marketplace/redhat-operators-gv7xc"
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.456237 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15ac06c3-345b-4ced-8c19-2edf0c831b70-catalog-content\") pod \"redhat-operators-gv7xc\" (UID: \"15ac06c3-345b-4ced-8c19-2edf0c831b70\") " pod="openshift-marketplace/redhat-operators-gv7xc"
Jan 21 06:57:21 crc kubenswrapper[4893]: E0121 06:57:21.471456 4893 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78a7ed86_0417_446d_aeaa_b71f6beb71ec.slice/crio-conmon-09678879fa0ed71f86539b331a3331c338eb032f4fe2bd34f45ca75d13a63cec.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78a7ed86_0417_446d_aeaa_b71f6beb71ec.slice/crio-09678879fa0ed71f86539b331a3331c338eb032f4fe2bd34f45ca75d13a63cec.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe44c297_715e_45f6_b165_244c39484f15.slice/crio-6e90638c7e2fe84da46c56cec7926888efc3ac30d454ea845a2742b8420340da.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe44c297_715e_45f6_b165_244c39484f15.slice/crio-conmon-6e90638c7e2fe84da46c56cec7926888efc3ac30d454ea845a2742b8420340da.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podb56a2761_7750_4e82_bc22_0cf39fed894a.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf92d48d9_4ed9_42bb_b811_a8f43bbac2cd.slice/crio-30a46dd98e139e7a99693b109572aba46ee6d867ba054f323d7681ed8520af76.scope\": RecentStats: unable to find data in memory cache]"
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.557319 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15ac06c3-345b-4ced-8c19-2edf0c831b70-catalog-content\") pod \"redhat-operators-gv7xc\" (UID: \"15ac06c3-345b-4ced-8c19-2edf0c831b70\") " pod="openshift-marketplace/redhat-operators-gv7xc"
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.557544 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15ac06c3-345b-4ced-8c19-2edf0c831b70-utilities\") pod \"redhat-operators-gv7xc\" (UID: \"15ac06c3-345b-4ced-8c19-2edf0c831b70\") " pod="openshift-marketplace/redhat-operators-gv7xc"
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.557584 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxfnk\" (UniqueName: \"kubernetes.io/projected/15ac06c3-345b-4ced-8c19-2edf0c831b70-kube-api-access-wxfnk\") pod \"redhat-operators-gv7xc\" (UID: \"15ac06c3-345b-4ced-8c19-2edf0c831b70\") " pod="openshift-marketplace/redhat-operators-gv7xc"
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.561575 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15ac06c3-345b-4ced-8c19-2edf0c831b70-catalog-content\") pod \"redhat-operators-gv7xc\" (UID: \"15ac06c3-345b-4ced-8c19-2edf0c831b70\") " pod="openshift-marketplace/redhat-operators-gv7xc"
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.562963 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15ac06c3-345b-4ced-8c19-2edf0c831b70-utilities\") pod \"redhat-operators-gv7xc\" (UID: \"15ac06c3-345b-4ced-8c19-2edf0c831b70\") " pod="openshift-marketplace/redhat-operators-gv7xc"
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.612290 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxfnk\" (UniqueName: \"kubernetes.io/projected/15ac06c3-345b-4ced-8c19-2edf0c831b70-kube-api-access-wxfnk\") pod \"redhat-operators-gv7xc\" (UID: \"15ac06c3-345b-4ced-8c19-2edf0c831b70\") " pod="openshift-marketplace/redhat-operators-gv7xc"
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.616859 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gv7xc"
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.685216 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-86cbf9db7-dsxh8" podStartSLOduration=3.685170074 podStartE2EDuration="3.685170074s" podCreationTimestamp="2026-01-21 06:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:57:21.449251801 +0000 UTC m=+182.679597713" watchObservedRunningTime="2026-01-21 06:57:21.685170074 +0000 UTC m=+182.915515976"
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.686834 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4s5jn"]
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.688377 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4s5jn"
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.743896 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4s5jn"]
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.889027 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76395561-db8b-4fac-a5fd-14267030252a-utilities\") pod \"redhat-operators-4s5jn\" (UID: \"76395561-db8b-4fac-a5fd-14267030252a\") " pod="openshift-marketplace/redhat-operators-4s5jn"
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.889103 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76395561-db8b-4fac-a5fd-14267030252a-catalog-content\") pod \"redhat-operators-4s5jn\" (UID: \"76395561-db8b-4fac-a5fd-14267030252a\") " pod="openshift-marketplace/redhat-operators-4s5jn"
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.889144 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd2v2\" (UniqueName: \"kubernetes.io/projected/76395561-db8b-4fac-a5fd-14267030252a-kube-api-access-zd2v2\") pod \"redhat-operators-4s5jn\" (UID: \"76395561-db8b-4fac-a5fd-14267030252a\") " pod="openshift-marketplace/redhat-operators-4s5jn"
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.944924 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-86cbf9db7-dsxh8"
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.990400 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76395561-db8b-4fac-a5fd-14267030252a-catalog-content\") pod \"redhat-operators-4s5jn\" (UID: \"76395561-db8b-4fac-a5fd-14267030252a\") " pod="openshift-marketplace/redhat-operators-4s5jn"
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.990485 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd2v2\" (UniqueName: \"kubernetes.io/projected/76395561-db8b-4fac-a5fd-14267030252a-kube-api-access-zd2v2\") pod \"redhat-operators-4s5jn\" (UID: \"76395561-db8b-4fac-a5fd-14267030252a\") " pod="openshift-marketplace/redhat-operators-4s5jn"
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.990534 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76395561-db8b-4fac-a5fd-14267030252a-utilities\") pod \"redhat-operators-4s5jn\" (UID: \"76395561-db8b-4fac-a5fd-14267030252a\") " pod="openshift-marketplace/redhat-operators-4s5jn"
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.991433 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76395561-db8b-4fac-a5fd-14267030252a-utilities\") pod \"redhat-operators-4s5jn\" (UID: \"76395561-db8b-4fac-a5fd-14267030252a\") " pod="openshift-marketplace/redhat-operators-4s5jn"
Jan 21 06:57:21 crc kubenswrapper[4893]: I0121 06:57:21.994934 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76395561-db8b-4fac-a5fd-14267030252a-catalog-content\") pod \"redhat-operators-4s5jn\" (UID: \"76395561-db8b-4fac-a5fd-14267030252a\") " pod="openshift-marketplace/redhat-operators-4s5jn"
Jan 21 06:57:22 crc kubenswrapper[4893]: I0121 06:57:22.029179 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd2v2\" (UniqueName: \"kubernetes.io/projected/76395561-db8b-4fac-a5fd-14267030252a-kube-api-access-zd2v2\") pod \"redhat-operators-4s5jn\" (UID: \"76395561-db8b-4fac-a5fd-14267030252a\") " pod="openshift-marketplace/redhat-operators-4s5jn"
Jan 21 06:57:22 crc kubenswrapper[4893]: I0121 06:57:22.322414 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4s5jn"
Jan 21 06:57:22 crc kubenswrapper[4893]: I0121 06:57:22.555916 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9b7bb658d-258gb" event={"ID":"234ee8a0-1a20-46d3-bec7-b0830c8e23ed","Type":"ContainerStarted","Data":"fe09e6df3fd927a6e5073911c8cb3c1b58d77d4749f46b773b2685818ec5df72"}
Jan 21 06:57:22 crc kubenswrapper[4893]: I0121 06:57:22.559202 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-9b7bb658d-258gb"
Jan 21 06:57:22 crc kubenswrapper[4893]: I0121 06:57:22.575573 4893 generic.go:334] "Generic (PLEG): container finished" podID="58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06" containerID="19fa4571bcf0da674aec8fba766ee789dfa7efd01ea3b63433febf79eb05ba29" exitCode=0
Jan 21 06:57:22 crc kubenswrapper[4893]: I0121 06:57:22.575947 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dpt49" event={"ID":"58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06","Type":"ContainerDied","Data":"19fa4571bcf0da674aec8fba766ee789dfa7efd01ea3b63433febf79eb05ba29"}
Jan 21 06:57:22 crc kubenswrapper[4893]: I0121 06:57:22.582800 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gv7xc"]
Jan 21 06:57:22 crc kubenswrapper[4893]: I0121 06:57:22.583834 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-9b7bb658d-258gb" podStartSLOduration=4.583793899 podStartE2EDuration="4.583793899s" podCreationTimestamp="2026-01-21 06:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:57:22.58103345 +0000 UTC m=+183.811379362" watchObservedRunningTime="2026-01-21 06:57:22.583793899 +0000 UTC m=+183.814139811"
Jan 21 06:57:22 crc kubenswrapper[4893]: I0121 06:57:22.638731 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-9b7bb658d-258gb"
Jan 21 06:57:23 crc kubenswrapper[4893]: I0121 06:57:23.007922 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4s5jn"]
Jan 21 06:57:23 crc kubenswrapper[4893]: I0121 06:57:23.747441 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4s5jn" event={"ID":"76395561-db8b-4fac-a5fd-14267030252a","Type":"ContainerStarted","Data":"9050c610c8fcde67db812c93e91008cb203c9b493de5d90545180018c8c95956"}
Jan 21 06:57:23 crc kubenswrapper[4893]: I0121 06:57:23.750206 4893 generic.go:334] "Generic (PLEG): container finished" podID="15ac06c3-345b-4ced-8c19-2edf0c831b70" containerID="417f7fa8d0c43a5aa86a61e68bd6162f1436e0f57b5dbf653e30712b229418c2" exitCode=0
Jan 21 06:57:23 crc kubenswrapper[4893]: I0121 06:57:23.751363 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gv7xc" event={"ID":"15ac06c3-345b-4ced-8c19-2edf0c831b70","Type":"ContainerDied","Data":"417f7fa8d0c43a5aa86a61e68bd6162f1436e0f57b5dbf653e30712b229418c2"}
Jan 21 06:57:23 crc kubenswrapper[4893]: I0121 06:57:23.751404 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gv7xc" event={"ID":"15ac06c3-345b-4ced-8c19-2edf0c831b70","Type":"ContainerStarted","Data":"c6befa06fdb34667b32ed65b175fb525e606f8df848fd0d37729f2545ed53686"}
Jan 21 06:57:24 crc kubenswrapper[4893]: I0121 06:57:24.280345 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-2k4nh"
Jan 21 06:57:24 crc kubenswrapper[4893]: I0121 06:57:24.291001 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-2k4nh"
Jan 21 06:57:24 crc kubenswrapper[4893]: I0121 06:57:24.784147 4893 generic.go:334] "Generic (PLEG): container finished" podID="76395561-db8b-4fac-a5fd-14267030252a" containerID="0d372fcf8f1041071c54de18c59a9cd18168c4e9ea51543d5e2798771f13580e" exitCode=0
Jan 21 06:57:24 crc kubenswrapper[4893]: I0121 06:57:24.784759 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4s5jn" event={"ID":"76395561-db8b-4fac-a5fd-14267030252a","Type":"ContainerDied","Data":"0d372fcf8f1041071c54de18c59a9cd18168c4e9ea51543d5e2798771f13580e"}
Jan 21 06:57:26 crc kubenswrapper[4893]: I0121 06:57:26.558483 4893 patch_prober.go:28] interesting pod/downloads-7954f5f757-rvfqv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body=
Jan 21 06:57:26 crc kubenswrapper[4893]: I0121 06:57:26.558567 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-rvfqv" podUID="5c435717-9f91-427d-ae9c-60db11c38d34" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused"
Jan 21 06:57:26 crc kubenswrapper[4893]: I0121 06:57:26.558599 4893 patch_prober.go:28] interesting pod/downloads-7954f5f757-rvfqv container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body=
Jan 21 06:57:26 crc kubenswrapper[4893]: I0121 06:57:26.558695 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-rvfqv" podUID="5c435717-9f91-427d-ae9c-60db11c38d34" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused"
Jan 21 06:57:26 crc kubenswrapper[4893]: I0121 06:57:26.558766 4893 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-rvfqv"
Jan 21 06:57:26 crc kubenswrapper[4893]: I0121 06:57:26.559420 4893 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"a98b7211a541d712927867b7791c92820deedd5de706426b58e5e4c69cf83e68"} pod="openshift-console/downloads-7954f5f757-rvfqv" containerMessage="Container download-server failed liveness probe, will be restarted"
Jan 21 06:57:26 crc kubenswrapper[4893]: I0121 06:57:26.559577 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-rvfqv" podUID="5c435717-9f91-427d-ae9c-60db11c38d34" containerName="download-server" containerID="cri-o://a98b7211a541d712927867b7791c92820deedd5de706426b58e5e4c69cf83e68" gracePeriod=2
Jan 21 06:57:26 crc kubenswrapper[4893]: I0121 06:57:26.560047 4893 patch_prober.go:28] interesting pod/downloads-7954f5f757-rvfqv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body=
Jan 21 06:57:26 crc kubenswrapper[4893]: I0121 06:57:26.560079 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-rvfqv" podUID="5c435717-9f91-427d-ae9c-60db11c38d34" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused"
Jan 21 06:57:26 crc kubenswrapper[4893]: I0121 06:57:26.885721 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-zpm9z"
Jan 21 06:57:27 crc kubenswrapper[4893]: I0121 06:57:27.857509 4893 generic.go:334] "Generic (PLEG): container finished" podID="5c435717-9f91-427d-ae9c-60db11c38d34" containerID="a98b7211a541d712927867b7791c92820deedd5de706426b58e5e4c69cf83e68" exitCode=0
Jan 21 06:57:27 crc kubenswrapper[4893]: I0121 06:57:27.857770 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-rvfqv" event={"ID":"5c435717-9f91-427d-ae9c-60db11c38d34","Type":"ContainerDied","Data":"a98b7211a541d712927867b7791c92820deedd5de706426b58e5e4c69cf83e68"}
Jan 21 06:57:27 crc kubenswrapper[4893]: I0121 06:57:27.857962 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-rvfqv" event={"ID":"5c435717-9f91-427d-ae9c-60db11c38d34","Type":"ContainerStarted","Data":"9d0e94b6db51978f469dff729897706174bdb4d1d770375a7834c4943f079b35"}
Jan 21 06:57:27 crc kubenswrapper[4893]: I0121 06:57:27.858952 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-rvfqv"
Jan 21 06:57:27 crc kubenswrapper[4893]: I0121 06:57:27.859340 4893 patch_prober.go:28] interesting pod/downloads-7954f5f757-rvfqv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body=
Jan 21 06:57:27 crc kubenswrapper[4893]: I0121 06:57:27.859402 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-rvfqv" podUID="5c435717-9f91-427d-ae9c-60db11c38d34" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused"
Jan 21 06:57:28 crc kubenswrapper[4893]: I0121 06:57:28.657070 4893 patch_prober.go:28] interesting pod/machine-config-daemon-hg78p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 06:57:28 crc kubenswrapper[4893]: I0121 06:57:28.657124 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 06:57:28 crc kubenswrapper[4893]: I0121 06:57:28.798584 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 06:57:28 crc kubenswrapper[4893]: I0121 06:57:28.868822 4893 patch_prober.go:28] interesting pod/downloads-7954f5f757-rvfqv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body=
Jan 21 06:57:28 crc kubenswrapper[4893]: I0121 06:57:28.868862 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-rvfqv" podUID="5c435717-9f91-427d-ae9c-60db11c38d34" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused"
Jan 21 06:57:31 crc kubenswrapper[4893]: E0121 06:57:31.921072 4893 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf92d48d9_4ed9_42bb_b811_a8f43bbac2cd.slice/crio-30a46dd98e139e7a99693b109572aba46ee6d867ba054f323d7681ed8520af76.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe44c297_715e_45f6_b165_244c39484f15.slice/crio-6e90638c7e2fe84da46c56cec7926888efc3ac30d454ea845a2742b8420340da.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe44c297_715e_45f6_b165_244c39484f15.slice/crio-conmon-6e90638c7e2fe84da46c56cec7926888efc3ac30d454ea845a2742b8420340da.scope\": RecentStats: unable to find data in memory cache]"
Jan 21 06:57:36 crc kubenswrapper[4893]: I0121 06:57:36.552552 4893 patch_prober.go:28] interesting pod/downloads-7954f5f757-rvfqv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body=
Jan 21 06:57:36 crc kubenswrapper[4893]: I0121 06:57:36.553167 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-rvfqv" podUID="5c435717-9f91-427d-ae9c-60db11c38d34" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused"
Jan 21 06:57:36 crc kubenswrapper[4893]: I0121 06:57:36.552569 4893 patch_prober.go:28] interesting pod/downloads-7954f5f757-rvfqv container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body=
Jan 21 06:57:36 crc kubenswrapper[4893]: I0121 06:57:36.553503 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-rvfqv" podUID="5c435717-9f91-427d-ae9c-60db11c38d34" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused"
Jan 21 06:57:38 crc kubenswrapper[4893]: I0121 06:57:38.933942 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4"
Jan 21 06:57:42 crc kubenswrapper[4893]: E0121 06:57:42.102697 4893 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf92d48d9_4ed9_42bb_b811_a8f43bbac2cd.slice/crio-30a46dd98e139e7a99693b109572aba46ee6d867ba054f323d7681ed8520af76.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe44c297_715e_45f6_b165_244c39484f15.slice/crio-conmon-6e90638c7e2fe84da46c56cec7926888efc3ac30d454ea845a2742b8420340da.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe44c297_715e_45f6_b165_244c39484f15.slice/crio-6e90638c7e2fe84da46c56cec7926888efc3ac30d454ea845a2742b8420340da.scope\": RecentStats: unable to find data in memory cache]"
Jan 21 06:57:46 crc kubenswrapper[4893]: I0121 06:57:46.552564 4893 patch_prober.go:28] interesting pod/downloads-7954f5f757-rvfqv container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body=
Jan 21 06:57:46 crc kubenswrapper[4893]: I0121 06:57:46.552712 4893 patch_prober.go:28] interesting pod/downloads-7954f5f757-rvfqv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body=
Jan 21 06:57:46 crc kubenswrapper[4893]: I0121 06:57:46.553345 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-rvfqv" podUID="5c435717-9f91-427d-ae9c-60db11c38d34" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused"
Jan 21 06:57:46 crc kubenswrapper[4893]: I0121 06:57:46.553265 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-rvfqv" podUID="5c435717-9f91-427d-ae9c-60db11c38d34" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused"
Jan 21 06:57:46 crc kubenswrapper[4893]: I0121 06:57:46.628629 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-t7bmk"
Jan 21 06:57:48 crc kubenswrapper[4893]: I0121 06:57:48.884347 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 21 06:57:48 crc kubenswrapper[4893]: I0121 06:57:48.885354 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 21 06:57:48 crc kubenswrapper[4893]: I0121 06:57:48.889431 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Jan 21 06:57:48 crc kubenswrapper[4893]: I0121 06:57:48.889772 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Jan 21 06:57:48 crc kubenswrapper[4893]: I0121 06:57:48.958866 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 21 06:57:49 crc kubenswrapper[4893]: I0121 06:57:49.034933 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2ead6ee3-c721-4215-9f9e-0a55d68dcd85-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2ead6ee3-c721-4215-9f9e-0a55d68dcd85\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 21 06:57:49 crc kubenswrapper[4893]: I0121 06:57:49.035031 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ead6ee3-c721-4215-9f9e-0a55d68dcd85-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2ead6ee3-c721-4215-9f9e-0a55d68dcd85\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 21 06:57:49 crc kubenswrapper[4893]: I0121 06:57:49.136258 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2ead6ee3-c721-4215-9f9e-0a55d68dcd85-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2ead6ee3-c721-4215-9f9e-0a55d68dcd85\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 21 06:57:49 crc kubenswrapper[4893]: I0121 06:57:49.136353 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2ead6ee3-c721-4215-9f9e-0a55d68dcd85-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2ead6ee3-c721-4215-9f9e-0a55d68dcd85\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 21 06:57:49 crc kubenswrapper[4893]: I0121 06:57:49.136403 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ead6ee3-c721-4215-9f9e-0a55d68dcd85-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2ead6ee3-c721-4215-9f9e-0a55d68dcd85\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 21 06:57:49 crc kubenswrapper[4893]: I0121 06:57:49.188209 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ead6ee3-c721-4215-9f9e-0a55d68dcd85-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2ead6ee3-c721-4215-9f9e-0a55d68dcd85\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 21 06:57:49 crc kubenswrapper[4893]: I0121 06:57:49.207664 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 21 06:57:52 crc kubenswrapper[4893]: E0121 06:57:52.386043 4893 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe44c297_715e_45f6_b165_244c39484f15.slice/crio-conmon-6e90638c7e2fe84da46c56cec7926888efc3ac30d454ea845a2742b8420340da.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf92d48d9_4ed9_42bb_b811_a8f43bbac2cd.slice/crio-30a46dd98e139e7a99693b109572aba46ee6d867ba054f323d7681ed8520af76.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe44c297_715e_45f6_b165_244c39484f15.slice/crio-6e90638c7e2fe84da46c56cec7926888efc3ac30d454ea845a2742b8420340da.scope\": RecentStats: unable to find data in memory cache]"
Jan 21 06:57:53 crc kubenswrapper[4893]: I0121 06:57:53.363846 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 21 06:57:53 crc kubenswrapper[4893]: I0121 06:57:53.365313 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 21 06:57:53 crc kubenswrapper[4893]: I0121 06:57:53.373619 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 21 06:57:53 crc kubenswrapper[4893]: I0121 06:57:53.446606 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4d9545d1-113a-4985-ad77-2bd1bd45ec7d-kubelet-dir\") pod \"installer-9-crc\" (UID: \"4d9545d1-113a-4985-ad77-2bd1bd45ec7d\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 21 06:57:53 crc kubenswrapper[4893]: I0121 06:57:53.446687 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4d9545d1-113a-4985-ad77-2bd1bd45ec7d-var-lock\") pod \"installer-9-crc\" (UID: \"4d9545d1-113a-4985-ad77-2bd1bd45ec7d\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 21 06:57:53 crc kubenswrapper[4893]: I0121 06:57:53.446764 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4d9545d1-113a-4985-ad77-2bd1bd45ec7d-kube-api-access\") pod \"installer-9-crc\" (UID: \"4d9545d1-113a-4985-ad77-2bd1bd45ec7d\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 21 06:57:53 crc kubenswrapper[4893]: I0121 06:57:53.548349 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4d9545d1-113a-4985-ad77-2bd1bd45ec7d-kube-api-access\") pod \"installer-9-crc\" (UID: \"4d9545d1-113a-4985-ad77-2bd1bd45ec7d\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 21 06:57:53 crc kubenswrapper[4893]: I0121 06:57:53.548487 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4d9545d1-113a-4985-ad77-2bd1bd45ec7d-kubelet-dir\") pod \"installer-9-crc\" (UID: \"4d9545d1-113a-4985-ad77-2bd1bd45ec7d\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 21 06:57:53 crc kubenswrapper[4893]: I0121 06:57:53.548515 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4d9545d1-113a-4985-ad77-2bd1bd45ec7d-var-lock\") pod \"installer-9-crc\" (UID: \"4d9545d1-113a-4985-ad77-2bd1bd45ec7d\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 21 06:57:53 crc kubenswrapper[4893]: I0121 06:57:53.548608 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4d9545d1-113a-4985-ad77-2bd1bd45ec7d-var-lock\") pod \"installer-9-crc\" (UID: \"4d9545d1-113a-4985-ad77-2bd1bd45ec7d\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 21 06:57:53 crc kubenswrapper[4893]: I0121 06:57:53.548605 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4d9545d1-113a-4985-ad77-2bd1bd45ec7d-kubelet-dir\") pod \"installer-9-crc\" (UID: \"4d9545d1-113a-4985-ad77-2bd1bd45ec7d\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 21 06:57:53 crc kubenswrapper[4893]: I0121 06:57:53.566546 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4d9545d1-113a-4985-ad77-2bd1bd45ec7d-kube-api-access\") pod \"installer-9-crc\" (UID: \"4d9545d1-113a-4985-ad77-2bd1bd45ec7d\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 21 06:57:53 crc kubenswrapper[4893]: I0121 06:57:53.711487 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 21 06:57:56 crc kubenswrapper[4893]: I0121 06:57:56.553492 4893 patch_prober.go:28] interesting pod/downloads-7954f5f757-rvfqv container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body=
Jan 21 06:57:56 crc kubenswrapper[4893]: I0121 06:57:56.553471 4893 patch_prober.go:28] interesting pod/downloads-7954f5f757-rvfqv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body=
Jan 21 06:57:56 crc kubenswrapper[4893]: I0121 06:57:56.553920 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-rvfqv" podUID="5c435717-9f91-427d-ae9c-60db11c38d34" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused"
Jan 21 06:57:56 crc kubenswrapper[4893]: I0121 06:57:56.553956 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-rvfqv" podUID="5c435717-9f91-427d-ae9c-60db11c38d34" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused"
Jan 21 06:57:56 crc kubenswrapper[4893]: I0121 06:57:56.553995 4893 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-rvfqv"
Jan 21 06:57:56 crc kubenswrapper[4893]: I0121 06:57:56.554694 4893 patch_prober.go:28] interesting pod/downloads-7954f5f757-rvfqv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body=
Jan 21 06:57:56 crc kubenswrapper[4893]: I0121 06:57:56.554776 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-rvfqv" podUID="5c435717-9f91-427d-ae9c-60db11c38d34" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused"
Jan 21 06:57:56 crc kubenswrapper[4893]: I0121 06:57:56.554785 4893 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"9d0e94b6db51978f469dff729897706174bdb4d1d770375a7834c4943f079b35"} pod="openshift-console/downloads-7954f5f757-rvfqv" containerMessage="Container download-server failed liveness probe, will be restarted"
Jan 21 06:57:56 crc kubenswrapper[4893]: I0121 06:57:56.554891 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-rvfqv" podUID="5c435717-9f91-427d-ae9c-60db11c38d34" containerName="download-server" containerID="cri-o://9d0e94b6db51978f469dff729897706174bdb4d1d770375a7834c4943f079b35" gracePeriod=2
Jan 21 06:57:58 crc kubenswrapper[4893]: I0121 06:57:58.656506 4893 patch_prober.go:28] interesting pod/machine-config-daemon-hg78p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 06:57:58 crc kubenswrapper[4893]: I0121 06:57:58.656583 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 06:57:58 crc kubenswrapper[4893]: I0121 06:57:58.656640 4893 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hg78p"
Jan 21 06:57:58 crc kubenswrapper[4893]: I0121 06:57:58.657329 4893 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"59520d6be8547ef44262866e4c11b1ae43ae8ef545545a93c291f5e238718a75"} pod="openshift-machine-config-operator/machine-config-daemon-hg78p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 06:57:58 crc kubenswrapper[4893]: I0121 06:57:58.657416 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" containerID="cri-o://59520d6be8547ef44262866e4c11b1ae43ae8ef545545a93c291f5e238718a75" gracePeriod=600
Jan 21 06:58:01 crc kubenswrapper[4893]: I0121 06:58:01.402961 4893 generic.go:334] "Generic (PLEG): container finished" podID="5c435717-9f91-427d-ae9c-60db11c38d34" containerID="9d0e94b6db51978f469dff729897706174bdb4d1d770375a7834c4943f079b35" exitCode=0
Jan 21 06:58:01 crc kubenswrapper[4893]: I0121 06:58:01.403018 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-rvfqv" event={"ID":"5c435717-9f91-427d-ae9c-60db11c38d34","Type":"ContainerDied","Data":"9d0e94b6db51978f469dff729897706174bdb4d1d770375a7834c4943f079b35"}
Jan 21 06:58:01 crc kubenswrapper[4893]: I0121 06:58:01.403079 4893 scope.go:117] "RemoveContainer" containerID="a98b7211a541d712927867b7791c92820deedd5de706426b58e5e4c69cf83e68"
Jan 21 06:58:01 crc kubenswrapper[4893]: I0121 06:58:01.409338 4893 generic.go:334] "Generic (PLEG): container finished" podID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerID="59520d6be8547ef44262866e4c11b1ae43ae8ef545545a93c291f5e238718a75" exitCode=0
Jan 21 06:58:01 crc kubenswrapper[4893]: I0121 06:58:01.409388 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" event={"ID":"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a","Type":"ContainerDied","Data":"59520d6be8547ef44262866e4c11b1ae43ae8ef545545a93c291f5e238718a75"}
Jan 21 06:58:02 crc kubenswrapper[4893]: E0121 06:58:02.502214 4893 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf92d48d9_4ed9_42bb_b811_a8f43bbac2cd.slice/crio-30a46dd98e139e7a99693b109572aba46ee6d867ba054f323d7681ed8520af76.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe44c297_715e_45f6_b165_244c39484f15.slice/crio-conmon-6e90638c7e2fe84da46c56cec7926888efc3ac30d454ea845a2742b8420340da.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe44c297_715e_45f6_b165_244c39484f15.slice/crio-6e90638c7e2fe84da46c56cec7926888efc3ac30d454ea845a2742b8420340da.scope\": RecentStats: unable to find data in memory cache]"
Jan 21 06:58:06 crc kubenswrapper[4893]: I0121 06:58:06.553108 4893 patch_prober.go:28] interesting pod/downloads-7954f5f757-rvfqv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body=
Jan 21 06:58:06 crc kubenswrapper[4893]: I0121 06:58:06.553233 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-rvfqv" podUID="5c435717-9f91-427d-ae9c-60db11c38d34" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused"
Jan 21 06:58:12 crc kubenswrapper[4893]: E0121 06:58:12.702467 4893 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe44c297_715e_45f6_b165_244c39484f15.slice/crio-6e90638c7e2fe84da46c56cec7926888efc3ac30d454ea845a2742b8420340da.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf92d48d9_4ed9_42bb_b811_a8f43bbac2cd.slice/crio-30a46dd98e139e7a99693b109572aba46ee6d867ba054f323d7681ed8520af76.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe44c297_715e_45f6_b165_244c39484f15.slice/crio-conmon-6e90638c7e2fe84da46c56cec7926888efc3ac30d454ea845a2742b8420340da.scope\": RecentStats: unable to find data in memory cache]"
Jan 21 06:58:12 crc kubenswrapper[4893]: E0121 06:58:12.753534 4893 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 21 06:58:12 crc kubenswrapper[4893]: E0121 06:58:12.753881 4893 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xzz84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-nq4z8_openshift-marketplace(582d8449-096d-4bfa-9dcc-9ef0b8661d50): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 21 06:58:12 crc kubenswrapper[4893]: E0121 06:58:12.755116 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-nq4z8" podUID="582d8449-096d-4bfa-9dcc-9ef0b8661d50"
Jan 21 06:58:13 crc kubenswrapper[4893]: E0121 06:58:13.734199 4893 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 21 06:58:13 crc kubenswrapper[4893]: E0121 06:58:13.734708 4893 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ws7jj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-8bgpm_openshift-marketplace(2514419f-4c60-442d-bbc7-0c9b8c765cc4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 21 06:58:13 crc kubenswrapper[4893]: E0121 06:58:13.735891 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-8bgpm" podUID="2514419f-4c60-442d-bbc7-0c9b8c765cc4"
Jan 21 06:58:16 crc kubenswrapper[4893]: I0121 06:58:16.552428 4893 patch_prober.go:28] interesting pod/downloads-7954f5f757-rvfqv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body=
Jan 21 06:58:16 crc kubenswrapper[4893]: I0121 06:58:16.552560 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-rvfqv" podUID="5c435717-9f91-427d-ae9c-60db11c38d34" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused"
Jan 21 06:58:16 crc kubenswrapper[4893]: E0121 06:58:16.729253 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8bgpm" podUID="2514419f-4c60-442d-bbc7-0c9b8c765cc4"
Jan 21 06:58:16 crc kubenswrapper[4893]: E0121 06:58:16.729456 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-nq4z8" podUID="582d8449-096d-4bfa-9dcc-9ef0b8661d50"
Jan 21 06:58:17 crc kubenswrapper[4893]: E0121 06:58:17.288003 4893 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Jan 21 06:58:17 crc kubenswrapper[4893]: E0121 06:58:17.288625 4893 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cmqkx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-kjxh2_openshift-marketplace(f92d48d9-4ed9-42bb-b811-a8f43bbac2cd): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 21 06:58:17 crc kubenswrapper[4893]: E0121 06:58:17.289883 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-kjxh2" podUID="f92d48d9-4ed9-42bb-b811-a8f43bbac2cd"
Jan 21 06:58:23 crc kubenswrapper[4893]: E0121 06:58:23.568423 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-kjxh2" podUID="f92d48d9-4ed9-42bb-b811-a8f43bbac2cd"
Jan 21 06:58:23 crc kubenswrapper[4893]: E0121 06:58:23.676600 4893 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Jan 21 06:58:23 crc kubenswrapper[4893]: E0121 06:58:23.678472 4893 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zd2v2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-4s5jn_openshift-marketplace(76395561-db8b-4fac-a5fd-14267030252a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 21 06:58:23 crc kubenswrapper[4893]: E0121 06:58:23.681391 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-4s5jn" podUID="76395561-db8b-4fac-a5fd-14267030252a"
Jan 21 06:58:23 crc kubenswrapper[4893]: E0121 06:58:23.681804 4893 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Jan 21 06:58:23 crc kubenswrapper[4893]: E0121 06:58:23.682081 4893 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wxfnk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-gv7xc_openshift-marketplace(15ac06c3-345b-4ced-8c19-2edf0c831b70): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 06:58:23 crc kubenswrapper[4893]: E0121 06:58:23.683373 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-gv7xc" podUID="15ac06c3-345b-4ced-8c19-2edf0c831b70" Jan 21 06:58:24 crc kubenswrapper[4893]: E0121 06:58:24.645892 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-4s5jn" podUID="76395561-db8b-4fac-a5fd-14267030252a" Jan 21 06:58:24 crc kubenswrapper[4893]: E0121 06:58:24.646037 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gv7xc" podUID="15ac06c3-345b-4ced-8c19-2edf0c831b70" Jan 21 06:58:24 crc kubenswrapper[4893]: E0121 06:58:24.701304 4893 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 21 06:58:24 crc kubenswrapper[4893]: E0121 06:58:24.701495 4893 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wc8m2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-ztll7_openshift-marketplace(78a7ed86-0417-446d-aeaa-b71f6beb71ec): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 06:58:24 crc kubenswrapper[4893]: E0121 06:58:24.703077 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-ztll7" podUID="78a7ed86-0417-446d-aeaa-b71f6beb71ec" Jan 21 06:58:24 crc kubenswrapper[4893]: E0121 06:58:24.730898 4893 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 21 06:58:24 crc kubenswrapper[4893]: E0121 06:58:24.731069 4893 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hks2h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-dpt49_openshift-marketplace(58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 06:58:24 crc kubenswrapper[4893]: E0121 06:58:24.733977 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-dpt49" podUID="58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06" Jan 21 06:58:24 crc kubenswrapper[4893]: E0121 06:58:24.740015 4893 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 21 06:58:24 crc kubenswrapper[4893]: E0121 06:58:24.740174 4893 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hmzrm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-t86sj_openshift-marketplace(be44c297-715e-45f6-b165-244c39484f15): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 06:58:24 crc kubenswrapper[4893]: E0121 06:58:24.741349 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-t86sj" podUID="be44c297-715e-45f6-b165-244c39484f15" Jan 21 06:58:24 crc kubenswrapper[4893]: E0121 06:58:24.824019 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-ztll7" podUID="78a7ed86-0417-446d-aeaa-b71f6beb71ec" Jan 21 06:58:24 crc kubenswrapper[4893]: E0121 06:58:24.824334 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-dpt49" podUID="58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06" Jan 21 06:58:24 crc kubenswrapper[4893]: E0121 06:58:24.824434 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-t86sj" podUID="be44c297-715e-45f6-b165-244c39484f15" Jan 21 06:58:25 crc kubenswrapper[4893]: I0121 06:58:25.225625 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 21 06:58:25 crc kubenswrapper[4893]: I0121 06:58:25.320659 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 21 06:58:25 crc kubenswrapper[4893]: W0121 06:58:25.331709 
4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod4d9545d1_113a_4985_ad77_2bd1bd45ec7d.slice/crio-d24fb36936dce95270727277a9d32bad727cfccdce001f41062adc7add13eebe WatchSource:0}: Error finding container d24fb36936dce95270727277a9d32bad727cfccdce001f41062adc7add13eebe: Status 404 returned error can't find the container with id d24fb36936dce95270727277a9d32bad727cfccdce001f41062adc7add13eebe Jan 21 06:58:25 crc kubenswrapper[4893]: I0121 06:58:25.831641 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" event={"ID":"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a","Type":"ContainerStarted","Data":"8f7067b47f82d2bb0d676445d6ea974a24da36b4a8f269831103214d2d596232"} Jan 21 06:58:25 crc kubenswrapper[4893]: I0121 06:58:25.834936 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"4d9545d1-113a-4985-ad77-2bd1bd45ec7d","Type":"ContainerStarted","Data":"d93630d10066cd2a841ae57f1d614dfe5e78959391a46ce0c36c0fddf4ad81bc"} Jan 21 06:58:25 crc kubenswrapper[4893]: I0121 06:58:25.834995 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"4d9545d1-113a-4985-ad77-2bd1bd45ec7d","Type":"ContainerStarted","Data":"d24fb36936dce95270727277a9d32bad727cfccdce001f41062adc7add13eebe"} Jan 21 06:58:25 crc kubenswrapper[4893]: I0121 06:58:25.842270 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2ead6ee3-c721-4215-9f9e-0a55d68dcd85","Type":"ContainerStarted","Data":"70e9a916f6878b4a58004be76074e93423b939b92a13868813785c82ea575941"} Jan 21 06:58:25 crc kubenswrapper[4893]: I0121 06:58:25.842322 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2ead6ee3-c721-4215-9f9e-0a55d68dcd85","Type":"ContainerStarted","Data":"7288d643c52b06bcd369d0ceec63a438d4860ed60cc93ce2efbeec03f185a38f"} Jan 21 06:58:25 crc kubenswrapper[4893]: I0121 06:58:25.845158 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-rvfqv" event={"ID":"5c435717-9f91-427d-ae9c-60db11c38d34","Type":"ContainerStarted","Data":"141ab2e10fff9d4da32d46470a0776f903cfd138e7c7c7c1f88fbdb463613a24"} Jan 21 06:58:25 crc kubenswrapper[4893]: I0121 06:58:25.845587 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-rvfqv" Jan 21 06:58:25 crc kubenswrapper[4893]: I0121 06:58:25.846038 4893 patch_prober.go:28] interesting pod/downloads-7954f5f757-rvfqv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Jan 21 06:58:25 crc kubenswrapper[4893]: I0121 06:58:25.846105 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-rvfqv" podUID="5c435717-9f91-427d-ae9c-60db11c38d34" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Jan 21 06:58:25 crc kubenswrapper[4893]: I0121 06:58:25.869989 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=37.869940918 podStartE2EDuration="37.869940918s" podCreationTimestamp="2026-01-21 
06:57:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:58:25.86225429 +0000 UTC m=+247.092600212" watchObservedRunningTime="2026-01-21 06:58:25.869940918 +0000 UTC m=+247.100286820" Jan 21 06:58:25 crc kubenswrapper[4893]: I0121 06:58:25.880763 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=32.880743136 podStartE2EDuration="32.880743136s" podCreationTimestamp="2026-01-21 06:57:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:58:25.875488367 +0000 UTC m=+247.105834279" watchObservedRunningTime="2026-01-21 06:58:25.880743136 +0000 UTC m=+247.111089038" Jan 21 06:58:26 crc kubenswrapper[4893]: I0121 06:58:26.552365 4893 patch_prober.go:28] interesting pod/downloads-7954f5f757-rvfqv container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Jan 21 06:58:26 crc kubenswrapper[4893]: I0121 06:58:26.552758 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-rvfqv" podUID="5c435717-9f91-427d-ae9c-60db11c38d34" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Jan 21 06:58:26 crc kubenswrapper[4893]: I0121 06:58:26.552407 4893 patch_prober.go:28] interesting pod/downloads-7954f5f757-rvfqv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Jan 21 06:58:26 crc kubenswrapper[4893]: I0121 06:58:26.552880 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-rvfqv" podUID="5c435717-9f91-427d-ae9c-60db11c38d34" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Jan 21 06:58:26 crc kubenswrapper[4893]: I0121 06:58:26.853948 4893 generic.go:334] "Generic (PLEG): container finished" podID="2ead6ee3-c721-4215-9f9e-0a55d68dcd85" containerID="70e9a916f6878b4a58004be76074e93423b939b92a13868813785c82ea575941" exitCode=0 Jan 21 06:58:26 crc kubenswrapper[4893]: I0121 06:58:26.854012 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2ead6ee3-c721-4215-9f9e-0a55d68dcd85","Type":"ContainerDied","Data":"70e9a916f6878b4a58004be76074e93423b939b92a13868813785c82ea575941"} Jan 21 06:58:26 crc kubenswrapper[4893]: I0121 06:58:26.854962 4893 patch_prober.go:28] interesting pod/downloads-7954f5f757-rvfqv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Jan 21 06:58:26 crc kubenswrapper[4893]: I0121 06:58:26.855016 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-rvfqv" podUID="5c435717-9f91-427d-ae9c-60db11c38d34" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Jan 21 06:58:28 crc 
kubenswrapper[4893]: I0121 06:58:28.222784 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 06:58:28 crc kubenswrapper[4893]: I0121 06:58:28.296884 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2ead6ee3-c721-4215-9f9e-0a55d68dcd85-kubelet-dir\") pod \"2ead6ee3-c721-4215-9f9e-0a55d68dcd85\" (UID: \"2ead6ee3-c721-4215-9f9e-0a55d68dcd85\") " Jan 21 06:58:28 crc kubenswrapper[4893]: I0121 06:58:28.296976 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ead6ee3-c721-4215-9f9e-0a55d68dcd85-kube-api-access\") pod \"2ead6ee3-c721-4215-9f9e-0a55d68dcd85\" (UID: \"2ead6ee3-c721-4215-9f9e-0a55d68dcd85\") " Jan 21 06:58:28 crc kubenswrapper[4893]: I0121 06:58:28.297047 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ead6ee3-c721-4215-9f9e-0a55d68dcd85-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2ead6ee3-c721-4215-9f9e-0a55d68dcd85" (UID: "2ead6ee3-c721-4215-9f9e-0a55d68dcd85"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:58:28 crc kubenswrapper[4893]: I0121 06:58:28.297345 4893 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2ead6ee3-c721-4215-9f9e-0a55d68dcd85-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 06:58:28 crc kubenswrapper[4893]: I0121 06:58:28.307952 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ead6ee3-c721-4215-9f9e-0a55d68dcd85-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2ead6ee3-c721-4215-9f9e-0a55d68dcd85" (UID: "2ead6ee3-c721-4215-9f9e-0a55d68dcd85"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:58:28 crc kubenswrapper[4893]: I0121 06:58:28.398307 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ead6ee3-c721-4215-9f9e-0a55d68dcd85-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 06:58:28 crc kubenswrapper[4893]: I0121 06:58:28.890856 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2ead6ee3-c721-4215-9f9e-0a55d68dcd85","Type":"ContainerDied","Data":"7288d643c52b06bcd369d0ceec63a438d4860ed60cc93ce2efbeec03f185a38f"} Jan 21 06:58:28 crc kubenswrapper[4893]: I0121 06:58:28.890949 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7288d643c52b06bcd369d0ceec63a438d4860ed60cc93ce2efbeec03f185a38f" Jan 21 06:58:28 crc kubenswrapper[4893]: I0121 06:58:28.890986 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 06:58:34 crc kubenswrapper[4893]: I0121 06:58:34.927545 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8bgpm" event={"ID":"2514419f-4c60-442d-bbc7-0c9b8c765cc4","Type":"ContainerStarted","Data":"adb96632c9997f51677fdf01098c030fd42d92a76852a57dec55f5b337a8cbea"} Jan 21 06:58:34 crc kubenswrapper[4893]: I0121 06:58:34.929723 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nq4z8" event={"ID":"582d8449-096d-4bfa-9dcc-9ef0b8661d50","Type":"ContainerStarted","Data":"bea99b73d232477af3de86c62a3fc3d6fadfccb19f32e62d7f9c8070a486ea63"} Jan 21 06:58:36 crc kubenswrapper[4893]: I0121 06:58:36.557758 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-rvfqv" Jan 21 06:58:36 crc kubenswrapper[4893]: I0121 06:58:36.941831 4893 generic.go:334] "Generic (PLEG): container finished" podID="582d8449-096d-4bfa-9dcc-9ef0b8661d50" containerID="bea99b73d232477af3de86c62a3fc3d6fadfccb19f32e62d7f9c8070a486ea63" exitCode=0 Jan 21 06:58:36 crc kubenswrapper[4893]: I0121 06:58:36.941888 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nq4z8" event={"ID":"582d8449-096d-4bfa-9dcc-9ef0b8661d50","Type":"ContainerDied","Data":"bea99b73d232477af3de86c62a3fc3d6fadfccb19f32e62d7f9c8070a486ea63"} Jan 21 06:58:37 crc kubenswrapper[4893]: I0121 06:58:37.953824 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gv7xc" event={"ID":"15ac06c3-345b-4ced-8c19-2edf0c831b70","Type":"ContainerStarted","Data":"8401f4f5b4146b4f84b058da71f42880d0da3c6ddf2376a2bfe205059de6d0ba"} Jan 21 06:58:37 crc kubenswrapper[4893]: I0121 06:58:37.956659 4893 generic.go:334] "Generic (PLEG): container finished" podID="2514419f-4c60-442d-bbc7-0c9b8c765cc4" containerID="adb96632c9997f51677fdf01098c030fd42d92a76852a57dec55f5b337a8cbea" exitCode=0 Jan 21 06:58:37 crc kubenswrapper[4893]: I0121 06:58:37.956728 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8bgpm" event={"ID":"2514419f-4c60-442d-bbc7-0c9b8c765cc4","Type":"ContainerDied","Data":"adb96632c9997f51677fdf01098c030fd42d92a76852a57dec55f5b337a8cbea"} Jan 21 06:58:42 crc kubenswrapper[4893]: I0121 06:58:42.083624 4893 generic.go:334] "Generic (PLEG): container finished" podID="15ac06c3-345b-4ced-8c19-2edf0c831b70" containerID="8401f4f5b4146b4f84b058da71f42880d0da3c6ddf2376a2bfe205059de6d0ba" exitCode=0 Jan 21 06:58:42 crc kubenswrapper[4893]: I0121 06:58:42.083701 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gv7xc" event={"ID":"15ac06c3-345b-4ced-8c19-2edf0c831b70","Type":"ContainerDied","Data":"8401f4f5b4146b4f84b058da71f42880d0da3c6ddf2376a2bfe205059de6d0ba"} Jan 21 06:58:44 crc kubenswrapper[4893]: I0121 06:58:44.106159 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nq4z8" event={"ID":"582d8449-096d-4bfa-9dcc-9ef0b8661d50","Type":"ContainerStarted","Data":"390ee49a13a57a08e5faa565061ce3fcd3d8165040988d57c7f6067fb85fec4e"} Jan 21 06:58:44 crc kubenswrapper[4893]: I0121 06:58:44.131961 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nq4z8" podStartSLOduration=3.671073423 podStartE2EDuration="1m26.131835003s" 
podCreationTimestamp="2026-01-21 06:57:18 +0000 UTC" firstStartedPulling="2026-01-21 06:57:20.36011839 +0000 UTC m=+181.590464292" lastFinishedPulling="2026-01-21 06:58:42.82087997 +0000 UTC m=+264.051225872" observedRunningTime="2026-01-21 06:58:44.127300316 +0000 UTC m=+265.357646218" watchObservedRunningTime="2026-01-21 06:58:44.131835003 +0000 UTC m=+265.362180915" Jan 21 06:58:46 crc kubenswrapper[4893]: I0121 06:58:46.354904 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-q7qn6"] Jan 21 06:58:49 crc kubenswrapper[4893]: I0121 06:58:49.058131 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nq4z8" Jan 21 06:58:49 crc kubenswrapper[4893]: I0121 06:58:49.058497 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nq4z8" Jan 21 06:58:50 crc kubenswrapper[4893]: I0121 06:58:50.047178 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nq4z8" Jan 21 06:58:50 crc kubenswrapper[4893]: I0121 06:58:50.151665 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nq4z8" Jan 21 06:59:01 crc kubenswrapper[4893]: I0121 06:59:01.477450 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ztll7" event={"ID":"78a7ed86-0417-446d-aeaa-b71f6beb71ec","Type":"ContainerStarted","Data":"ae430f92c4577ae9429808c0e001e8c7b0f6973fdd49089be51f3719a838f4da"} Jan 21 06:59:01 crc kubenswrapper[4893]: I0121 06:59:01.484088 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dpt49" event={"ID":"58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06","Type":"ContainerStarted","Data":"d71ce5950a0fe4a1a1362fb2324f67b0633d83a6caa802dd4384b515728c02da"} Jan 21 06:59:01 crc kubenswrapper[4893]: I0121 06:59:01.487591 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gv7xc" event={"ID":"15ac06c3-345b-4ced-8c19-2edf0c831b70","Type":"ContainerStarted","Data":"6e035160d709d1332922dd5a2145a3757c9276770bceb2f956394c75ed9f90bc"} Jan 21 06:59:01 crc kubenswrapper[4893]: I0121 06:59:01.490240 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4s5jn" event={"ID":"76395561-db8b-4fac-a5fd-14267030252a","Type":"ContainerStarted","Data":"89a6d80ca59e8ae507eb004e60365233070b672b6286b941ca70de352b6dbfa4"} Jan 21 06:59:01 crc kubenswrapper[4893]: I0121 06:59:01.492576 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8bgpm" event={"ID":"2514419f-4c60-442d-bbc7-0c9b8c765cc4","Type":"ContainerStarted","Data":"1bebeb105c8d65c04699978ada83e80e5a52886aad94e7cad8f6ae2acaad975a"} Jan 21 06:59:01 crc kubenswrapper[4893]: I0121 06:59:01.497098 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kjxh2" event={"ID":"f92d48d9-4ed9-42bb-b811-a8f43bbac2cd","Type":"ContainerStarted","Data":"d52d6e4c0a2d2ae6cd6f3417b774f4804b5e37c67bf8d8db8b5fcc16eecd319c"} Jan 21 06:59:01 crc kubenswrapper[4893]: I0121 06:59:01.498710 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t86sj" 
event={"ID":"be44c297-715e-45f6-b165-244c39484f15","Type":"ContainerStarted","Data":"f297d6fd178fa4cd728c91f196380b23b5286f4afb3f3e3cd00d5d4197db1b7f"} Jan 21 06:59:01 crc kubenswrapper[4893]: I0121 06:59:01.618517 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gv7xc" Jan 21 06:59:01 crc kubenswrapper[4893]: I0121 06:59:01.618714 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gv7xc" Jan 21 06:59:01 crc kubenswrapper[4893]: I0121 06:59:01.676189 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8bgpm" podStartSLOduration=5.204925422 podStartE2EDuration="1m43.676170972s" podCreationTimestamp="2026-01-21 06:57:18 +0000 UTC" firstStartedPulling="2026-01-21 06:57:20.360634666 +0000 UTC m=+181.590980568" lastFinishedPulling="2026-01-21 06:58:58.831880216 +0000 UTC m=+280.062226118" observedRunningTime="2026-01-21 06:59:01.675204073 +0000 UTC m=+282.905549995" watchObservedRunningTime="2026-01-21 06:59:01.676170972 +0000 UTC m=+282.906516874" Jan 21 06:59:01 crc kubenswrapper[4893]: I0121 06:59:01.798589 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gv7xc" podStartSLOduration=5.322426864 podStartE2EDuration="1m40.798566606s" podCreationTimestamp="2026-01-21 06:57:21 +0000 UTC" firstStartedPulling="2026-01-21 06:57:23.753171938 +0000 UTC m=+184.983517840" lastFinishedPulling="2026-01-21 06:58:59.22931169 +0000 UTC m=+280.459657582" observedRunningTime="2026-01-21 06:59:01.712796767 +0000 UTC m=+282.943142669" watchObservedRunningTime="2026-01-21 06:59:01.798566606 +0000 UTC m=+283.028912508" Jan 21 06:59:02 crc kubenswrapper[4893]: I0121 06:59:02.799995 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gv7xc" podUID="15ac06c3-345b-4ced-8c19-2edf0c831b70" containerName="registry-server" probeResult="failure" output=< Jan 21 06:59:02 crc kubenswrapper[4893]: timeout: failed to connect service ":50051" within 1s Jan 21 06:59:02 crc kubenswrapper[4893]: > Jan 21 06:59:03 crc kubenswrapper[4893]: E0121 06:59:03.560860 4893 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58afbc98_0ff5_4eec_9ffb_3b9a1a8c6b06.slice/crio-d71ce5950a0fe4a1a1362fb2324f67b0633d83a6caa802dd4384b515728c02da.scope\": RecentStats: unable to find data in memory cache]" Jan 21 06:59:04 crc kubenswrapper[4893]: I0121 06:59:04.686110 4893 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 06:59:04 crc kubenswrapper[4893]: I0121 06:59:04.686465 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://46a82b561fe0d124a785d8417b0f810757464a5ccc70c032a46eb0a4ad932939" gracePeriod=15 Jan 21 06:59:04 crc kubenswrapper[4893]: I0121 06:59:04.686635 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://c2417cb0495ebd48a0bf9f8e46971fdbd70fd7e7c312741cead38fec69d1d972" gracePeriod=15 Jan 21 06:59:04 crc 
kubenswrapper[4893]: I0121 06:59:04.686690 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://baf70c5621061fc94a32901eb6f15a0d15b2ceba333d27cf88624bf9aa4ebe82" gracePeriod=15 Jan 21 06:59:04 crc kubenswrapper[4893]: I0121 06:59:04.686723 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://1f2a508699e746bc42337b9e10d1cb94b36eb53292a5ca91de2e8f03eb8f671c" gracePeriod=15 Jan 21 06:59:04 crc kubenswrapper[4893]: I0121 06:59:04.686759 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://cf06f9b5e844685f04ee12cbf239e285f1597f6a3c6444a4160596392905c4a9" gracePeriod=15 Jan 21 06:59:04 crc kubenswrapper[4893]: I0121 06:59:04.688060 4893 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 06:59:04 crc kubenswrapper[4893]: E0121 06:59:04.688325 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 06:59:04 crc kubenswrapper[4893]: I0121 06:59:04.688337 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 06:59:04 crc kubenswrapper[4893]: E0121 06:59:04.688347 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 06:59:04 crc kubenswrapper[4893]: I0121 06:59:04.688353 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 06:59:04 crc kubenswrapper[4893]: E0121 06:59:04.688364 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 21 06:59:04 crc kubenswrapper[4893]: I0121 06:59:04.688370 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 21 06:59:04 crc kubenswrapper[4893]: E0121 06:59:04.688379 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 06:59:04 crc kubenswrapper[4893]: I0121 06:59:04.688384 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 06:59:04 crc kubenswrapper[4893]: E0121 06:59:04.688393 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 21 06:59:04 crc kubenswrapper[4893]: I0121 06:59:04.688398 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 21 06:59:04 crc kubenswrapper[4893]: E0121 06:59:04.688408 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ead6ee3-c721-4215-9f9e-0a55d68dcd85" 
containerName="pruner" Jan 21 06:59:04 crc kubenswrapper[4893]: I0121 06:59:04.688414 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ead6ee3-c721-4215-9f9e-0a55d68dcd85" containerName="pruner" Jan 21 06:59:04 crc kubenswrapper[4893]: E0121 06:59:04.688422 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 21 06:59:04 crc kubenswrapper[4893]: I0121 06:59:04.688427 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 21 06:59:04 crc kubenswrapper[4893]: E0121 06:59:04.688442 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 21 06:59:04 crc kubenswrapper[4893]: I0121 06:59:04.688449 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 21 06:59:04 crc kubenswrapper[4893]: I0121 06:59:04.688544 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 21 06:59:04 crc kubenswrapper[4893]: I0121 06:59:04.688555 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 21 06:59:04 crc kubenswrapper[4893]: I0121 06:59:04.688563 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 06:59:04 crc kubenswrapper[4893]: I0121 06:59:04.688572 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 06:59:04 crc kubenswrapper[4893]: I0121 06:59:04.688579 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ead6ee3-c721-4215-9f9e-0a55d68dcd85" containerName="pruner" Jan 21 06:59:04 crc kubenswrapper[4893]: I0121 06:59:04.688588 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 21 06:59:04 crc kubenswrapper[4893]: I0121 06:59:04.688595 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 06:59:04 crc kubenswrapper[4893]: E0121 06:59:04.688695 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 06:59:04 crc kubenswrapper[4893]: I0121 06:59:04.688702 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 06:59:04 crc kubenswrapper[4893]: I0121 06:59:04.688823 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 06:59:04 crc kubenswrapper[4893]: I0121 06:59:04.692458 4893 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 06:59:04 crc kubenswrapper[4893]: I0121 06:59:04.693616 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 06:59:04 crc kubenswrapper[4893]: I0121 06:59:04.701160 4893 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 21 06:59:04 crc kubenswrapper[4893]: I0121 06:59:04.817728 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 06:59:04 crc kubenswrapper[4893]: I0121 06:59:04.818086 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 06:59:04 crc kubenswrapper[4893]: I0121 06:59:04.818123 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:59:04 crc kubenswrapper[4893]: I0121 06:59:04.818197 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 06:59:04 crc kubenswrapper[4893]: I0121 06:59:04.818227 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 06:59:04 crc kubenswrapper[4893]: I0121 06:59:04.818253 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:59:04 crc kubenswrapper[4893]: I0121 06:59:04.818290 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 06:59:04 crc kubenswrapper[4893]: I0121 06:59:04.818308 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: 
\"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:59:04 crc kubenswrapper[4893]: I0121 06:59:04.920048 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:59:04 crc kubenswrapper[4893]: I0121 06:59:04.920416 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 06:59:04 crc kubenswrapper[4893]: I0121 06:59:04.920541 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 06:59:04 crc kubenswrapper[4893]: I0121 06:59:04.920247 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:59:04 crc kubenswrapper[4893]: I0121 06:59:04.920489 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 06:59:04 crc kubenswrapper[4893]: I0121 06:59:04.920803 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:59:04 crc kubenswrapper[4893]: I0121 06:59:04.920849 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 06:59:04 crc kubenswrapper[4893]: I0121 06:59:04.920980 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 06:59:04 crc kubenswrapper[4893]: I0121 06:59:04.921009 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" 
Jan 21 06:59:04 crc kubenswrapper[4893]: I0121 06:59:04.921031 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 21 06:59:04 crc kubenswrapper[4893]: I0121 06:59:04.921063 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 21 06:59:04 crc kubenswrapper[4893]: I0121 06:59:04.921066 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 06:59:04 crc kubenswrapper[4893]: I0121 06:59:04.921082 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 21 06:59:04 crc kubenswrapper[4893]: I0121 06:59:04.921103 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 21 06:59:04 crc kubenswrapper[4893]: I0121 06:59:04.921215 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 21 06:59:04 crc kubenswrapper[4893]: I0121 06:59:04.921298 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 06:59:05 crc kubenswrapper[4893]: E0121 06:59:05.034766 4893 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.246:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 21 06:59:05 crc kubenswrapper[4893]: I0121 06:59:05.035134 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 21 06:59:05 crc kubenswrapper[4893]: W0121 06:59:05.053249 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-13e3157a9b53dda7969f21f7b4551e52e20b48f3327fbdb05aa0d0efe8e40c0a WatchSource:0}: Error finding container 13e3157a9b53dda7969f21f7b4551e52e20b48f3327fbdb05aa0d0efe8e40c0a: Status 404 returned error can't find the container with id 13e3157a9b53dda7969f21f7b4551e52e20b48f3327fbdb05aa0d0efe8e40c0a
Jan 21 06:59:05 crc kubenswrapper[4893]: E0121 06:59:05.057279 4893 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.246:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188caccbd25121ab openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 06:59:05.056633259 +0000 UTC m=+286.286979161,LastTimestamp:2026-01-21 06:59:05.056633259 +0000 UTC m=+286.286979161,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 06:59:05 crc kubenswrapper[4893]: I0121 06:59:05.524107 4893 generic.go:334] "Generic (PLEG): container finished" podID="58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06" containerID="d71ce5950a0fe4a1a1362fb2324f67b0633d83a6caa802dd4384b515728c02da" exitCode=0
Jan 21 06:59:05 crc kubenswrapper[4893]: I0121 06:59:05.524230 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dpt49" event={"ID":"58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06","Type":"ContainerDied","Data":"d71ce5950a0fe4a1a1362fb2324f67b0633d83a6caa802dd4384b515728c02da"}
Jan 21 06:59:05 crc kubenswrapper[4893]: I0121 06:59:05.526406 4893 status_manager.go:851] "Failed to get status for pod" podUID="58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06" pod="openshift-marketplace/redhat-marketplace-dpt49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpt49\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:05 crc kubenswrapper[4893]: I0121 06:59:05.527996 4893 generic.go:334] "Generic (PLEG): container finished" podID="4d9545d1-113a-4985-ad77-2bd1bd45ec7d" containerID="d93630d10066cd2a841ae57f1d614dfe5e78959391a46ce0c36c0fddf4ad81bc" exitCode=0
Jan 21 06:59:05 crc kubenswrapper[4893]: I0121 06:59:05.528099 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"4d9545d1-113a-4985-ad77-2bd1bd45ec7d","Type":"ContainerDied","Data":"d93630d10066cd2a841ae57f1d614dfe5e78959391a46ce0c36c0fddf4ad81bc"}
Jan 21 06:59:05 crc kubenswrapper[4893]: I0121 06:59:05.530397 4893 status_manager.go:851] "Failed to get status for pod" podUID="4d9545d1-113a-4985-ad77-2bd1bd45ec7d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:05 crc kubenswrapper[4893]: I0121 06:59:05.531899 4893 generic.go:334] "Generic (PLEG): container finished" podID="f92d48d9-4ed9-42bb-b811-a8f43bbac2cd" containerID="d52d6e4c0a2d2ae6cd6f3417b774f4804b5e37c67bf8d8db8b5fcc16eecd319c" exitCode=0
Jan 21 06:59:05 crc kubenswrapper[4893]: I0121 06:59:05.531988 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kjxh2" event={"ID":"f92d48d9-4ed9-42bb-b811-a8f43bbac2cd","Type":"ContainerDied","Data":"d52d6e4c0a2d2ae6cd6f3417b774f4804b5e37c67bf8d8db8b5fcc16eecd319c"}
Jan 21 06:59:05 crc kubenswrapper[4893]: I0121 06:59:05.533064 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"13e3157a9b53dda7969f21f7b4551e52e20b48f3327fbdb05aa0d0efe8e40c0a"}
Jan 21 06:59:05 crc kubenswrapper[4893]: I0121 06:59:05.535379 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Jan 21 06:59:05 crc kubenswrapper[4893]: I0121 06:59:05.537387 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 21 06:59:05 crc kubenswrapper[4893]: I0121 06:59:05.538281 4893 status_manager.go:851] "Failed to get status for pod" podUID="58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06" pod="openshift-marketplace/redhat-marketplace-dpt49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpt49\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:05 crc kubenswrapper[4893]: I0121 06:59:05.539611 4893 status_manager.go:851] "Failed to get status for pod" podUID="4d9545d1-113a-4985-ad77-2bd1bd45ec7d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:05 crc kubenswrapper[4893]: I0121 06:59:05.544037 4893 status_manager.go:851] "Failed to get status for pod" podUID="58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06" pod="openshift-marketplace/redhat-marketplace-dpt49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpt49\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:05 crc kubenswrapper[4893]: I0121 06:59:05.544086 4893 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c2417cb0495ebd48a0bf9f8e46971fdbd70fd7e7c312741cead38fec69d1d972" exitCode=0
Jan 21 06:59:05 crc kubenswrapper[4893]: I0121 06:59:05.544181 4893 scope.go:117] "RemoveContainer" containerID="e31f030f8032e8656211c0ab53c7528c816983b0bb8919acf30b94ed2a439711"
Jan 21 06:59:05 crc kubenswrapper[4893]: I0121 06:59:05.544413 4893 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="baf70c5621061fc94a32901eb6f15a0d15b2ceba333d27cf88624bf9aa4ebe82" exitCode=0
Jan 21 06:59:05 crc kubenswrapper[4893]: I0121 06:59:05.544557 4893 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1f2a508699e746bc42337b9e10d1cb94b36eb53292a5ca91de2e8f03eb8f671c" exitCode=0
Jan 21 06:59:05 crc kubenswrapper[4893]: I0121 06:59:05.544578 4893 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="cf06f9b5e844685f04ee12cbf239e285f1597f6a3c6444a4160596392905c4a9" exitCode=2
Jan 21 06:59:05 crc kubenswrapper[4893]: I0121 06:59:05.545448 4893 status_manager.go:851] "Failed to get status for pod" podUID="f92d48d9-4ed9-42bb-b811-a8f43bbac2cd" pod="openshift-marketplace/community-operators-kjxh2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kjxh2\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:05 crc kubenswrapper[4893]: I0121 06:59:05.551126 4893 generic.go:334] "Generic (PLEG): container finished" podID="be44c297-715e-45f6-b165-244c39484f15" containerID="f297d6fd178fa4cd728c91f196380b23b5286f4afb3f3e3cd00d5d4197db1b7f" exitCode=0
Jan 21 06:59:05 crc kubenswrapper[4893]: I0121 06:59:05.551463 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t86sj" event={"ID":"be44c297-715e-45f6-b165-244c39484f15","Type":"ContainerDied","Data":"f297d6fd178fa4cd728c91f196380b23b5286f4afb3f3e3cd00d5d4197db1b7f"}
Jan 21 06:59:05 crc kubenswrapper[4893]: I0121 06:59:05.554733 4893 status_manager.go:851] "Failed to get status for pod" podUID="f92d48d9-4ed9-42bb-b811-a8f43bbac2cd" pod="openshift-marketplace/community-operators-kjxh2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kjxh2\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:05 crc kubenswrapper[4893]: I0121 06:59:05.555841 4893 status_manager.go:851] "Failed to get status for pod" podUID="be44c297-715e-45f6-b165-244c39484f15" pod="openshift-marketplace/community-operators-t86sj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-t86sj\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:05 crc kubenswrapper[4893]: I0121 06:59:05.556942 4893 status_manager.go:851] "Failed to get status for pod" podUID="4d9545d1-113a-4985-ad77-2bd1bd45ec7d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:05 crc kubenswrapper[4893]: I0121 06:59:05.557508 4893 status_manager.go:851] "Failed to get status for pod" podUID="58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06" pod="openshift-marketplace/redhat-marketplace-dpt49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpt49\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:05 crc kubenswrapper[4893]: I0121 06:59:05.558487 4893 generic.go:334] "Generic (PLEG): container finished" podID="78a7ed86-0417-446d-aeaa-b71f6beb71ec" containerID="ae430f92c4577ae9429808c0e001e8c7b0f6973fdd49089be51f3719a838f4da" exitCode=0
Jan 21 06:59:05 crc kubenswrapper[4893]: I0121 06:59:05.558518 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ztll7" event={"ID":"78a7ed86-0417-446d-aeaa-b71f6beb71ec","Type":"ContainerDied","Data":"ae430f92c4577ae9429808c0e001e8c7b0f6973fdd49089be51f3719a838f4da"}
Jan 21 06:59:05 crc kubenswrapper[4893]: I0121 06:59:05.563910 4893 status_manager.go:851] "Failed to get status for pod" podUID="58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06" pod="openshift-marketplace/redhat-marketplace-dpt49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpt49\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:05 crc kubenswrapper[4893]: I0121 06:59:05.564775 4893 status_manager.go:851] "Failed to get status for pod" podUID="78a7ed86-0417-446d-aeaa-b71f6beb71ec" pod="openshift-marketplace/redhat-marketplace-ztll7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ztll7\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:05 crc kubenswrapper[4893]: I0121 06:59:05.565629 4893 status_manager.go:851] "Failed to get status for pod" podUID="f92d48d9-4ed9-42bb-b811-a8f43bbac2cd" pod="openshift-marketplace/community-operators-kjxh2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kjxh2\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:05 crc kubenswrapper[4893]: I0121 06:59:05.566057 4893 status_manager.go:851] "Failed to get status for pod" podUID="be44c297-715e-45f6-b165-244c39484f15" pod="openshift-marketplace/community-operators-t86sj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-t86sj\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:05 crc kubenswrapper[4893]: I0121 06:59:05.566583 4893 status_manager.go:851] "Failed to get status for pod" podUID="4d9545d1-113a-4985-ad77-2bd1bd45ec7d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:06 crc kubenswrapper[4893]: E0121 06:59:06.154270 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:59:06Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:59:06Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:59:06Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:59:06Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:020b5bee2bbd09fbf64a1af808628bb76e9c70b9efdc49f38e5a50641590514c\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:78f8ee56f09c047b3acd7e5b6b8a0f9534952f418b658c9f5a6d45d12546e67c\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1670570239},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[],\\\"sizeBytes\\\":1178956511},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:aae73aa11d44b8831c829464aa5515a56a9a8ef17926d54a010e0e9215ecd643\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:cd24673e95503ac856405941c96e75f11ca6da85fe80950e0dd00bb1062f9f47\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1166891762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:06 crc kubenswrapper[4893]: E0121 06:59:06.155277 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:06 crc kubenswrapper[4893]: E0121 06:59:06.155521 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:06 crc kubenswrapper[4893]: E0121 06:59:06.155760 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:06 crc kubenswrapper[4893]: E0121 06:59:06.155977 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:06 crc kubenswrapper[4893]: E0121 06:59:06.156068 4893 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Jan 21 06:59:06 crc kubenswrapper[4893]: I0121 06:59:06.594891 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"0447c30816d5b944b293a2866b3e88c5f24d43f358d35ad369c11a3b01ace18d"}
Jan 21 06:59:06 crc kubenswrapper[4893]: I0121 06:59:06.595398 4893 status_manager.go:851] "Failed to get status for pod" podUID="78a7ed86-0417-446d-aeaa-b71f6beb71ec" pod="openshift-marketplace/redhat-marketplace-ztll7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ztll7\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:06 crc kubenswrapper[4893]: I0121 06:59:06.595731 4893 status_manager.go:851] "Failed to get status for pod" podUID="f92d48d9-4ed9-42bb-b811-a8f43bbac2cd" pod="openshift-marketplace/community-operators-kjxh2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kjxh2\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:06 crc kubenswrapper[4893]: E0121 06:59:06.595913 4893 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.246:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 21 06:59:06 crc kubenswrapper[4893]: I0121 06:59:06.596029 4893 status_manager.go:851] "Failed to get status for pod" podUID="be44c297-715e-45f6-b165-244c39484f15" pod="openshift-marketplace/community-operators-t86sj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-t86sj\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:06 crc kubenswrapper[4893]: I0121 06:59:06.596316 4893 status_manager.go:851] "Failed to get status for pod" podUID="4d9545d1-113a-4985-ad77-2bd1bd45ec7d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:06 crc kubenswrapper[4893]: I0121 06:59:06.596686 4893 status_manager.go:851] "Failed to get status for pod" podUID="58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06" pod="openshift-marketplace/redhat-marketplace-dpt49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpt49\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:06 crc kubenswrapper[4893]: I0121 06:59:06.598427 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 21 06:59:07 crc kubenswrapper[4893]: I0121 06:59:07.301039 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 21 06:59:07 crc kubenswrapper[4893]: I0121 06:59:07.302446 4893 status_manager.go:851] "Failed to get status for pod" podUID="58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06" pod="openshift-marketplace/redhat-marketplace-dpt49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpt49\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:07 crc kubenswrapper[4893]: I0121 06:59:07.303032 4893 status_manager.go:851] "Failed to get status for pod" podUID="78a7ed86-0417-446d-aeaa-b71f6beb71ec" pod="openshift-marketplace/redhat-marketplace-ztll7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ztll7\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:07 crc kubenswrapper[4893]: I0121 06:59:07.303349 4893 status_manager.go:851] "Failed to get status for pod" podUID="be44c297-715e-45f6-b165-244c39484f15" pod="openshift-marketplace/community-operators-t86sj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-t86sj\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:07 crc kubenswrapper[4893]: I0121 06:59:07.303889 4893 status_manager.go:851] "Failed to get status for pod" podUID="f92d48d9-4ed9-42bb-b811-a8f43bbac2cd" pod="openshift-marketplace/community-operators-kjxh2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kjxh2\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:07 crc kubenswrapper[4893]: I0121 06:59:07.304251 4893 status_manager.go:851] "Failed to get status for pod" podUID="4d9545d1-113a-4985-ad77-2bd1bd45ec7d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:07 crc kubenswrapper[4893]: I0121 06:59:07.398796 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4d9545d1-113a-4985-ad77-2bd1bd45ec7d-kubelet-dir\") pod \"4d9545d1-113a-4985-ad77-2bd1bd45ec7d\" (UID: \"4d9545d1-113a-4985-ad77-2bd1bd45ec7d\") "
Jan 21 06:59:07 crc kubenswrapper[4893]: I0121 06:59:07.398944 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d9545d1-113a-4985-ad77-2bd1bd45ec7d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4d9545d1-113a-4985-ad77-2bd1bd45ec7d" (UID: "4d9545d1-113a-4985-ad77-2bd1bd45ec7d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 06:59:07 crc kubenswrapper[4893]: I0121 06:59:07.398992 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4d9545d1-113a-4985-ad77-2bd1bd45ec7d-kube-api-access\") pod \"4d9545d1-113a-4985-ad77-2bd1bd45ec7d\" (UID: \"4d9545d1-113a-4985-ad77-2bd1bd45ec7d\") "
Jan 21 06:59:07 crc kubenswrapper[4893]: I0121 06:59:07.399101 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4d9545d1-113a-4985-ad77-2bd1bd45ec7d-var-lock\") pod \"4d9545d1-113a-4985-ad77-2bd1bd45ec7d\" (UID: \"4d9545d1-113a-4985-ad77-2bd1bd45ec7d\") "
Jan 21 06:59:07 crc kubenswrapper[4893]: I0121 06:59:07.399167 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d9545d1-113a-4985-ad77-2bd1bd45ec7d-var-lock" (OuterVolumeSpecName: "var-lock") pod "4d9545d1-113a-4985-ad77-2bd1bd45ec7d" (UID: "4d9545d1-113a-4985-ad77-2bd1bd45ec7d"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 06:59:07 crc kubenswrapper[4893]: I0121 06:59:07.399635 4893 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4d9545d1-113a-4985-ad77-2bd1bd45ec7d-var-lock\") on node \"crc\" DevicePath \"\""
Jan 21 06:59:07 crc kubenswrapper[4893]: I0121 06:59:07.399661 4893 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4d9545d1-113a-4985-ad77-2bd1bd45ec7d-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 21 06:59:07 crc kubenswrapper[4893]: I0121 06:59:07.404376 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d9545d1-113a-4985-ad77-2bd1bd45ec7d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4d9545d1-113a-4985-ad77-2bd1bd45ec7d" (UID: "4d9545d1-113a-4985-ad77-2bd1bd45ec7d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:59:07 crc kubenswrapper[4893]: I0121 06:59:07.500684 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4d9545d1-113a-4985-ad77-2bd1bd45ec7d-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 21 06:59:07 crc kubenswrapper[4893]: I0121 06:59:07.741398 4893 generic.go:334] "Generic (PLEG): container finished" podID="76395561-db8b-4fac-a5fd-14267030252a" containerID="89a6d80ca59e8ae507eb004e60365233070b672b6286b941ca70de352b6dbfa4" exitCode=0
Jan 21 06:59:07 crc kubenswrapper[4893]: I0121 06:59:07.741435 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4s5jn" event={"ID":"76395561-db8b-4fac-a5fd-14267030252a","Type":"ContainerDied","Data":"89a6d80ca59e8ae507eb004e60365233070b672b6286b941ca70de352b6dbfa4"}
Jan 21 06:59:07 crc kubenswrapper[4893]: I0121 06:59:07.741966 4893 status_manager.go:851] "Failed to get status for pod" podUID="4d9545d1-113a-4985-ad77-2bd1bd45ec7d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:07 crc kubenswrapper[4893]: I0121 06:59:07.742230 4893 status_manager.go:851] "Failed to get status for pod" podUID="76395561-db8b-4fac-a5fd-14267030252a" pod="openshift-marketplace/redhat-operators-4s5jn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4s5jn\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:07 crc kubenswrapper[4893]: I0121 06:59:07.742949 4893 status_manager.go:851] "Failed to get status for pod" podUID="58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06" pod="openshift-marketplace/redhat-marketplace-dpt49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpt49\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:07 crc kubenswrapper[4893]: I0121 06:59:07.743151 4893 status_manager.go:851] "Failed to get status for pod" podUID="78a7ed86-0417-446d-aeaa-b71f6beb71ec" pod="openshift-marketplace/redhat-marketplace-ztll7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ztll7\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:07 crc kubenswrapper[4893]: I0121 06:59:07.743354 4893 status_manager.go:851] "Failed to get status for pod" podUID="be44c297-715e-45f6-b165-244c39484f15" pod="openshift-marketplace/community-operators-t86sj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-t86sj\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:07 crc kubenswrapper[4893]: I0121 06:59:07.743638 4893 status_manager.go:851] "Failed to get status for pod" podUID="f92d48d9-4ed9-42bb-b811-a8f43bbac2cd" pod="openshift-marketplace/community-operators-kjxh2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kjxh2\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:07 crc kubenswrapper[4893]: I0121 06:59:07.748392 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 21 06:59:07 crc kubenswrapper[4893]: I0121 06:59:07.748719 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"4d9545d1-113a-4985-ad77-2bd1bd45ec7d","Type":"ContainerDied","Data":"d24fb36936dce95270727277a9d32bad727cfccdce001f41062adc7add13eebe"}
Jan 21 06:59:07 crc kubenswrapper[4893]: I0121 06:59:07.748750 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d24fb36936dce95270727277a9d32bad727cfccdce001f41062adc7add13eebe"
Jan 21 06:59:07 crc kubenswrapper[4893]: I0121 06:59:07.755848 4893 status_manager.go:851] "Failed to get status for pod" podUID="4d9545d1-113a-4985-ad77-2bd1bd45ec7d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:07 crc kubenswrapper[4893]: I0121 06:59:07.756730 4893 status_manager.go:851] "Failed to get status for pod" podUID="76395561-db8b-4fac-a5fd-14267030252a" pod="openshift-marketplace/redhat-operators-4s5jn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4s5jn\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:07 crc kubenswrapper[4893]: I0121 06:59:07.757540 4893 status_manager.go:851] "Failed to get status for pod" podUID="58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06" pod="openshift-marketplace/redhat-marketplace-dpt49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpt49\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:07 crc kubenswrapper[4893]: I0121 06:59:07.757972 4893 status_manager.go:851] "Failed to get status for pod" podUID="78a7ed86-0417-446d-aeaa-b71f6beb71ec" pod="openshift-marketplace/redhat-marketplace-ztll7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ztll7\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:07 crc kubenswrapper[4893]: I0121 06:59:07.758340 4893 status_manager.go:851] "Failed to get status for pod" podUID="f92d48d9-4ed9-42bb-b811-a8f43bbac2cd" pod="openshift-marketplace/community-operators-kjxh2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kjxh2\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:07 crc kubenswrapper[4893]: I0121 06:59:07.758694 4893 status_manager.go:851] "Failed to get status for pod" podUID="be44c297-715e-45f6-b165-244c39484f15" pod="openshift-marketplace/community-operators-t86sj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-t86sj\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:07 crc kubenswrapper[4893]: I0121 06:59:07.759763 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 21 06:59:07 crc kubenswrapper[4893]: I0121 06:59:07.760662 4893 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="46a82b561fe0d124a785d8417b0f810757464a5ccc70c032a46eb0a4ad932939" exitCode=0
Jan 21 06:59:07 crc kubenswrapper[4893]: E0121 06:59:07.761185 4893 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.246:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 21 06:59:08 crc kubenswrapper[4893]: I0121 06:59:08.199076 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 21 06:59:08 crc kubenswrapper[4893]: I0121 06:59:08.200717 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 06:59:08 crc kubenswrapper[4893]: I0121 06:59:08.201460 4893 status_manager.go:851] "Failed to get status for pod" podUID="f92d48d9-4ed9-42bb-b811-a8f43bbac2cd" pod="openshift-marketplace/community-operators-kjxh2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kjxh2\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:08 crc kubenswrapper[4893]: I0121 06:59:08.209556 4893 status_manager.go:851] "Failed to get status for pod" podUID="be44c297-715e-45f6-b165-244c39484f15" pod="openshift-marketplace/community-operators-t86sj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-t86sj\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:08 crc kubenswrapper[4893]: I0121 06:59:08.210251 4893 status_manager.go:851] "Failed to get status for pod" podUID="4d9545d1-113a-4985-ad77-2bd1bd45ec7d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:08 crc kubenswrapper[4893]: I0121 06:59:08.210628 4893 status_manager.go:851] "Failed to get status for pod" podUID="76395561-db8b-4fac-a5fd-14267030252a" pod="openshift-marketplace/redhat-operators-4s5jn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4s5jn\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:08 crc kubenswrapper[4893]: I0121 06:59:08.210952 4893 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:08 crc kubenswrapper[4893]: I0121 06:59:08.216661 4893 status_manager.go:851] "Failed to get status for pod" podUID="58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06" pod="openshift-marketplace/redhat-marketplace-dpt49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpt49\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:08 crc kubenswrapper[4893]: I0121 06:59:08.217957 4893 status_manager.go:851] "Failed to get status for pod" podUID="78a7ed86-0417-446d-aeaa-b71f6beb71ec" pod="openshift-marketplace/redhat-marketplace-ztll7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ztll7\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:08 crc kubenswrapper[4893]: I0121 06:59:08.422482 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 21 06:59:08 crc kubenswrapper[4893]: I0121 06:59:08.422689 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 21 06:59:08 crc kubenswrapper[4893]: I0121 06:59:08.422718 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 21 06:59:08 crc kubenswrapper[4893]: I0121 06:59:08.423051 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 06:59:08 crc kubenswrapper[4893]: I0121 06:59:08.423105 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 06:59:08 crc kubenswrapper[4893]: I0121 06:59:08.423127 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 06:59:08 crc kubenswrapper[4893]: I0121 06:59:08.524109 4893 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 21 06:59:08 crc kubenswrapper[4893]: I0121 06:59:08.524162 4893 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\""
Jan 21 06:59:08 crc kubenswrapper[4893]: I0121 06:59:08.524174 4893 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\""
Jan 21 06:59:08 crc kubenswrapper[4893]: I0121 06:59:08.800082 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 21 06:59:08 crc kubenswrapper[4893]: I0121 06:59:08.801168 4893 scope.go:117] "RemoveContainer" containerID="c2417cb0495ebd48a0bf9f8e46971fdbd70fd7e7c312741cead38fec69d1d972"
Jan 21 06:59:08 crc kubenswrapper[4893]: I0121 06:59:08.801181 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 06:59:08 crc kubenswrapper[4893]: I0121 06:59:08.822011 4893 status_manager.go:851] "Failed to get status for pod" podUID="78a7ed86-0417-446d-aeaa-b71f6beb71ec" pod="openshift-marketplace/redhat-marketplace-ztll7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ztll7\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:08 crc kubenswrapper[4893]: I0121 06:59:08.822717 4893 status_manager.go:851] "Failed to get status for pod" podUID="f92d48d9-4ed9-42bb-b811-a8f43bbac2cd" pod="openshift-marketplace/community-operators-kjxh2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kjxh2\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:08 crc kubenswrapper[4893]: I0121 06:59:08.823171 4893 status_manager.go:851] "Failed to get status for pod" podUID="be44c297-715e-45f6-b165-244c39484f15" pod="openshift-marketplace/community-operators-t86sj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-t86sj\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:08 crc kubenswrapper[4893]: I0121 06:59:08.823571 4893 status_manager.go:851] "Failed to get status for pod" podUID="4d9545d1-113a-4985-ad77-2bd1bd45ec7d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:08 crc kubenswrapper[4893]: I0121 06:59:08.823989 4893 status_manager.go:851] "Failed to get status for pod" podUID="76395561-db8b-4fac-a5fd-14267030252a" pod="openshift-marketplace/redhat-operators-4s5jn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4s5jn\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:08 crc kubenswrapper[4893]: I0121 06:59:08.824279 4893 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:08 crc kubenswrapper[4893]: I0121 06:59:08.824561 4893 status_manager.go:851] "Failed to get status for pod" podUID="58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06" pod="openshift-marketplace/redhat-marketplace-dpt49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpt49\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:08 crc kubenswrapper[4893]: I0121 06:59:08.825905 4893 scope.go:117] "RemoveContainer" containerID="baf70c5621061fc94a32901eb6f15a0d15b2ceba333d27cf88624bf9aa4ebe82"
Jan 21 06:59:08 crc kubenswrapper[4893]: I0121 06:59:08.847278 4893 scope.go:117] "RemoveContainer" containerID="1f2a508699e746bc42337b9e10d1cb94b36eb53292a5ca91de2e8f03eb8f671c"
Jan 21 06:59:08 crc kubenswrapper[4893]: I0121 06:59:08.911270 4893 scope.go:117] "RemoveContainer" containerID="cf06f9b5e844685f04ee12cbf239e285f1597f6a3c6444a4160596392905c4a9"
Jan 21 06:59:08 crc kubenswrapper[4893]: I0121 06:59:08.940278 4893 scope.go:117] "RemoveContainer" containerID="46a82b561fe0d124a785d8417b0f810757464a5ccc70c032a46eb0a4ad932939"
Jan 21 06:59:08 crc kubenswrapper[4893]: I0121 06:59:08.963943 4893 scope.go:117] "RemoveContainer" containerID="ea6b6283f3649f6063f4cc830b783dfa76935b376ab6feda1f354e3958526596"
Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.072692 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8bgpm"
Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.073392 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8bgpm"
Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.125469 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8bgpm"
Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.125963 4893 status_manager.go:851] "Failed to get status for pod" podUID="78a7ed86-0417-446d-aeaa-b71f6beb71ec" pod="openshift-marketplace/redhat-marketplace-ztll7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ztll7\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.126242 4893 status_manager.go:851] "Failed to get status for pod" podUID="f92d48d9-4ed9-42bb-b811-a8f43bbac2cd" pod="openshift-marketplace/community-operators-kjxh2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kjxh2\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.126601 4893 status_manager.go:851] "Failed to get status for pod" podUID="be44c297-715e-45f6-b165-244c39484f15" pod="openshift-marketplace/community-operators-t86sj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-t86sj\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.126877 4893 status_manager.go:851] "Failed to get status for pod" podUID="4d9545d1-113a-4985-ad77-2bd1bd45ec7d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.127165 4893 status_manager.go:851] "Failed to get status for pod" podUID="76395561-db8b-4fac-a5fd-14267030252a" pod="openshift-marketplace/redhat-operators-4s5jn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4s5jn\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.127373 4893 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.127749 4893 status_manager.go:851] "Failed to get status for pod" podUID="58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06" pod="openshift-marketplace/redhat-marketplace-dpt49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpt49\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.128000 4893 status_manager.go:851] "Failed to get status for pod" podUID="2514419f-4c60-442d-bbc7-0c9b8c765cc4" pod="openshift-marketplace/certified-operators-8bgpm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8bgpm\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.583702 4893 status_manager.go:851] "Failed to get status for pod" podUID="2514419f-4c60-442d-bbc7-0c9b8c765cc4" pod="openshift-marketplace/certified-operators-8bgpm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8bgpm\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.584520 4893 status_manager.go:851] "Failed to get status for pod" podUID="78a7ed86-0417-446d-aeaa-b71f6beb71ec" pod="openshift-marketplace/redhat-marketplace-ztll7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ztll7\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.584980 4893 status_manager.go:851] "Failed to get status for pod" podUID="f92d48d9-4ed9-42bb-b811-a8f43bbac2cd" pod="openshift-marketplace/community-operators-kjxh2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kjxh2\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.585314 4893 status_manager.go:851] "Failed to get status for pod" podUID="be44c297-715e-45f6-b165-244c39484f15" pod="openshift-marketplace/community-operators-t86sj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-t86sj\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.585651 4893 status_manager.go:851] "Failed to get status for pod" podUID="4d9545d1-113a-4985-ad77-2bd1bd45ec7d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.585967 4893 status_manager.go:851] "Failed to get status for pod" podUID="76395561-db8b-4fac-a5fd-14267030252a" pod="openshift-marketplace/redhat-operators-4s5jn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4s5jn\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.586380 4893 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.586694 4893 status_manager.go:851] "Failed to get status for pod" podUID="58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06" pod="openshift-marketplace/redhat-marketplace-dpt49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpt49\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.591310 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes"
Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.811334 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t86sj" event={"ID":"be44c297-715e-45f6-b165-244c39484f15","Type":"ContainerStarted","Data":"52c547879e3b4d65287c908ef33383dc84a87279c7f97ebd4752ff8c64c7ba4d"}
Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.812127 4893 status_manager.go:851] "Failed to get status for pod" podUID="78a7ed86-0417-446d-aeaa-b71f6beb71ec" pod="openshift-marketplace/redhat-marketplace-ztll7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ztll7\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.812439 4893 status_manager.go:851] "Failed to get status for pod" podUID="f92d48d9-4ed9-42bb-b811-a8f43bbac2cd" pod="openshift-marketplace/community-operators-kjxh2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kjxh2\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.812713 4893 status_manager.go:851] "Failed to get status for pod" podUID="be44c297-715e-45f6-b165-244c39484f15" pod="openshift-marketplace/community-operators-t86sj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-t86sj\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.813219 4893 status_manager.go:851] "Failed to get status for pod" podUID="4d9545d1-113a-4985-ad77-2bd1bd45ec7d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.813594 4893 status_manager.go:851] "Failed to get status for pod" podUID="76395561-db8b-4fac-a5fd-14267030252a" pod="openshift-marketplace/redhat-operators-4s5jn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4s5jn\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.814089 4893 status_manager.go:851] "Failed to get status for pod" podUID="58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06" pod="openshift-marketplace/redhat-marketplace-dpt49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpt49\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.814460 4893 status_manager.go:851] "Failed to get status for pod" podUID="2514419f-4c60-442d-bbc7-0c9b8c765cc4" pod="openshift-marketplace/certified-operators-8bgpm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8bgpm\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.814841 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ztll7" event={"ID":"78a7ed86-0417-446d-aeaa-b71f6beb71ec","Type":"ContainerStarted","Data":"8ad0c5d8c32aed0d8b378a0d58420a48f56232bb9c8ef2eb53976c5d3f921874"}
Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.815482 4893 status_manager.go:851] "Failed to get status for pod" podUID="78a7ed86-0417-446d-aeaa-b71f6beb71ec" pod="openshift-marketplace/redhat-marketplace-ztll7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ztll7\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.816043 4893 status_manager.go:851] "Failed to get status for pod" podUID="f92d48d9-4ed9-42bb-b811-a8f43bbac2cd" pod="openshift-marketplace/community-operators-kjxh2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kjxh2\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.816388 4893 status_manager.go:851] "Failed to get status for pod" podUID="be44c297-715e-45f6-b165-244c39484f15" pod="openshift-marketplace/community-operators-t86sj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-t86sj\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.816542 4893 status_manager.go:851] "Failed to get status for pod" podUID="4d9545d1-113a-4985-ad77-2bd1bd45ec7d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.816730 4893 status_manager.go:851] "Failed to get status for pod" podUID="76395561-db8b-4fac-a5fd-14267030252a" pod="openshift-marketplace/redhat-operators-4s5jn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4s5jn\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.816896 4893 status_manager.go:851] "Failed to get status for pod" podUID="58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06" pod="openshift-marketplace/redhat-marketplace-dpt49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpt49\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.817069 4893 status_manager.go:851] "Failed to get status for pod" podUID="2514419f-4c60-442d-bbc7-0c9b8c765cc4" pod="openshift-marketplace/certified-operators-8bgpm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8bgpm\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.822212 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dpt49" event={"ID":"58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06","Type":"ContainerStarted","Data":"0a7ab862873cb85dca16c1cbb25f2bf1b69edef599ad269c10ad7843a22fcc76"}
Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.822930 4893 status_manager.go:851] "Failed to get status for pod" podUID="78a7ed86-0417-446d-aeaa-b71f6beb71ec" pod="openshift-marketplace/redhat-marketplace-ztll7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ztll7\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.823178 4893 status_manager.go:851] "Failed to get status for pod" podUID="f92d48d9-4ed9-42bb-b811-a8f43bbac2cd" pod="openshift-marketplace/community-operators-kjxh2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kjxh2\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.823618 4893 status_manager.go:851] "Failed to get status for pod" podUID="be44c297-715e-45f6-b165-244c39484f15" pod="openshift-marketplace/community-operators-t86sj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-t86sj\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.823896 4893 status_manager.go:851] "Failed to get status for pod" podUID="4d9545d1-113a-4985-ad77-2bd1bd45ec7d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.824403 4893 status_manager.go:851] "Failed to get status for pod" podUID="76395561-db8b-4fac-a5fd-14267030252a" pod="openshift-marketplace/redhat-operators-4s5jn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4s5jn\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.824563 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4s5jn" event={"ID":"76395561-db8b-4fac-a5fd-14267030252a","Type":"ContainerStarted","Data":"d2df099c10c30f49e9d8e3efa06b7ab9d76e2ca1ddf24231ce626c559b91ae4c"}
Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.824891 4893 status_manager.go:851] "Failed to get status for pod" podUID="58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06" pod="openshift-marketplace/redhat-marketplace-dpt49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpt49\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.825601 4893 status_manager.go:851] "Failed to get status for pod" podUID="2514419f-4c60-442d-bbc7-0c9b8c765cc4" pod="openshift-marketplace/certified-operators-8bgpm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8bgpm\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.826044 4893 status_manager.go:851] "Failed to get status for pod" podUID="76395561-db8b-4fac-a5fd-14267030252a" pod="openshift-marketplace/redhat-operators-4s5jn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4s5jn\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.826222 4893 status_manager.go:851] "Failed to get status for pod" podUID="58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06" pod="openshift-marketplace/redhat-marketplace-dpt49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpt49\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.827755 4893 status_manager.go:851] "Failed to get status for pod" podUID="2514419f-4c60-442d-bbc7-0c9b8c765cc4" pod="openshift-marketplace/certified-operators-8bgpm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8bgpm\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.827982 4893 status_manager.go:851] "Failed to get status for pod" podUID="78a7ed86-0417-446d-aeaa-b71f6beb71ec" pod="openshift-marketplace/redhat-marketplace-ztll7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ztll7\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.828152 4893 status_manager.go:851] "Failed to get status for pod" podUID="f92d48d9-4ed9-42bb-b811-a8f43bbac2cd" pod="openshift-marketplace/community-operators-kjxh2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kjxh2\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.828368 4893 status_manager.go:851] "Failed to get status for pod" podUID="be44c297-715e-45f6-b165-244c39484f15" pod="openshift-marketplace/community-operators-t86sj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-t86sj\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.828617 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kjxh2" event={"ID":"f92d48d9-4ed9-42bb-b811-a8f43bbac2cd","Type":"ContainerStarted","Data":"cd461098d345efc65577c6fc1041721da0a632d6c041e08785c685e173f1e82d"}
Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.828615 4893 status_manager.go:851] "Failed to get status for pod" podUID="4d9545d1-113a-4985-ad77-2bd1bd45ec7d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.830578 4893 status_manager.go:851] "Failed to get status for pod" podUID="58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06" pod="openshift-marketplace/redhat-marketplace-dpt49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpt49\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.830888 4893 status_manager.go:851] "Failed to get status for pod" podUID="2514419f-4c60-442d-bbc7-0c9b8c765cc4" pod="openshift-marketplace/certified-operators-8bgpm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8bgpm\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.832348 4893 status_manager.go:851] "Failed to get status for pod" podUID="78a7ed86-0417-446d-aeaa-b71f6beb71ec" pod="openshift-marketplace/redhat-marketplace-ztll7" err="Get
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ztll7\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.833071 4893 status_manager.go:851] "Failed to get status for pod" podUID="f92d48d9-4ed9-42bb-b811-a8f43bbac2cd" pod="openshift-marketplace/community-operators-kjxh2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kjxh2\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.834801 4893 status_manager.go:851] "Failed to get status for pod" podUID="be44c297-715e-45f6-b165-244c39484f15" pod="openshift-marketplace/community-operators-t86sj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-t86sj\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.834999 4893 status_manager.go:851] "Failed to get status for pod" podUID="4d9545d1-113a-4985-ad77-2bd1bd45ec7d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.835197 4893 status_manager.go:851] "Failed to get status for pod" podUID="76395561-db8b-4fac-a5fd-14267030252a" pod="openshift-marketplace/redhat-operators-4s5jn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4s5jn\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.879562 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8bgpm" Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.880287 4893 status_manager.go:851] "Failed to get status for pod" podUID="78a7ed86-0417-446d-aeaa-b71f6beb71ec" pod="openshift-marketplace/redhat-marketplace-ztll7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ztll7\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.880901 4893 status_manager.go:851] "Failed to get status for pod" podUID="f92d48d9-4ed9-42bb-b811-a8f43bbac2cd" pod="openshift-marketplace/community-operators-kjxh2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kjxh2\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.881346 4893 status_manager.go:851] "Failed to get status for pod" podUID="be44c297-715e-45f6-b165-244c39484f15" pod="openshift-marketplace/community-operators-t86sj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-t86sj\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.881761 4893 status_manager.go:851] "Failed to get status for pod" podUID="4d9545d1-113a-4985-ad77-2bd1bd45ec7d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 
06:59:09.882291 4893 status_manager.go:851] "Failed to get status for pod" podUID="76395561-db8b-4fac-a5fd-14267030252a" pod="openshift-marketplace/redhat-operators-4s5jn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4s5jn\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.882811 4893 status_manager.go:851] "Failed to get status for pod" podUID="58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06" pod="openshift-marketplace/redhat-marketplace-dpt49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpt49\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:09 crc kubenswrapper[4893]: I0121 06:59:09.883110 4893 status_manager.go:851] "Failed to get status for pod" podUID="2514419f-4c60-442d-bbc7-0c9b8c765cc4" pod="openshift-marketplace/certified-operators-8bgpm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8bgpm\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:10 crc kubenswrapper[4893]: I0121 06:59:10.889787 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ztll7" Jan 21 06:59:10 crc kubenswrapper[4893]: I0121 06:59:10.890167 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dpt49" Jan 21 06:59:10 crc kubenswrapper[4893]: I0121 06:59:10.890208 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ztll7" Jan 21 06:59:10 crc kubenswrapper[4893]: I0121 06:59:10.890219 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dpt49" Jan 21 06:59:11 crc kubenswrapper[4893]: I0121 06:59:11.774329 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6" podUID="ebd2435f-03d5-4495-aec1-4118d79aec19" containerName="oauth-openshift" containerID="cri-o://34cbb8aac74ba397dd10684e693c1248620aa48316b2cbf05203729160336fa0" gracePeriod=15 Jan 21 06:59:11 crc kubenswrapper[4893]: I0121 06:59:11.830281 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gv7xc" Jan 21 06:59:11 crc kubenswrapper[4893]: I0121 06:59:11.831358 4893 status_manager.go:851] "Failed to get status for pod" podUID="78a7ed86-0417-446d-aeaa-b71f6beb71ec" pod="openshift-marketplace/redhat-marketplace-ztll7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ztll7\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:11 crc kubenswrapper[4893]: I0121 06:59:11.831784 4893 status_manager.go:851] "Failed to get status for pod" podUID="f92d48d9-4ed9-42bb-b811-a8f43bbac2cd" pod="openshift-marketplace/community-operators-kjxh2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kjxh2\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:11 crc kubenswrapper[4893]: I0121 06:59:11.831957 4893 status_manager.go:851] "Failed to get status for pod" podUID="be44c297-715e-45f6-b165-244c39484f15" pod="openshift-marketplace/community-operators-t86sj" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-t86sj\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:11 crc kubenswrapper[4893]: I0121 06:59:11.832116 4893 status_manager.go:851] "Failed to get status for pod" podUID="15ac06c3-345b-4ced-8c19-2edf0c831b70" pod="openshift-marketplace/redhat-operators-gv7xc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gv7xc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:11 crc kubenswrapper[4893]: I0121 06:59:11.832272 4893 status_manager.go:851] "Failed to get status for pod" podUID="4d9545d1-113a-4985-ad77-2bd1bd45ec7d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:11 crc kubenswrapper[4893]: I0121 06:59:11.832430 4893 status_manager.go:851] "Failed to get status for pod" podUID="76395561-db8b-4fac-a5fd-14267030252a" pod="openshift-marketplace/redhat-operators-4s5jn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4s5jn\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:11 crc kubenswrapper[4893]: I0121 06:59:11.832623 4893 status_manager.go:851] "Failed to get status for pod" podUID="58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06" pod="openshift-marketplace/redhat-marketplace-dpt49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpt49\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:11 crc kubenswrapper[4893]: I0121 06:59:11.832824 4893 status_manager.go:851] "Failed to get status for pod" podUID="2514419f-4c60-442d-bbc7-0c9b8c765cc4" pod="openshift-marketplace/certified-operators-8bgpm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8bgpm\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:11 crc kubenswrapper[4893]: I0121 06:59:11.877893 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gv7xc" Jan 21 06:59:11 crc kubenswrapper[4893]: I0121 06:59:11.878493 4893 status_manager.go:851] "Failed to get status for pod" podUID="4d9545d1-113a-4985-ad77-2bd1bd45ec7d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:11 crc kubenswrapper[4893]: I0121 06:59:11.878724 4893 status_manager.go:851] "Failed to get status for pod" podUID="76395561-db8b-4fac-a5fd-14267030252a" pod="openshift-marketplace/redhat-operators-4s5jn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4s5jn\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:11 crc kubenswrapper[4893]: I0121 06:59:11.878953 4893 status_manager.go:851] "Failed to get status for pod" podUID="58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06" pod="openshift-marketplace/redhat-marketplace-dpt49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpt49\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:11 crc kubenswrapper[4893]: I0121 06:59:11.879145 
4893 status_manager.go:851] "Failed to get status for pod" podUID="2514419f-4c60-442d-bbc7-0c9b8c765cc4" pod="openshift-marketplace/certified-operators-8bgpm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8bgpm\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:11 crc kubenswrapper[4893]: I0121 06:59:11.879357 4893 status_manager.go:851] "Failed to get status for pod" podUID="78a7ed86-0417-446d-aeaa-b71f6beb71ec" pod="openshift-marketplace/redhat-marketplace-ztll7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ztll7\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:11 crc kubenswrapper[4893]: I0121 06:59:11.879550 4893 status_manager.go:851] "Failed to get status for pod" podUID="be44c297-715e-45f6-b165-244c39484f15" pod="openshift-marketplace/community-operators-t86sj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-t86sj\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:11 crc kubenswrapper[4893]: I0121 06:59:11.879769 4893 status_manager.go:851] "Failed to get status for pod" podUID="15ac06c3-345b-4ced-8c19-2edf0c831b70" pod="openshift-marketplace/redhat-operators-gv7xc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gv7xc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:11 crc kubenswrapper[4893]: I0121 06:59:11.879997 4893 status_manager.go:851] "Failed to get status for pod" podUID="f92d48d9-4ed9-42bb-b811-a8f43bbac2cd" pod="openshift-marketplace/community-operators-kjxh2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kjxh2\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:11 crc kubenswrapper[4893]: E0121 06:59:11.893833 4893 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:11 crc kubenswrapper[4893]: E0121 06:59:11.894008 4893 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:11 crc kubenswrapper[4893]: E0121 06:59:11.894184 4893 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:11 crc kubenswrapper[4893]: E0121 06:59:11.894336 4893 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:11 crc kubenswrapper[4893]: E0121 06:59:11.894482 4893 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:11 crc kubenswrapper[4893]: I0121 06:59:11.894511 4893 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" 
err="failed 5 attempts to update lease" Jan 21 06:59:11 crc kubenswrapper[4893]: E0121 06:59:11.894641 4893 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="200ms" Jan 21 06:59:11 crc kubenswrapper[4893]: I0121 06:59:11.948799 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-dpt49" podUID="58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06" containerName="registry-server" probeResult="failure" output=< Jan 21 06:59:11 crc kubenswrapper[4893]: timeout: failed to connect service ":50051" within 1s Jan 21 06:59:11 crc kubenswrapper[4893]: > Jan 21 06:59:11 crc kubenswrapper[4893]: I0121 06:59:11.949596 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-ztll7" podUID="78a7ed86-0417-446d-aeaa-b71f6beb71ec" containerName="registry-server" probeResult="failure" output=< Jan 21 06:59:11 crc kubenswrapper[4893]: timeout: failed to connect service ":50051" within 1s Jan 21 06:59:11 crc kubenswrapper[4893]: > Jan 21 06:59:12 crc kubenswrapper[4893]: E0121 06:59:12.095589 4893 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="400ms" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.327840 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4s5jn" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.462373 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4s5jn" Jan 21 06:59:12 crc kubenswrapper[4893]: E0121 06:59:12.496736 4893 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="800ms" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.546447 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.547308 4893 status_manager.go:851] "Failed to get status for pod" podUID="58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06" pod="openshift-marketplace/redhat-marketplace-dpt49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpt49\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.547896 4893 status_manager.go:851] "Failed to get status for pod" podUID="2514419f-4c60-442d-bbc7-0c9b8c765cc4" pod="openshift-marketplace/certified-operators-8bgpm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8bgpm\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.548331 4893 status_manager.go:851] "Failed to get status for pod" podUID="ebd2435f-03d5-4495-aec1-4118d79aec19" pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-q7qn6\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.548589 4893 status_manager.go:851] "Failed to get status for pod" podUID="78a7ed86-0417-446d-aeaa-b71f6beb71ec" pod="openshift-marketplace/redhat-marketplace-ztll7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ztll7\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.549376 4893 status_manager.go:851] "Failed to get status for pod" podUID="15ac06c3-345b-4ced-8c19-2edf0c831b70" pod="openshift-marketplace/redhat-operators-gv7xc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gv7xc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.549915 4893 status_manager.go:851] "Failed to get status for pod" podUID="f92d48d9-4ed9-42bb-b811-a8f43bbac2cd" pod="openshift-marketplace/community-operators-kjxh2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kjxh2\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.550188 4893 status_manager.go:851] "Failed to get status for pod" podUID="be44c297-715e-45f6-b165-244c39484f15" pod="openshift-marketplace/community-operators-t86sj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-t86sj\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.550469 4893 status_manager.go:851] "Failed to get status for pod" podUID="4d9545d1-113a-4985-ad77-2bd1bd45ec7d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.550787 4893 status_manager.go:851] "Failed to get status for pod" podUID="76395561-db8b-4fac-a5fd-14267030252a" pod="openshift-marketplace/redhat-operators-4s5jn" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4s5jn\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.565157 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-system-cliconfig\") pod \"ebd2435f-03d5-4495-aec1-4118d79aec19\" (UID: \"ebd2435f-03d5-4495-aec1-4118d79aec19\") " Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.565296 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-system-router-certs\") pod \"ebd2435f-03d5-4495-aec1-4118d79aec19\" (UID: \"ebd2435f-03d5-4495-aec1-4118d79aec19\") " Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.565329 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-user-template-error\") pod \"ebd2435f-03d5-4495-aec1-4118d79aec19\" (UID: \"ebd2435f-03d5-4495-aec1-4118d79aec19\") " Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.565380 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-system-ocp-branding-template\") pod \"ebd2435f-03d5-4495-aec1-4118d79aec19\" (UID: \"ebd2435f-03d5-4495-aec1-4118d79aec19\") " Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.565441 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-system-trusted-ca-bundle\") pod \"ebd2435f-03d5-4495-aec1-4118d79aec19\" (UID: \"ebd2435f-03d5-4495-aec1-4118d79aec19\") " Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.565531 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ebd2435f-03d5-4495-aec1-4118d79aec19-audit-dir\") pod \"ebd2435f-03d5-4495-aec1-4118d79aec19\" (UID: \"ebd2435f-03d5-4495-aec1-4118d79aec19\") " Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.565619 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-system-session\") pod \"ebd2435f-03d5-4495-aec1-4118d79aec19\" (UID: \"ebd2435f-03d5-4495-aec1-4118d79aec19\") " Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.565648 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-user-template-provider-selection\") pod \"ebd2435f-03d5-4495-aec1-4118d79aec19\" (UID: \"ebd2435f-03d5-4495-aec1-4118d79aec19\") " Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.565713 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zp8wg\" (UniqueName: \"kubernetes.io/projected/ebd2435f-03d5-4495-aec1-4118d79aec19-kube-api-access-zp8wg\") pod 
\"ebd2435f-03d5-4495-aec1-4118d79aec19\" (UID: \"ebd2435f-03d5-4495-aec1-4118d79aec19\") " Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.565791 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-system-service-ca\") pod \"ebd2435f-03d5-4495-aec1-4118d79aec19\" (UID: \"ebd2435f-03d5-4495-aec1-4118d79aec19\") " Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.565827 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ebd2435f-03d5-4495-aec1-4118d79aec19-audit-policies\") pod \"ebd2435f-03d5-4495-aec1-4118d79aec19\" (UID: \"ebd2435f-03d5-4495-aec1-4118d79aec19\") " Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.565843 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebd2435f-03d5-4495-aec1-4118d79aec19-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "ebd2435f-03d5-4495-aec1-4118d79aec19" (UID: "ebd2435f-03d5-4495-aec1-4118d79aec19"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.565877 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-system-serving-cert\") pod \"ebd2435f-03d5-4495-aec1-4118d79aec19\" (UID: \"ebd2435f-03d5-4495-aec1-4118d79aec19\") " Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.566041 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-user-idp-0-file-data\") pod \"ebd2435f-03d5-4495-aec1-4118d79aec19\" (UID: \"ebd2435f-03d5-4495-aec1-4118d79aec19\") " Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.566126 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-user-template-login\") pod \"ebd2435f-03d5-4495-aec1-4118d79aec19\" (UID: \"ebd2435f-03d5-4495-aec1-4118d79aec19\") " Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.566312 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "ebd2435f-03d5-4495-aec1-4118d79aec19" (UID: "ebd2435f-03d5-4495-aec1-4118d79aec19"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.566816 4893 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ebd2435f-03d5-4495-aec1-4118d79aec19-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.566857 4893 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.566921 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebd2435f-03d5-4495-aec1-4118d79aec19-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "ebd2435f-03d5-4495-aec1-4118d79aec19" (UID: "ebd2435f-03d5-4495-aec1-4118d79aec19"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.566948 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "ebd2435f-03d5-4495-aec1-4118d79aec19" (UID: "ebd2435f-03d5-4495-aec1-4118d79aec19"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.568881 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "ebd2435f-03d5-4495-aec1-4118d79aec19" (UID: "ebd2435f-03d5-4495-aec1-4118d79aec19"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.572000 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "ebd2435f-03d5-4495-aec1-4118d79aec19" (UID: "ebd2435f-03d5-4495-aec1-4118d79aec19"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.572641 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "ebd2435f-03d5-4495-aec1-4118d79aec19" (UID: "ebd2435f-03d5-4495-aec1-4118d79aec19"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.572645 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "ebd2435f-03d5-4495-aec1-4118d79aec19" (UID: "ebd2435f-03d5-4495-aec1-4118d79aec19"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.573186 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebd2435f-03d5-4495-aec1-4118d79aec19-kube-api-access-zp8wg" (OuterVolumeSpecName: "kube-api-access-zp8wg") pod "ebd2435f-03d5-4495-aec1-4118d79aec19" (UID: "ebd2435f-03d5-4495-aec1-4118d79aec19"). InnerVolumeSpecName "kube-api-access-zp8wg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.573470 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "ebd2435f-03d5-4495-aec1-4118d79aec19" (UID: "ebd2435f-03d5-4495-aec1-4118d79aec19"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.574276 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "ebd2435f-03d5-4495-aec1-4118d79aec19" (UID: "ebd2435f-03d5-4495-aec1-4118d79aec19"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.579685 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "ebd2435f-03d5-4495-aec1-4118d79aec19" (UID: "ebd2435f-03d5-4495-aec1-4118d79aec19"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.584250 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "ebd2435f-03d5-4495-aec1-4118d79aec19" (UID: "ebd2435f-03d5-4495-aec1-4118d79aec19"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.584664 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "ebd2435f-03d5-4495-aec1-4118d79aec19" (UID: "ebd2435f-03d5-4495-aec1-4118d79aec19"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.676335 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zp8wg\" (UniqueName: \"kubernetes.io/projected/ebd2435f-03d5-4495-aec1-4118d79aec19-kube-api-access-zp8wg\") on node \"crc\" DevicePath \"\"" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.676405 4893 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.676419 4893 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ebd2435f-03d5-4495-aec1-4118d79aec19-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.676432 4893 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.676447 4893 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.676459 4893 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.676469 4893 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.676482 4893 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.676503 4893 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.676525 4893 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.676549 4893 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.676570 4893 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/ebd2435f-03d5-4495-aec1-4118d79aec19-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.919139 4893 generic.go:334] "Generic (PLEG): container finished" podID="ebd2435f-03d5-4495-aec1-4118d79aec19" containerID="34cbb8aac74ba397dd10684e693c1248620aa48316b2cbf05203729160336fa0" exitCode=0 Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.919241 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6" event={"ID":"ebd2435f-03d5-4495-aec1-4118d79aec19","Type":"ContainerDied","Data":"34cbb8aac74ba397dd10684e693c1248620aa48316b2cbf05203729160336fa0"} Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.919267 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.919307 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6" event={"ID":"ebd2435f-03d5-4495-aec1-4118d79aec19","Type":"ContainerDied","Data":"0e43e8123d21672175055698b133a4ec9e117e20f4b72ac1f95cc56705aec2fa"} Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.919330 4893 scope.go:117] "RemoveContainer" containerID="34cbb8aac74ba397dd10684e693c1248620aa48316b2cbf05203729160336fa0" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.920180 4893 status_manager.go:851] "Failed to get status for pod" podUID="76395561-db8b-4fac-a5fd-14267030252a" pod="openshift-marketplace/redhat-operators-4s5jn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4s5jn\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.920408 4893 status_manager.go:851] "Failed to get status for pod" podUID="58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06" pod="openshift-marketplace/redhat-marketplace-dpt49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpt49\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.920754 4893 status_manager.go:851] "Failed to get status for pod" podUID="2514419f-4c60-442d-bbc7-0c9b8c765cc4" pod="openshift-marketplace/certified-operators-8bgpm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8bgpm\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.921156 4893 status_manager.go:851] "Failed to get status for pod" podUID="ebd2435f-03d5-4495-aec1-4118d79aec19" pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-q7qn6\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.921466 4893 status_manager.go:851] "Failed to get status for pod" podUID="78a7ed86-0417-446d-aeaa-b71f6beb71ec" pod="openshift-marketplace/redhat-marketplace-ztll7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ztll7\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.921764 4893 status_manager.go:851] "Failed to get status 
for pod" podUID="f92d48d9-4ed9-42bb-b811-a8f43bbac2cd" pod="openshift-marketplace/community-operators-kjxh2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kjxh2\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.922004 4893 status_manager.go:851] "Failed to get status for pod" podUID="be44c297-715e-45f6-b165-244c39484f15" pod="openshift-marketplace/community-operators-t86sj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-t86sj\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.922200 4893 status_manager.go:851] "Failed to get status for pod" podUID="15ac06c3-345b-4ced-8c19-2edf0c831b70" pod="openshift-marketplace/redhat-operators-gv7xc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gv7xc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.922378 4893 status_manager.go:851] "Failed to get status for pod" podUID="4d9545d1-113a-4985-ad77-2bd1bd45ec7d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.936205 4893 status_manager.go:851] "Failed to get status for pod" podUID="15ac06c3-345b-4ced-8c19-2edf0c831b70" pod="openshift-marketplace/redhat-operators-gv7xc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gv7xc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.936829 4893 status_manager.go:851] "Failed to get status for pod" podUID="f92d48d9-4ed9-42bb-b811-a8f43bbac2cd" pod="openshift-marketplace/community-operators-kjxh2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kjxh2\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.936879 4893 scope.go:117] "RemoveContainer" containerID="34cbb8aac74ba397dd10684e693c1248620aa48316b2cbf05203729160336fa0" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.937473 4893 status_manager.go:851] "Failed to get status for pod" podUID="be44c297-715e-45f6-b165-244c39484f15" pod="openshift-marketplace/community-operators-t86sj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-t86sj\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:12 crc kubenswrapper[4893]: E0121 06:59:12.937551 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34cbb8aac74ba397dd10684e693c1248620aa48316b2cbf05203729160336fa0\": container with ID starting with 34cbb8aac74ba397dd10684e693c1248620aa48316b2cbf05203729160336fa0 not found: ID does not exist" containerID="34cbb8aac74ba397dd10684e693c1248620aa48316b2cbf05203729160336fa0" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.937765 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34cbb8aac74ba397dd10684e693c1248620aa48316b2cbf05203729160336fa0"} err="failed to get container 
status \"34cbb8aac74ba397dd10684e693c1248620aa48316b2cbf05203729160336fa0\": rpc error: code = NotFound desc = could not find container \"34cbb8aac74ba397dd10684e693c1248620aa48316b2cbf05203729160336fa0\": container with ID starting with 34cbb8aac74ba397dd10684e693c1248620aa48316b2cbf05203729160336fa0 not found: ID does not exist" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.937974 4893 status_manager.go:851] "Failed to get status for pod" podUID="4d9545d1-113a-4985-ad77-2bd1bd45ec7d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.938259 4893 status_manager.go:851] "Failed to get status for pod" podUID="76395561-db8b-4fac-a5fd-14267030252a" pod="openshift-marketplace/redhat-operators-4s5jn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4s5jn\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.938619 4893 status_manager.go:851] "Failed to get status for pod" podUID="58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06" pod="openshift-marketplace/redhat-marketplace-dpt49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpt49\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.938864 4893 status_manager.go:851] "Failed to get status for pod" podUID="2514419f-4c60-442d-bbc7-0c9b8c765cc4" pod="openshift-marketplace/certified-operators-8bgpm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8bgpm\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.939178 4893 status_manager.go:851] "Failed to get status for pod" podUID="ebd2435f-03d5-4495-aec1-4118d79aec19" pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-q7qn6\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:12 crc kubenswrapper[4893]: I0121 06:59:12.939481 4893 status_manager.go:851] "Failed to get status for pod" podUID="78a7ed86-0417-446d-aeaa-b71f6beb71ec" pod="openshift-marketplace/redhat-marketplace-ztll7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ztll7\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:13 crc kubenswrapper[4893]: E0121 06:59:13.252870 4893 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.246:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188caccbd25121ab openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 06:59:05.056633259 +0000 UTC m=+286.286979161,LastTimestamp:2026-01-21 06:59:05.056633259 +0000 UTC m=+286.286979161,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 06:59:13 crc kubenswrapper[4893]: E0121 06:59:13.297437 4893 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="1.6s" Jan 21 06:59:13 crc kubenswrapper[4893]: I0121 06:59:13.525238 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4s5jn" podUID="76395561-db8b-4fac-a5fd-14267030252a" containerName="registry-server" probeResult="failure" output=< Jan 21 06:59:13 crc kubenswrapper[4893]: timeout: failed to connect service ":50051" within 1s Jan 21 06:59:13 crc kubenswrapper[4893]: > Jan 21 06:59:14 crc kubenswrapper[4893]: E0121 06:59:14.898366 4893 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="3.2s" Jan 21 06:59:16 crc kubenswrapper[4893]: E0121 06:59:16.305781 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:59:16Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:59:16Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:59:16Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:59:16Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:020b5bee2bbd09fbf64a1af808628bb76e9c70b9efdc49f38e5a50641590514c\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:78f8ee56f09c047b3acd7e5b6b8a0f9534952f418b658c9f5a6d45d12546e67c\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1670570239},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[],\\\"sizeBytes\\\":1178956511},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:aae73aa11d44b8831c829464aa5515a56a9a8
ef17926d54a010e0e9215ecd643\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:cd24673e95503ac856405941c96e75f11ca6da85fe80950e0dd00bb1062f9f47\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1166891762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b
3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\
"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:16 crc kubenswrapper[4893]: E0121 06:59:16.306852 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:16 crc kubenswrapper[4893]: E0121 06:59:16.307463 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:16 crc kubenswrapper[4893]: E0121 06:59:16.307810 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:16 crc kubenswrapper[4893]: E0121 06:59:16.308245 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:16 crc kubenswrapper[4893]: E0121 06:59:16.308282 4893 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 06:59:18 crc kubenswrapper[4893]: E0121 06:59:18.100096 4893 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="6.4s" Jan 21 06:59:18 crc kubenswrapper[4893]: I0121 06:59:18.519943 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kjxh2" Jan 21 06:59:18 crc kubenswrapper[4893]: I0121 06:59:18.520081 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kjxh2" Jan 21 06:59:18 crc kubenswrapper[4893]: I0121 06:59:18.572507 4893 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kjxh2" Jan 21 06:59:18 crc kubenswrapper[4893]: I0121 06:59:18.573219 4893 status_manager.go:851] "Failed to get status for pod" podUID="78a7ed86-0417-446d-aeaa-b71f6beb71ec" pod="openshift-marketplace/redhat-marketplace-ztll7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ztll7\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:18 crc kubenswrapper[4893]: I0121 06:59:18.573583 4893 status_manager.go:851] "Failed to get status for pod" podUID="f92d48d9-4ed9-42bb-b811-a8f43bbac2cd" pod="openshift-marketplace/community-operators-kjxh2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kjxh2\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:18 crc kubenswrapper[4893]: I0121 06:59:18.573846 4893 status_manager.go:851] "Failed to get status for pod" podUID="be44c297-715e-45f6-b165-244c39484f15" pod="openshift-marketplace/community-operators-t86sj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-t86sj\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:18 crc kubenswrapper[4893]: I0121 06:59:18.574100 4893 status_manager.go:851] "Failed to get status for pod" podUID="15ac06c3-345b-4ced-8c19-2edf0c831b70" pod="openshift-marketplace/redhat-operators-gv7xc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gv7xc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:18 crc kubenswrapper[4893]: I0121 06:59:18.574397 4893 status_manager.go:851] "Failed to get status for pod" podUID="4d9545d1-113a-4985-ad77-2bd1bd45ec7d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:18 crc kubenswrapper[4893]: I0121 06:59:18.574648 4893 status_manager.go:851] "Failed to get status for pod" podUID="76395561-db8b-4fac-a5fd-14267030252a" pod="openshift-marketplace/redhat-operators-4s5jn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4s5jn\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:18 crc kubenswrapper[4893]: I0121 06:59:18.574916 4893 status_manager.go:851] "Failed to get status for pod" podUID="58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06" pod="openshift-marketplace/redhat-marketplace-dpt49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpt49\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:18 crc kubenswrapper[4893]: I0121 06:59:18.575150 4893 status_manager.go:851] "Failed to get status for pod" podUID="2514419f-4c60-442d-bbc7-0c9b8c765cc4" pod="openshift-marketplace/certified-operators-8bgpm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8bgpm\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:18 crc kubenswrapper[4893]: I0121 06:59:18.575406 4893 status_manager.go:851] "Failed to get status for pod" podUID="ebd2435f-03d5-4495-aec1-4118d79aec19" pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-q7qn6\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:18 crc kubenswrapper[4893]: I0121 06:59:18.581606 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:59:18 crc kubenswrapper[4893]: I0121 06:59:18.582438 4893 status_manager.go:851] "Failed to get status for pod" podUID="ebd2435f-03d5-4495-aec1-4118d79aec19" pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-q7qn6\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:18 crc kubenswrapper[4893]: I0121 06:59:18.582737 4893 status_manager.go:851] "Failed to get status for pod" podUID="78a7ed86-0417-446d-aeaa-b71f6beb71ec" pod="openshift-marketplace/redhat-marketplace-ztll7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ztll7\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:18 crc kubenswrapper[4893]: I0121 06:59:18.583032 4893 status_manager.go:851] "Failed to get status for pod" podUID="f92d48d9-4ed9-42bb-b811-a8f43bbac2cd" pod="openshift-marketplace/community-operators-kjxh2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kjxh2\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:18 crc kubenswrapper[4893]: I0121 06:59:18.583276 4893 status_manager.go:851] "Failed to get status for pod" podUID="be44c297-715e-45f6-b165-244c39484f15" pod="openshift-marketplace/community-operators-t86sj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-t86sj\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:18 crc kubenswrapper[4893]: I0121 06:59:18.583685 4893 status_manager.go:851] "Failed to get status for pod" podUID="15ac06c3-345b-4ced-8c19-2edf0c831b70" pod="openshift-marketplace/redhat-operators-gv7xc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gv7xc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:18 crc kubenswrapper[4893]: I0121 06:59:18.584261 4893 status_manager.go:851] "Failed to get status for pod" podUID="4d9545d1-113a-4985-ad77-2bd1bd45ec7d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:18 crc kubenswrapper[4893]: I0121 06:59:18.584504 4893 status_manager.go:851] "Failed to get status for pod" podUID="76395561-db8b-4fac-a5fd-14267030252a" pod="openshift-marketplace/redhat-operators-4s5jn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4s5jn\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:18 crc kubenswrapper[4893]: I0121 06:59:18.584897 4893 status_manager.go:851] "Failed to get status for pod" podUID="58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06" pod="openshift-marketplace/redhat-marketplace-dpt49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpt49\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:18 
crc kubenswrapper[4893]: I0121 06:59:18.585147 4893 status_manager.go:851] "Failed to get status for pod" podUID="2514419f-4c60-442d-bbc7-0c9b8c765cc4" pod="openshift-marketplace/certified-operators-8bgpm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8bgpm\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:18 crc kubenswrapper[4893]: I0121 06:59:18.621269 4893 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2101f59b-4610-4451-83eb-86fe80385cf2" Jan 21 06:59:18 crc kubenswrapper[4893]: I0121 06:59:18.621323 4893 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2101f59b-4610-4451-83eb-86fe80385cf2" Jan 21 06:59:18 crc kubenswrapper[4893]: E0121 06:59:18.621898 4893 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:59:18 crc kubenswrapper[4893]: I0121 06:59:18.622473 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:59:18 crc kubenswrapper[4893]: W0121 06:59:18.648326 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-4f847a8d9ce0d49ae05ddc7b396248c55b6945cb4d740fbe4b8f31260eafc815 WatchSource:0}: Error finding container 4f847a8d9ce0d49ae05ddc7b396248c55b6945cb4d740fbe4b8f31260eafc815: Status 404 returned error can't find the container with id 4f847a8d9ce0d49ae05ddc7b396248c55b6945cb4d740fbe4b8f31260eafc815 Jan 21 06:59:18 crc kubenswrapper[4893]: I0121 06:59:18.696376 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-t86sj" Jan 21 06:59:18 crc kubenswrapper[4893]: I0121 06:59:18.696451 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-t86sj" Jan 21 06:59:18 crc kubenswrapper[4893]: I0121 06:59:18.740364 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-t86sj" Jan 21 06:59:18 crc kubenswrapper[4893]: I0121 06:59:18.741114 4893 status_manager.go:851] "Failed to get status for pod" podUID="58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06" pod="openshift-marketplace/redhat-marketplace-dpt49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpt49\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:18 crc kubenswrapper[4893]: I0121 06:59:18.741455 4893 status_manager.go:851] "Failed to get status for pod" podUID="2514419f-4c60-442d-bbc7-0c9b8c765cc4" pod="openshift-marketplace/certified-operators-8bgpm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8bgpm\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:18 crc kubenswrapper[4893]: I0121 06:59:18.741809 4893 status_manager.go:851] "Failed to get status for pod" podUID="ebd2435f-03d5-4495-aec1-4118d79aec19" pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-q7qn6\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:18 crc kubenswrapper[4893]: I0121 06:59:18.742046 4893 status_manager.go:851] "Failed to get status for pod" podUID="78a7ed86-0417-446d-aeaa-b71f6beb71ec" pod="openshift-marketplace/redhat-marketplace-ztll7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ztll7\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:18 crc kubenswrapper[4893]: I0121 06:59:18.742288 4893 status_manager.go:851] "Failed to get status for pod" podUID="f92d48d9-4ed9-42bb-b811-a8f43bbac2cd" pod="openshift-marketplace/community-operators-kjxh2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kjxh2\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:18 crc kubenswrapper[4893]: I0121 06:59:18.742497 4893 status_manager.go:851] "Failed to get status for pod" podUID="be44c297-715e-45f6-b165-244c39484f15" pod="openshift-marketplace/community-operators-t86sj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-t86sj\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:18 crc kubenswrapper[4893]: I0121 06:59:18.742715 4893 status_manager.go:851] "Failed to get status for pod" podUID="15ac06c3-345b-4ced-8c19-2edf0c831b70" pod="openshift-marketplace/redhat-operators-gv7xc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gv7xc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:18 crc kubenswrapper[4893]: I0121 06:59:18.742968 4893 status_manager.go:851] "Failed to get status for pod" podUID="4d9545d1-113a-4985-ad77-2bd1bd45ec7d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:18 crc kubenswrapper[4893]: I0121 06:59:18.743198 4893 status_manager.go:851] "Failed to get status for pod" podUID="76395561-db8b-4fac-a5fd-14267030252a" pod="openshift-marketplace/redhat-operators-4s5jn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4s5jn\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:18 crc kubenswrapper[4893]: I0121 06:59:18.958200 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4f847a8d9ce0d49ae05ddc7b396248c55b6945cb4d740fbe4b8f31260eafc815"} Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.003144 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kjxh2" Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.004235 4893 status_manager.go:851] "Failed to get status for pod" podUID="76395561-db8b-4fac-a5fd-14267030252a" pod="openshift-marketplace/redhat-operators-4s5jn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4s5jn\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.004690 4893 status_manager.go:851] "Failed to get status for 
pod" podUID="58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06" pod="openshift-marketplace/redhat-marketplace-dpt49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpt49\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.005735 4893 status_manager.go:851] "Failed to get status for pod" podUID="2514419f-4c60-442d-bbc7-0c9b8c765cc4" pod="openshift-marketplace/certified-operators-8bgpm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8bgpm\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.006474 4893 status_manager.go:851] "Failed to get status for pod" podUID="ebd2435f-03d5-4495-aec1-4118d79aec19" pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-q7qn6\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.006874 4893 status_manager.go:851] "Failed to get status for pod" podUID="78a7ed86-0417-446d-aeaa-b71f6beb71ec" pod="openshift-marketplace/redhat-marketplace-ztll7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ztll7\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.007238 4893 status_manager.go:851] "Failed to get status for pod" podUID="f92d48d9-4ed9-42bb-b811-a8f43bbac2cd" pod="openshift-marketplace/community-operators-kjxh2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kjxh2\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.007613 4893 status_manager.go:851] "Failed to get status for pod" podUID="be44c297-715e-45f6-b165-244c39484f15" pod="openshift-marketplace/community-operators-t86sj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-t86sj\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.007889 4893 status_manager.go:851] "Failed to get status for pod" podUID="15ac06c3-345b-4ced-8c19-2edf0c831b70" pod="openshift-marketplace/redhat-operators-gv7xc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gv7xc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.008102 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-t86sj" Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.008308 4893 status_manager.go:851] "Failed to get status for pod" podUID="4d9545d1-113a-4985-ad77-2bd1bd45ec7d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.009039 4893 status_manager.go:851] "Failed to get status for pod" podUID="76395561-db8b-4fac-a5fd-14267030252a" pod="openshift-marketplace/redhat-operators-4s5jn" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4s5jn\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.009770 4893 status_manager.go:851] "Failed to get status for pod" podUID="58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06" pod="openshift-marketplace/redhat-marketplace-dpt49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpt49\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.010425 4893 status_manager.go:851] "Failed to get status for pod" podUID="2514419f-4c60-442d-bbc7-0c9b8c765cc4" pod="openshift-marketplace/certified-operators-8bgpm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8bgpm\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.010840 4893 status_manager.go:851] "Failed to get status for pod" podUID="ebd2435f-03d5-4495-aec1-4118d79aec19" pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-q7qn6\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.011183 4893 status_manager.go:851] "Failed to get status for pod" podUID="78a7ed86-0417-446d-aeaa-b71f6beb71ec" pod="openshift-marketplace/redhat-marketplace-ztll7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ztll7\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.011490 4893 status_manager.go:851] "Failed to get status for pod" podUID="15ac06c3-345b-4ced-8c19-2edf0c831b70" pod="openshift-marketplace/redhat-operators-gv7xc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gv7xc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.011877 4893 status_manager.go:851] "Failed to get status for pod" podUID="f92d48d9-4ed9-42bb-b811-a8f43bbac2cd" pod="openshift-marketplace/community-operators-kjxh2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kjxh2\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.012201 4893 status_manager.go:851] "Failed to get status for pod" podUID="be44c297-715e-45f6-b165-244c39484f15" pod="openshift-marketplace/community-operators-t86sj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-t86sj\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.012522 4893 status_manager.go:851] "Failed to get status for pod" podUID="4d9545d1-113a-4985-ad77-2bd1bd45ec7d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.441969 4893 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 21 06:59:19 crc 
kubenswrapper[4893]: I0121 06:59:19.589057 4893 status_manager.go:851] "Failed to get status for pod" podUID="ebd2435f-03d5-4495-aec1-4118d79aec19" pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-q7qn6\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.589827 4893 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.590424 4893 status_manager.go:851] "Failed to get status for pod" podUID="78a7ed86-0417-446d-aeaa-b71f6beb71ec" pod="openshift-marketplace/redhat-marketplace-ztll7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ztll7\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.590936 4893 status_manager.go:851] "Failed to get status for pod" podUID="f92d48d9-4ed9-42bb-b811-a8f43bbac2cd" pod="openshift-marketplace/community-operators-kjxh2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kjxh2\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.591229 4893 status_manager.go:851] "Failed to get status for pod" podUID="be44c297-715e-45f6-b165-244c39484f15" pod="openshift-marketplace/community-operators-t86sj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-t86sj\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.591580 4893 status_manager.go:851] "Failed to get status for pod" podUID="15ac06c3-345b-4ced-8c19-2edf0c831b70" pod="openshift-marketplace/redhat-operators-gv7xc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gv7xc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.593511 4893 status_manager.go:851] "Failed to get status for pod" podUID="4d9545d1-113a-4985-ad77-2bd1bd45ec7d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.593905 4893 status_manager.go:851] "Failed to get status for pod" podUID="76395561-db8b-4fac-a5fd-14267030252a" pod="openshift-marketplace/redhat-operators-4s5jn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4s5jn\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.594571 4893 status_manager.go:851] "Failed to get status for pod" podUID="58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06" pod="openshift-marketplace/redhat-marketplace-dpt49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpt49\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:19 
crc kubenswrapper[4893]: I0121 06:59:19.594966 4893 status_manager.go:851] "Failed to get status for pod" podUID="2514419f-4c60-442d-bbc7-0c9b8c765cc4" pod="openshift-marketplace/certified-operators-8bgpm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8bgpm\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.968754 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.968817 4893 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="90e698ff120a5858fa787a65c1bdaa3966dcb8974df9cbca40470f6ec58bca5d" exitCode=1 Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.968881 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"90e698ff120a5858fa787a65c1bdaa3966dcb8974df9cbca40470f6ec58bca5d"} Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.969401 4893 scope.go:117] "RemoveContainer" containerID="90e698ff120a5858fa787a65c1bdaa3966dcb8974df9cbca40470f6ec58bca5d" Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.969657 4893 status_manager.go:851] "Failed to get status for pod" podUID="ebd2435f-03d5-4495-aec1-4118d79aec19" pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-q7qn6\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.969971 4893 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.970186 4893 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.970430 4893 status_manager.go:851] "Failed to get status for pod" podUID="78a7ed86-0417-446d-aeaa-b71f6beb71ec" pod="openshift-marketplace/redhat-marketplace-ztll7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ztll7\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.970791 4893 status_manager.go:851] "Failed to get status for pod" podUID="15ac06c3-345b-4ced-8c19-2edf0c831b70" pod="openshift-marketplace/redhat-operators-gv7xc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gv7xc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.970985 4893 generic.go:334] "Generic (PLEG): container finished" 
podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="91150784aedbf4e3189d3bacaccf0d2d40a0ee04f7a7e4e3cb324696414d5c9b" exitCode=0 Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.971111 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"91150784aedbf4e3189d3bacaccf0d2d40a0ee04f7a7e4e3cb324696414d5c9b"} Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.971604 4893 status_manager.go:851] "Failed to get status for pod" podUID="f92d48d9-4ed9-42bb-b811-a8f43bbac2cd" pod="openshift-marketplace/community-operators-kjxh2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kjxh2\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.971792 4893 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2101f59b-4610-4451-83eb-86fe80385cf2" Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.971852 4893 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2101f59b-4610-4451-83eb-86fe80385cf2" Jan 21 06:59:19 crc kubenswrapper[4893]: E0121 06:59:19.972158 4893 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.972603 4893 status_manager.go:851] "Failed to get status for pod" podUID="be44c297-715e-45f6-b165-244c39484f15" pod="openshift-marketplace/community-operators-t86sj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-t86sj\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.973436 4893 status_manager.go:851] "Failed to get status for pod" podUID="4d9545d1-113a-4985-ad77-2bd1bd45ec7d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.974167 4893 status_manager.go:851] "Failed to get status for pod" podUID="76395561-db8b-4fac-a5fd-14267030252a" pod="openshift-marketplace/redhat-operators-4s5jn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4s5jn\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.974458 4893 status_manager.go:851] "Failed to get status for pod" podUID="58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06" pod="openshift-marketplace/redhat-marketplace-dpt49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpt49\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.974778 4893 status_manager.go:851] "Failed to get status for pod" podUID="2514419f-4c60-442d-bbc7-0c9b8c765cc4" pod="openshift-marketplace/certified-operators-8bgpm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8bgpm\": dial tcp 38.102.83.246:6443: connect: 
connection refused" Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.975364 4893 status_manager.go:851] "Failed to get status for pod" podUID="4d9545d1-113a-4985-ad77-2bd1bd45ec7d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.975822 4893 status_manager.go:851] "Failed to get status for pod" podUID="76395561-db8b-4fac-a5fd-14267030252a" pod="openshift-marketplace/redhat-operators-4s5jn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4s5jn\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.976744 4893 status_manager.go:851] "Failed to get status for pod" podUID="58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06" pod="openshift-marketplace/redhat-marketplace-dpt49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dpt49\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.977509 4893 status_manager.go:851] "Failed to get status for pod" podUID="2514419f-4c60-442d-bbc7-0c9b8c765cc4" pod="openshift-marketplace/certified-operators-8bgpm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8bgpm\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.977851 4893 status_manager.go:851] "Failed to get status for pod" podUID="ebd2435f-03d5-4495-aec1-4118d79aec19" pod="openshift-authentication/oauth-openshift-558db77b4-q7qn6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-q7qn6\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.978330 4893 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.978806 4893 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.979136 4893 status_manager.go:851] "Failed to get status for pod" podUID="78a7ed86-0417-446d-aeaa-b71f6beb71ec" pod="openshift-marketplace/redhat-marketplace-ztll7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ztll7\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.979728 4893 status_manager.go:851] "Failed to get status for pod" podUID="f92d48d9-4ed9-42bb-b811-a8f43bbac2cd" pod="openshift-marketplace/community-operators-kjxh2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kjxh2\": dial 
tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.980135 4893 status_manager.go:851] "Failed to get status for pod" podUID="be44c297-715e-45f6-b165-244c39484f15" pod="openshift-marketplace/community-operators-t86sj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-t86sj\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:19 crc kubenswrapper[4893]: I0121 06:59:19.980438 4893 status_manager.go:851] "Failed to get status for pod" podUID="15ac06c3-345b-4ced-8c19-2edf0c831b70" pod="openshift-marketplace/redhat-operators-gv7xc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gv7xc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 21 06:59:20 crc kubenswrapper[4893]: I0121 06:59:20.576406 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ztll7" Jan 21 06:59:20 crc kubenswrapper[4893]: I0121 06:59:20.625615 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ztll7" Jan 21 06:59:21 crc kubenswrapper[4893]: I0121 06:59:21.001699 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b71dfaa6cf7f9a8e818c0a07d40ec9043b5394338eecb26a17a1091c99a354f8"} Jan 21 06:59:21 crc kubenswrapper[4893]: I0121 06:59:21.001757 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9cf63e3d62664a250eb19b37f5d87fa1898df290ba43c0a0d3698a2cf6e2d8fa"} Jan 21 06:59:21 crc kubenswrapper[4893]: I0121 06:59:21.001768 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d98be987bc420b9784ac84fbd2f6c95c87c038a504c33278a9748372161b2275"} Jan 21 06:59:21 crc kubenswrapper[4893]: I0121 06:59:21.009003 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 21 06:59:21 crc kubenswrapper[4893]: I0121 06:59:21.009780 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"84a1056f1c7ced33d5a8d0bf42bf8551fe78c6bd96cd36235774ab607386e503"} Jan 21 06:59:21 crc kubenswrapper[4893]: I0121 06:59:21.014104 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dpt49" Jan 21 06:59:21 crc kubenswrapper[4893]: I0121 06:59:21.067453 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dpt49" Jan 21 06:59:22 crc kubenswrapper[4893]: I0121 06:59:22.019781 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1b0b8a1d8f56301115218086ec635aadff305baaaa6c5c8a776504633428ff10"} Jan 21 06:59:22 crc kubenswrapper[4893]: I0121 06:59:22.020485 4893 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f79266f17539bbd80c69018e30fb4c605b80380735d21e0974a9772e9e5e6c6a"} Jan 21 06:59:22 crc kubenswrapper[4893]: I0121 06:59:22.020190 4893 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2101f59b-4610-4451-83eb-86fe80385cf2" Jan 21 06:59:22 crc kubenswrapper[4893]: I0121 06:59:22.020654 4893 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2101f59b-4610-4451-83eb-86fe80385cf2" Jan 21 06:59:22 crc kubenswrapper[4893]: I0121 06:59:22.364642 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4s5jn" Jan 21 06:59:22 crc kubenswrapper[4893]: I0121 06:59:22.405632 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4s5jn" Jan 21 06:59:23 crc kubenswrapper[4893]: I0121 06:59:23.622808 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:59:23 crc kubenswrapper[4893]: I0121 06:59:23.622889 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:59:23 crc kubenswrapper[4893]: I0121 06:59:23.630397 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:59:23 crc kubenswrapper[4893]: I0121 06:59:23.741363 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 06:59:25 crc kubenswrapper[4893]: I0121 06:59:25.506998 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 06:59:25 crc kubenswrapper[4893]: I0121 06:59:25.510522 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 06:59:27 crc kubenswrapper[4893]: I0121 06:59:27.044598 4893 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:59:27 crc kubenswrapper[4893]: I0121 06:59:27.530083 4893 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="f1153df6-d678-47d6-a371-d1877e4ac820" Jan 21 06:59:28 crc kubenswrapper[4893]: I0121 06:59:28.149014 4893 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2101f59b-4610-4451-83eb-86fe80385cf2" Jan 21 06:59:28 crc kubenswrapper[4893]: I0121 06:59:28.149053 4893 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2101f59b-4610-4451-83eb-86fe80385cf2" Jan 21 06:59:28 crc kubenswrapper[4893]: I0121 06:59:28.149141 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:59:28 crc kubenswrapper[4893]: I0121 06:59:28.153470 4893 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="f1153df6-d678-47d6-a371-d1877e4ac820" Jan 21 
06:59:28 crc kubenswrapper[4893]: I0121 06:59:28.154514 4893 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://d98be987bc420b9784ac84fbd2f6c95c87c038a504c33278a9748372161b2275"
Jan 21 06:59:28 crc kubenswrapper[4893]: I0121 06:59:28.154536 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 06:59:29 crc kubenswrapper[4893]: I0121 06:59:29.153933 4893 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2101f59b-4610-4451-83eb-86fe80385cf2"
Jan 21 06:59:29 crc kubenswrapper[4893]: I0121 06:59:29.154257 4893 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2101f59b-4610-4451-83eb-86fe80385cf2"
Jan 21 06:59:29 crc kubenswrapper[4893]: I0121 06:59:29.157131 4893 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="f1153df6-d678-47d6-a371-d1877e4ac820"
Jan 21 06:59:33 crc kubenswrapper[4893]: I0121 06:59:33.746097 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 06:59:36 crc kubenswrapper[4893]: I0121 06:59:36.473064 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 21 06:59:36 crc kubenswrapper[4893]: I0121 06:59:36.702886 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 21 06:59:36 crc kubenswrapper[4893]: I0121 06:59:36.703588 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 21 06:59:36 crc kubenswrapper[4893]: I0121 06:59:36.934958 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 21 06:59:37 crc kubenswrapper[4893]: I0121 06:59:37.734199 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 21 06:59:37 crc kubenswrapper[4893]: I0121 06:59:37.786014 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 21 06:59:38 crc kubenswrapper[4893]: I0121 06:59:38.035096 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 21 06:59:38 crc kubenswrapper[4893]: I0121 06:59:38.076076 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 21 06:59:38 crc kubenswrapper[4893]: I0121 06:59:38.524361 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 21 06:59:38 crc kubenswrapper[4893]: I0121 06:59:38.656906 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 21 06:59:38 crc kubenswrapper[4893]: I0121 06:59:38.726579 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 21 06:59:38 crc kubenswrapper[4893]: I0121 06:59:38.925811 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 21 06:59:39 crc kubenswrapper[4893]: I0121 06:59:39.053033 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 21 06:59:39 crc kubenswrapper[4893]: I0121 06:59:39.320099 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 21 06:59:39 crc kubenswrapper[4893]: I0121 06:59:39.368876 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 21 06:59:39 crc kubenswrapper[4893]: I0121 06:59:39.400205 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 21 06:59:39 crc kubenswrapper[4893]: I0121 06:59:39.543325 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 21 06:59:39 crc kubenswrapper[4893]: I0121 06:59:39.612143 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 21 06:59:39 crc kubenswrapper[4893]: I0121 06:59:39.619128 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 21 06:59:39 crc kubenswrapper[4893]: I0121 06:59:39.731742 4893 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 21 06:59:39 crc kubenswrapper[4893]: I0121 06:59:39.762194 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 21 06:59:39 crc kubenswrapper[4893]: I0121 06:59:39.958593 4893 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 21 06:59:39 crc kubenswrapper[4893]: I0121 06:59:39.998458 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 21 06:59:40 crc kubenswrapper[4893]: I0121 06:59:40.057826 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 21 06:59:40 crc kubenswrapper[4893]: I0121 06:59:40.119971 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 21 06:59:40 crc kubenswrapper[4893]: I0121 06:59:40.215927 4893 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 21 06:59:40 crc kubenswrapper[4893]: I0121 06:59:40.282601 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 21 06:59:40 crc kubenswrapper[4893]: I0121 06:59:40.395497 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 21 06:59:40 crc kubenswrapper[4893]: I0121 06:59:40.487643 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 21 06:59:40 crc kubenswrapper[4893]: I0121 06:59:40.716277 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 21 06:59:40 crc kubenswrapper[4893]: I0121 06:59:40.950963 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Jan 21 06:59:40 crc kubenswrapper[4893]: I0121 06:59:40.951046 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 21 06:59:40 crc kubenswrapper[4893]: I0121 06:59:40.951059 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 21 06:59:40 crc kubenswrapper[4893]: I0121 06:59:40.951571 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 21 06:59:40 crc kubenswrapper[4893]: I0121 06:59:40.987805 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 21 06:59:41 crc kubenswrapper[4893]: I0121 06:59:41.091040 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 21 06:59:41 crc kubenswrapper[4893]: I0121 06:59:41.140198 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Jan 21 06:59:41 crc kubenswrapper[4893]: I0121 06:59:41.162550 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 21 06:59:41 crc kubenswrapper[4893]: I0121 06:59:41.207763 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 21 06:59:41 crc kubenswrapper[4893]: I0121 06:59:41.236784 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 21 06:59:41 crc kubenswrapper[4893]: I0121 06:59:41.248427 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 21 06:59:41 crc kubenswrapper[4893]: I0121 06:59:41.268456 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 21 06:59:41 crc kubenswrapper[4893]: I0121 06:59:41.338059 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 21 06:59:41 crc kubenswrapper[4893]: I0121 06:59:41.394516 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Jan 21 06:59:41 crc kubenswrapper[4893]: I0121 06:59:41.440899 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 21 06:59:41 crc kubenswrapper[4893]: I0121 06:59:41.927248 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 21 06:59:41 crc kubenswrapper[4893]: I0121 06:59:41.939256 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 21 06:59:41 crc kubenswrapper[4893]: I0121 06:59:41.939406 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 21 06:59:41 crc kubenswrapper[4893]: I0121 06:59:41.939495 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 21 06:59:41 crc kubenswrapper[4893]: I0121 06:59:41.939270 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 21 06:59:41 crc kubenswrapper[4893]: I0121 06:59:41.939543 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 21 06:59:41 crc kubenswrapper[4893]: I0121 06:59:41.944982 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 21 06:59:41 crc kubenswrapper[4893]: I0121 06:59:41.945191 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 21 06:59:41 crc kubenswrapper[4893]: I0121 06:59:41.953044 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 21 06:59:41 crc kubenswrapper[4893]: I0121 06:59:41.953196 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 21 06:59:41 crc kubenswrapper[4893]: I0121 06:59:41.953259 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 21 06:59:41 crc kubenswrapper[4893]: I0121 06:59:41.957070 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 21 06:59:41 crc kubenswrapper[4893]: I0121 06:59:41.957314 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 21 06:59:41 crc kubenswrapper[4893]: I0121 06:59:41.957513 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 21 06:59:41 crc kubenswrapper[4893]: I0121 06:59:41.957767 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 21 06:59:41 crc kubenswrapper[4893]: I0121 06:59:41.969810 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 21 06:59:41 crc kubenswrapper[4893]: I0121 06:59:41.997478 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 21 06:59:42 crc kubenswrapper[4893]: I0121 06:59:42.039420 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 21 06:59:42 crc kubenswrapper[4893]: I0121 06:59:42.119355 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 21 06:59:42 crc kubenswrapper[4893]: I0121 06:59:42.248052 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 21 06:59:42 crc kubenswrapper[4893]: I0121 06:59:42.263100 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Jan 21 06:59:42 crc kubenswrapper[4893]: I0121 06:59:42.278020 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 21 06:59:42 crc kubenswrapper[4893]: I0121 06:59:42.278418 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 21 06:59:42 crc kubenswrapper[4893]: I0121 06:59:42.448910 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 21 06:59:42 crc kubenswrapper[4893]: I0121 06:59:42.567787 4893 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 21 06:59:42 crc kubenswrapper[4893]: I0121 06:59:42.583312 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 21 06:59:42 crc kubenswrapper[4893]: I0121 06:59:42.589137 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Jan 21 06:59:42 crc kubenswrapper[4893]: I0121 06:59:42.616481 4893 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 21 06:59:42 crc kubenswrapper[4893]: I0121 06:59:42.663149 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 21 06:59:42 crc kubenswrapper[4893]: I0121 06:59:42.671817 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 21 06:59:42 crc kubenswrapper[4893]: I0121 06:59:42.696948 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 21 06:59:42 crc kubenswrapper[4893]: I0121 06:59:42.699864 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 21 06:59:42 crc kubenswrapper[4893]: I0121 06:59:42.760544 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 21 06:59:42 crc kubenswrapper[4893]: I0121 06:59:42.801057 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 21 06:59:42 crc kubenswrapper[4893]: I0121 06:59:42.812907 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 21 06:59:42 crc kubenswrapper[4893]: I0121 06:59:42.837849 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 21 06:59:42 crc kubenswrapper[4893]: I0121 06:59:42.904925 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 21 06:59:42 crc kubenswrapper[4893]: I0121 06:59:42.919524 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 21 06:59:43 crc kubenswrapper[4893]: I0121 06:59:43.154423 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 21 06:59:43 crc kubenswrapper[4893]: I0121 06:59:43.162747 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 21 06:59:43 crc kubenswrapper[4893]: I0121 06:59:43.442600 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 21 06:59:43 crc kubenswrapper[4893]: I0121 06:59:43.442916 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 21 06:59:43 crc kubenswrapper[4893]: I0121 06:59:43.443238 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 21 06:59:43 crc kubenswrapper[4893]: I0121 06:59:43.443525 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 21 06:59:43 crc kubenswrapper[4893]: I0121 06:59:43.443584 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 21 06:59:43 crc kubenswrapper[4893]: I0121 06:59:43.528250 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 21 06:59:43 crc kubenswrapper[4893]: I0121 06:59:43.557506 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 21 06:59:43 crc kubenswrapper[4893]: I0121 06:59:43.578656 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 21 06:59:43 crc kubenswrapper[4893]: I0121 06:59:43.665788 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 21 06:59:43 crc kubenswrapper[4893]: I0121 06:59:43.692650 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 21 06:59:43 crc kubenswrapper[4893]: I0121 06:59:43.704388 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 21 06:59:43 crc kubenswrapper[4893]: I0121 06:59:43.768927 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 21 06:59:43 crc kubenswrapper[4893]: I0121 06:59:43.772977 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 21 06:59:43 crc kubenswrapper[4893]: I0121 06:59:43.805895 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Jan 21 06:59:43 crc kubenswrapper[4893]: I0121 06:59:43.813812 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 21 06:59:43 crc kubenswrapper[4893]: I0121 06:59:43.831552 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 21 06:59:43 crc kubenswrapper[4893]: I0121 06:59:43.852183 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 21 06:59:43 crc kubenswrapper[4893]: I0121 06:59:43.855145 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 21 06:59:43 crc kubenswrapper[4893]: I0121 06:59:43.875484 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 21 06:59:43 crc kubenswrapper[4893]: I0121 06:59:43.893820 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 21 06:59:43 crc kubenswrapper[4893]: I0121 06:59:43.931980 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 21 06:59:43 crc kubenswrapper[4893]: I0121 06:59:43.937750 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 21 06:59:43 crc kubenswrapper[4893]: I0121 06:59:43.965773 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 21 06:59:44 crc kubenswrapper[4893]: I0121 06:59:44.001386 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 21 06:59:44 crc kubenswrapper[4893]: I0121 06:59:44.029163 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 21 06:59:44 crc kubenswrapper[4893]: I0121 06:59:44.067376 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 21 06:59:44 crc kubenswrapper[4893]: I0121 06:59:44.226896 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 21 06:59:44 crc kubenswrapper[4893]: I0121 06:59:44.291749 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 21 06:59:44 crc kubenswrapper[4893]: I0121 06:59:44.292376 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 21 06:59:44 crc kubenswrapper[4893]: I0121 06:59:44.367046 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 21 06:59:44 crc kubenswrapper[4893]: I0121 06:59:44.412574 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 21 06:59:44 crc kubenswrapper[4893]: I0121 06:59:44.444818 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 21 06:59:44 crc kubenswrapper[4893]: I0121 06:59:44.581557 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 21 06:59:44 crc kubenswrapper[4893]: I0121 06:59:44.597695 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 21 06:59:44 crc kubenswrapper[4893]: I0121 06:59:44.602279 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 21 06:59:44 crc kubenswrapper[4893]: I0121 06:59:44.674905 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 21 06:59:44 crc kubenswrapper[4893]: I0121 06:59:44.700258 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Jan 21 06:59:44 crc kubenswrapper[4893]: I0121 06:59:44.765891 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 21 06:59:45 crc kubenswrapper[4893]: I0121 06:59:45.038960 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 21 06:59:45 crc kubenswrapper[4893]: I0121 06:59:45.065753 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Jan 21 06:59:45 crc kubenswrapper[4893]: I0121 06:59:45.083370 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 21 06:59:45 crc kubenswrapper[4893]: I0121 06:59:45.125881 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 21 06:59:45 crc kubenswrapper[4893]: I0121 06:59:45.224014 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 21 06:59:45 crc kubenswrapper[4893]: I0121 06:59:45.265061 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 21 06:59:45 crc kubenswrapper[4893]: I0121 06:59:45.416327 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 21 06:59:45 crc kubenswrapper[4893]: I0121 06:59:45.481366 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Jan 21 06:59:45 crc kubenswrapper[4893]: I0121 06:59:45.524630 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 21 06:59:45 crc kubenswrapper[4893]: I0121 06:59:45.555471 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 21 06:59:45 crc kubenswrapper[4893]: I0121 06:59:45.567925 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 21 06:59:45 crc kubenswrapper[4893]: I0121 06:59:45.642619 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 21 06:59:45 crc kubenswrapper[4893]: I0121 06:59:45.769391 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 21 06:59:45 crc kubenswrapper[4893]: I0121 06:59:45.775710 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 21 06:59:45 crc kubenswrapper[4893]: I0121 06:59:45.866433 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 21 06:59:45 crc kubenswrapper[4893]: I0121 06:59:45.949068 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 21 06:59:46 crc kubenswrapper[4893]: I0121 06:59:46.044635 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 21 06:59:46 crc kubenswrapper[4893]: I0121 06:59:46.163557 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 21 06:59:46 crc kubenswrapper[4893]: I0121 06:59:46.194402 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 21 06:59:46 crc kubenswrapper[4893]: I0121 06:59:46.466998 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 21 06:59:46 crc kubenswrapper[4893]: I0121 06:59:46.519792 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 21 06:59:46 crc kubenswrapper[4893]: I0121 06:59:46.519865 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 21 06:59:46 crc kubenswrapper[4893]: I0121 06:59:46.609317 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Jan 21 06:59:46 crc kubenswrapper[4893]: I0121 06:59:46.626761 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 21 06:59:46 crc kubenswrapper[4893]: I0121 06:59:46.689506 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 21 06:59:47 crc kubenswrapper[4893]: I0121 06:59:47.135087 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 21 06:59:47 crc kubenswrapper[4893]: I0121 06:59:47.267619 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 21 06:59:47 crc kubenswrapper[4893]: I0121 06:59:47.326307 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Jan 21 06:59:47 crc kubenswrapper[4893]: I0121 06:59:47.377786 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 21 06:59:47 crc kubenswrapper[4893]: I0121 06:59:47.440738 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 21 06:59:47 crc kubenswrapper[4893]: I0121 06:59:47.455504 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 21 06:59:47 crc kubenswrapper[4893]: I0121 06:59:47.576607 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 21 06:59:47 crc kubenswrapper[4893]: I0121 06:59:47.635307 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 21 06:59:47 crc kubenswrapper[4893]: I0121 06:59:47.670102 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Jan 21 06:59:47 crc kubenswrapper[4893]: I0121 06:59:47.685315 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 21 06:59:47 crc kubenswrapper[4893]: I0121 06:59:47.808206 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 21 06:59:47 crc kubenswrapper[4893]: I0121 06:59:47.939017 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.008126 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.034407 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.035018 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.119341 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.196284 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.231929 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.290536 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.306275 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.481462 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.502919 4893 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.505257 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dpt49" podStartSLOduration=42.253603329 podStartE2EDuration="2m28.505207597s" podCreationTimestamp="2026-01-21 06:57:20 +0000 UTC" firstStartedPulling="2026-01-21 06:57:22.597899744 +0000 UTC m=+183.828245646" lastFinishedPulling="2026-01-21 06:59:08.849504012 +0000 UTC m=+290.079849914" observedRunningTime="2026-01-21 06:59:27.344040646 +0000 UTC m=+308.574386548" watchObservedRunningTime="2026-01-21 06:59:48.505207597 +0000 UTC m=+329.735553499"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.505905 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-t86sj" podStartSLOduration=41.746169879 podStartE2EDuration="2m30.505895307s" podCreationTimestamp="2026-01-21 06:57:18 +0000 UTC" firstStartedPulling="2026-01-21 06:57:20.376310521 +0000 UTC m=+181.606656423" lastFinishedPulling="2026-01-21 06:59:09.136035949 +0000 UTC m=+290.366381851" observedRunningTime="2026-01-21 06:59:27.591075241 +0000 UTC m=+308.821421143" watchObservedRunningTime="2026-01-21 06:59:48.505895307 +0000 UTC m=+329.736241209"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.506598 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kjxh2" podStartSLOduration=43.047025991 podStartE2EDuration="2m31.506594878s" podCreationTimestamp="2026-01-21 06:57:17 +0000 UTC" firstStartedPulling="2026-01-21 06:57:20.367812287 +0000 UTC m=+181.598158189" lastFinishedPulling="2026-01-21 06:59:08.827381174 +0000 UTC m=+290.057727076" observedRunningTime="2026-01-21 06:59:27.577527562 +0000 UTC m=+308.807873464" watchObservedRunningTime="2026-01-21 06:59:48.506594878 +0000 UTC m=+329.736940780"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.507333 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4s5jn" podStartSLOduration=43.147967796 podStartE2EDuration="2m27.50732871s" podCreationTimestamp="2026-01-21 06:57:21 +0000 UTC" firstStartedPulling="2026-01-21 06:57:24.800270227 +0000 UTC m=+186.030616129" lastFinishedPulling="2026-01-21 06:59:09.159631141 +0000 UTC m=+290.389977043" observedRunningTime="2026-01-21 06:59:27.305454331 +0000 UTC m=+308.535800233" watchObservedRunningTime="2026-01-21 06:59:48.50732871 +0000 UTC m=+329.737674612"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.508086 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ztll7" podStartSLOduration=40.820507631 podStartE2EDuration="2m28.508081662s" podCreationTimestamp="2026-01-21 06:57:20 +0000 UTC" firstStartedPulling="2026-01-21 06:57:21.395417137 +0000 UTC m=+182.625763039" lastFinishedPulling="2026-01-21 06:59:09.082991168 +0000 UTC m=+290.313337070" observedRunningTime="2026-01-21 06:59:27.563338394 +0000 UTC m=+308.793684286" watchObservedRunningTime="2026-01-21 06:59:48.508081662 +0000 UTC m=+329.738427564"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.509483 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-q7qn6","openshift-kube-apiserver/kube-apiserver-crc"]
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.509582 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.524648 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.537067 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=21.537048929 podStartE2EDuration="21.537048929s" podCreationTimestamp="2026-01-21 06:59:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:59:48.535966087 +0000 UTC m=+329.766311989" watchObservedRunningTime="2026-01-21 06:59:48.537048929 +0000 UTC m=+329.767394831"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.575466 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-dc8679f5f-dstsg"]
Jan 21 06:59:48 crc kubenswrapper[4893]: E0121 06:59:48.576888 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebd2435f-03d5-4495-aec1-4118d79aec19" containerName="oauth-openshift"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.576915 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebd2435f-03d5-4495-aec1-4118d79aec19" containerName="oauth-openshift"
Jan 21 06:59:48 crc kubenswrapper[4893]: E0121 06:59:48.576945 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d9545d1-113a-4985-ad77-2bd1bd45ec7d" containerName="installer"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.576988 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d9545d1-113a-4985-ad77-2bd1bd45ec7d" containerName="installer"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.577111 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d9545d1-113a-4985-ad77-2bd1bd45ec7d" containerName="installer"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.577127 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebd2435f-03d5-4495-aec1-4118d79aec19" containerName="oauth-openshift"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.577697 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-dc8679f5f-dstsg"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.587462 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.587742 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.588075 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.588251 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.588369 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.588446 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.590064 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.590252 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.591596 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.591748 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.591906 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.598289 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.603847 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.606136 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-dc8679f5f-dstsg"]
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.606914 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.613585 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.661714 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.706396 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8f59d2d8-6ec7-4f93-83b1-b0487a3f634b-v4-0-config-system-service-ca\") pod \"oauth-openshift-dc8679f5f-dstsg\" (UID: \"8f59d2d8-6ec7-4f93-83b1-b0487a3f634b\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-dstsg"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.706718 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8f59d2d8-6ec7-4f93-83b1-b0487a3f634b-v4-0-config-user-template-error\") pod \"oauth-openshift-dc8679f5f-dstsg\" (UID: \"8f59d2d8-6ec7-4f93-83b1-b0487a3f634b\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-dstsg"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.706877 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f59d2d8-6ec7-4f93-83b1-b0487a3f634b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-dc8679f5f-dstsg\" (UID: \"8f59d2d8-6ec7-4f93-83b1-b0487a3f634b\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-dstsg"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.707094 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8f59d2d8-6ec7-4f93-83b1-b0487a3f634b-audit-dir\") pod \"oauth-openshift-dc8679f5f-dstsg\" (UID: \"8f59d2d8-6ec7-4f93-83b1-b0487a3f634b\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-dstsg"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.707981 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8f59d2d8-6ec7-4f93-83b1-b0487a3f634b-v4-0-config-system-session\") pod \"oauth-openshift-dc8679f5f-dstsg\" (UID: \"8f59d2d8-6ec7-4f93-83b1-b0487a3f634b\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-dstsg"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.708141 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8f59d2d8-6ec7-4f93-83b1-b0487a3f634b-v4-0-config-system-router-certs\") pod \"oauth-openshift-dc8679f5f-dstsg\" (UID: \"8f59d2d8-6ec7-4f93-83b1-b0487a3f634b\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-dstsg"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.708293 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7dx6\" (UniqueName: \"kubernetes.io/projected/8f59d2d8-6ec7-4f93-83b1-b0487a3f634b-kube-api-access-n7dx6\") pod \"oauth-openshift-dc8679f5f-dstsg\" (UID: \"8f59d2d8-6ec7-4f93-83b1-b0487a3f634b\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-dstsg"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.708435 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8f59d2d8-6ec7-4f93-83b1-b0487a3f634b-audit-policies\") pod \"oauth-openshift-dc8679f5f-dstsg\" (UID: \"8f59d2d8-6ec7-4f93-83b1-b0487a3f634b\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-dstsg"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.708586 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8f59d2d8-6ec7-4f93-83b1-b0487a3f634b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-dc8679f5f-dstsg\" (UID: \"8f59d2d8-6ec7-4f93-83b1-b0487a3f634b\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-dstsg"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.708744 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8f59d2d8-6ec7-4f93-83b1-b0487a3f634b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-dc8679f5f-dstsg\" (UID: \"8f59d2d8-6ec7-4f93-83b1-b0487a3f634b\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-dstsg"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.708940 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8f59d2d8-6ec7-4f93-83b1-b0487a3f634b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-dc8679f5f-dstsg\" (UID: \"8f59d2d8-6ec7-4f93-83b1-b0487a3f634b\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-dstsg"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.709162 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8f59d2d8-6ec7-4f93-83b1-b0487a3f634b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-dc8679f5f-dstsg\" (UID: \"8f59d2d8-6ec7-4f93-83b1-b0487a3f634b\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-dstsg"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.709452 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8f59d2d8-6ec7-4f93-83b1-b0487a3f634b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-dc8679f5f-dstsg\" (UID: \"8f59d2d8-6ec7-4f93-83b1-b0487a3f634b\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-dstsg"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.709726 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8f59d2d8-6ec7-4f93-83b1-b0487a3f634b-v4-0-config-user-template-login\") pod \"oauth-openshift-dc8679f5f-dstsg\" (UID: \"8f59d2d8-6ec7-4f93-83b1-b0487a3f634b\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-dstsg"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.756668 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.760147 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.811652 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8f59d2d8-6ec7-4f93-83b1-b0487a3f634b-v4-0-config-user-template-error\") pod \"oauth-openshift-dc8679f5f-dstsg\" (UID: \"8f59d2d8-6ec7-4f93-83b1-b0487a3f634b\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-dstsg"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.812024 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8f59d2d8-6ec7-4f93-83b1-b0487a3f634b-v4-0-config-system-service-ca\") pod \"oauth-openshift-dc8679f5f-dstsg\" (UID: \"8f59d2d8-6ec7-4f93-83b1-b0487a3f634b\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-dstsg"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.812069 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f59d2d8-6ec7-4f93-83b1-b0487a3f634b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-dc8679f5f-dstsg\" (UID: \"8f59d2d8-6ec7-4f93-83b1-b0487a3f634b\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-dstsg"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.812105 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8f59d2d8-6ec7-4f93-83b1-b0487a3f634b-audit-dir\") pod \"oauth-openshift-dc8679f5f-dstsg\" (UID: \"8f59d2d8-6ec7-4f93-83b1-b0487a3f634b\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-dstsg"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.812138 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8f59d2d8-6ec7-4f93-83b1-b0487a3f634b-v4-0-config-system-session\") pod \"oauth-openshift-dc8679f5f-dstsg\" (UID: \"8f59d2d8-6ec7-4f93-83b1-b0487a3f634b\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-dstsg"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.812162 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8f59d2d8-6ec7-4f93-83b1-b0487a3f634b-v4-0-config-system-router-certs\") pod \"oauth-openshift-dc8679f5f-dstsg\" (UID: \"8f59d2d8-6ec7-4f93-83b1-b0487a3f634b\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-dstsg"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.812183 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7dx6\" (UniqueName: \"kubernetes.io/projected/8f59d2d8-6ec7-4f93-83b1-b0487a3f634b-kube-api-access-n7dx6\") pod \"oauth-openshift-dc8679f5f-dstsg\" (UID: \"8f59d2d8-6ec7-4f93-83b1-b0487a3f634b\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-dstsg"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.812209 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8f59d2d8-6ec7-4f93-83b1-b0487a3f634b-audit-policies\") pod \"oauth-openshift-dc8679f5f-dstsg\" (UID: \"8f59d2d8-6ec7-4f93-83b1-b0487a3f634b\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-dstsg"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.812233 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8f59d2d8-6ec7-4f93-83b1-b0487a3f634b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-dc8679f5f-dstsg\" (UID: \"8f59d2d8-6ec7-4f93-83b1-b0487a3f634b\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-dstsg"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.812292 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8f59d2d8-6ec7-4f93-83b1-b0487a3f634b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-dc8679f5f-dstsg\" (UID: \"8f59d2d8-6ec7-4f93-83b1-b0487a3f634b\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-dstsg"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.812319 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8f59d2d8-6ec7-4f93-83b1-b0487a3f634b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-dc8679f5f-dstsg\" (UID: \"8f59d2d8-6ec7-4f93-83b1-b0487a3f634b\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-dstsg"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.812342 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8f59d2d8-6ec7-4f93-83b1-b0487a3f634b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-dc8679f5f-dstsg\" (UID: \"8f59d2d8-6ec7-4f93-83b1-b0487a3f634b\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-dstsg"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.812366 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8f59d2d8-6ec7-4f93-83b1-b0487a3f634b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-dc8679f5f-dstsg\" (UID: \"8f59d2d8-6ec7-4f93-83b1-b0487a3f634b\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-dstsg"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.812464 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8f59d2d8-6ec7-4f93-83b1-b0487a3f634b-v4-0-config-user-template-login\") pod \"oauth-openshift-dc8679f5f-dstsg\" (UID: \"8f59d2d8-6ec7-4f93-83b1-b0487a3f634b\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-dstsg"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.812866 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8f59d2d8-6ec7-4f93-83b1-b0487a3f634b-audit-dir\") pod \"oauth-openshift-dc8679f5f-dstsg\" (UID: \"8f59d2d8-6ec7-4f93-83b1-b0487a3f634b\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-dstsg"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.814884 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8f59d2d8-6ec7-4f93-83b1-b0487a3f634b-v4-0-config-system-service-ca\") pod \"oauth-openshift-dc8679f5f-dstsg\" (UID: \"8f59d2d8-6ec7-4f93-83b1-b0487a3f634b\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-dstsg"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.815482 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8f59d2d8-6ec7-4f93-83b1-b0487a3f634b-audit-policies\") pod \"oauth-openshift-dc8679f5f-dstsg\" (UID: \"8f59d2d8-6ec7-4f93-83b1-b0487a3f634b\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-dstsg"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.816034 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8f59d2d8-6ec7-4f93-83b1-b0487a3f634b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-dc8679f5f-dstsg\" (UID: \"8f59d2d8-6ec7-4f93-83b1-b0487a3f634b\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-dstsg"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.816337 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f59d2d8-6ec7-4f93-83b1-b0487a3f634b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-dc8679f5f-dstsg\" (UID: \"8f59d2d8-6ec7-4f93-83b1-b0487a3f634b\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-dstsg"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.818991 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8f59d2d8-6ec7-4f93-83b1-b0487a3f634b-v4-0-config-user-template-login\") pod \"oauth-openshift-dc8679f5f-dstsg\" (UID: \"8f59d2d8-6ec7-4f93-83b1-b0487a3f634b\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-dstsg"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.819320 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8f59d2d8-6ec7-4f93-83b1-b0487a3f634b-v4-0-config-user-template-error\") pod \"oauth-openshift-dc8679f5f-dstsg\" (UID: \"8f59d2d8-6ec7-4f93-83b1-b0487a3f634b\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-dstsg"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.819605 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8f59d2d8-6ec7-4f93-83b1-b0487a3f634b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-dc8679f5f-dstsg\" (UID: \"8f59d2d8-6ec7-4f93-83b1-b0487a3f634b\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-dstsg"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.820028 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8f59d2d8-6ec7-4f93-83b1-b0487a3f634b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-dc8679f5f-dstsg\" (UID: \"8f59d2d8-6ec7-4f93-83b1-b0487a3f634b\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-dstsg"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.820657 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8f59d2d8-6ec7-4f93-83b1-b0487a3f634b-v4-0-config-system-session\") pod \"oauth-openshift-dc8679f5f-dstsg\" (UID: \"8f59d2d8-6ec7-4f93-83b1-b0487a3f634b\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-dstsg"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.822409 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8f59d2d8-6ec7-4f93-83b1-b0487a3f634b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-dc8679f5f-dstsg\" (UID: \"8f59d2d8-6ec7-4f93-83b1-b0487a3f634b\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-dstsg"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.824930 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8f59d2d8-6ec7-4f93-83b1-b0487a3f634b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-dc8679f5f-dstsg\" (UID: \"8f59d2d8-6ec7-4f93-83b1-b0487a3f634b\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-dstsg"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.825506 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8f59d2d8-6ec7-4f93-83b1-b0487a3f634b-v4-0-config-system-router-certs\") pod \"oauth-openshift-dc8679f5f-dstsg\" (UID: \"8f59d2d8-6ec7-4f93-83b1-b0487a3f634b\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-dstsg"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.831284 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7dx6\" (UniqueName: \"kubernetes.io/projected/8f59d2d8-6ec7-4f93-83b1-b0487a3f634b-kube-api-access-n7dx6\") pod \"oauth-openshift-dc8679f5f-dstsg\" (UID: \"8f59d2d8-6ec7-4f93-83b1-b0487a3f634b\") " pod="openshift-authentication/oauth-openshift-dc8679f5f-dstsg"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.835005 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.906071 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-dc8679f5f-dstsg"
Jan 21 06:59:48 crc kubenswrapper[4893]: I0121 06:59:48.994708 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 21 06:59:49 crc kubenswrapper[4893]: I0121 06:59:49.064938 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 21 06:59:49 crc kubenswrapper[4893]: I0121 06:59:49.093593 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 21 06:59:49 crc kubenswrapper[4893]: I0121 06:59:49.124181 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 21 06:59:49 crc kubenswrapper[4893]: I0121 06:59:49.124402 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 21 06:59:49 crc kubenswrapper[4893]: I0121 06:59:49.124581 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 21 06:59:49 crc kubenswrapper[4893]: I0121 06:59:49.126156 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Jan 21 06:59:49 crc kubenswrapper[4893]: I0121 06:59:49.259507 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 21 06:59:49 crc kubenswrapper[4893]: I0121 06:59:49.297362 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 21 06:59:49 crc kubenswrapper[4893]: I0121 06:59:49.322644 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 21 06:59:49 crc kubenswrapper[4893]: I0121 06:59:49.340238 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-dc8679f5f-dstsg"]
Jan 21 06:59:49 crc kubenswrapper[4893]: I0121 06:59:49.463512 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 21 06:59:49 crc kubenswrapper[4893]: I0121 06:59:49.472557 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Jan 21 06:59:49 crc kubenswrapper[4893]: I0121 06:59:49.477307 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-dc8679f5f-dstsg" event={"ID":"8f59d2d8-6ec7-4f93-83b1-b0487a3f634b","Type":"ContainerStarted","Data":"0554eb96445b8b90fcbe1fba79f5d787de8eccb559b277e788dccf71ea6d8509"}
Jan 21 06:59:49 crc kubenswrapper[4893]: I0121 06:59:49.491697 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 21 06:59:49 crc kubenswrapper[4893]: I0121 06:59:49.510954 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 21 06:59:49 crc kubenswrapper[4893]: I0121 06:59:49.590368 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebd2435f-03d5-4495-aec1-4118d79aec19" path="/var/lib/kubelet/pods/ebd2435f-03d5-4495-aec1-4118d79aec19/volumes"
Jan 21 06:59:49 crc kubenswrapper[4893]: I0121 06:59:49.623142 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 21 06:59:49 crc kubenswrapper[4893]: I0121 06:59:49.626872 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 21 06:59:49 crc kubenswrapper[4893]: I0121 06:59:49.775869 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Jan 21 06:59:49 crc kubenswrapper[4893]: I0121 06:59:49.819503 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 21 06:59:49 crc kubenswrapper[4893]: I0121 06:59:49.823538 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 21 06:59:49 crc kubenswrapper[4893]: I0121 06:59:49.878439 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 21 06:59:49 crc kubenswrapper[4893]: I0121 06:59:49.901264 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 21 06:59:49 crc kubenswrapper[4893]: I0121 06:59:49.942758 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 21 06:59:49 crc kubenswrapper[4893]: I0121 06:59:49.966394 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 21 06:59:50 crc kubenswrapper[4893]: I0121 06:59:50.057404 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 21 06:59:50 crc kubenswrapper[4893]: I0121 06:59:50.110033 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 21 06:59:50 crc kubenswrapper[4893]: I0121 06:59:50.166698 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 21 06:59:50 crc kubenswrapper[4893]: I0121 06:59:50.255820 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 21 06:59:50 crc kubenswrapper[4893]: I0121 06:59:50.278542 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 21 06:59:50 crc kubenswrapper[4893]: I0121 06:59:50.310427 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 21 06:59:50 crc kubenswrapper[4893]: I0121 06:59:50.331372 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 21 06:59:50 crc kubenswrapper[4893]: I0121 06:59:50.490424 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-dc8679f5f-dstsg" event={"ID":"8f59d2d8-6ec7-4f93-83b1-b0487a3f634b","Type":"ContainerStarted","Data":"4268037769c2d17d8f6c0826692fe39254448851e6b9a24d87ac8da4e18831f7"}
Jan 21 06:59:50 crc kubenswrapper[4893]: I0121 06:59:50.490723 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-dc8679f5f-dstsg"
Jan 21 06:59:50 crc kubenswrapper[4893]: I0121 06:59:50.497573 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-dc8679f5f-dstsg"
Jan 21 06:59:50 crc kubenswrapper[4893]: I0121 06:59:50.520305 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-dc8679f5f-dstsg" podStartSLOduration=64.520288141 podStartE2EDuration="1m4.520288141s" podCreationTimestamp="2026-01-21 06:58:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:59:50.516926532 +0000 UTC m=+331.747272454" watchObservedRunningTime="2026-01-21 06:59:50.520288141 +0000 UTC m=+331.750634043"
Jan 21 06:59:50 crc kubenswrapper[4893]: I0121 06:59:50.525726 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 21 06:59:50 crc kubenswrapper[4893]: I0121 06:59:50.659248 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 21 06:59:50 crc kubenswrapper[4893]: I0121 06:59:50.687038 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 21 06:59:50 crc kubenswrapper[4893]: I0121 06:59:50.780599 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 21 06:59:50 crc kubenswrapper[4893]: I0121 06:59:50.959086 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 21 06:59:51 crc kubenswrapper[4893]: I0121 06:59:51.005486 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 21 06:59:51 crc kubenswrapper[4893]: I0121 06:59:51.024579 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 21 06:59:51 crc kubenswrapper[4893]: I0121 06:59:51.025472 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 21 06:59:51 crc kubenswrapper[4893]: I0121 06:59:51.060161 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 21 06:59:51 crc kubenswrapper[4893]: I0121 06:59:51.140153 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 21 06:59:51 crc kubenswrapper[4893]: I0121 06:59:51.291616 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 21 06:59:51 crc kubenswrapper[4893]: I0121 06:59:51.329977 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 21 06:59:51 crc kubenswrapper[4893]: I0121 06:59:51.385408 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 21 06:59:51 crc kubenswrapper[4893]: I0121 06:59:51.459060 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 21 06:59:51 crc kubenswrapper[4893]: I0121 06:59:51.699808 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 21 06:59:51 crc kubenswrapper[4893]: I0121 06:59:51.784724 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 21 06:59:51 crc kubenswrapper[4893]: I0121 06:59:51.939876 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 21 06:59:52 crc kubenswrapper[4893]: I0121 06:59:52.088315 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 21 06:59:52 crc kubenswrapper[4893]: I0121 06:59:52.122456 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 21 06:59:52 crc kubenswrapper[4893]: I0121 06:59:52.383107 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 21 06:59:52 crc kubenswrapper[4893]: I0121 06:59:52.528875 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 21 06:59:52 crc kubenswrapper[4893]: I0121 06:59:52.554599 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 21 06:59:52 crc kubenswrapper[4893]: I0121 06:59:52.642287 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Jan 21 06:59:52 crc kubenswrapper[4893]: I0121 06:59:52.808325 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 21 06:59:53 crc kubenswrapper[4893]: I0121 06:59:53.063664 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 21 06:59:53 crc kubenswrapper[4893]: I0121 06:59:53.309663 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 21 06:59:53 crc kubenswrapper[4893]: I0121 06:59:53.901507 4893 reflector.go:368] Caches populated for *v1.ConfigMap from
object-"openshift-apiserver"/"audit-1" Jan 21 06:59:54 crc kubenswrapper[4893]: I0121 06:59:54.181660 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 21 06:59:54 crc kubenswrapper[4893]: I0121 06:59:54.406071 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 21 06:59:59 crc kubenswrapper[4893]: I0121 06:59:59.824273 4893 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 06:59:59 crc kubenswrapper[4893]: I0121 06:59:59.825028 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://0447c30816d5b944b293a2866b3e88c5f24d43f358d35ad369c11a3b01ace18d" gracePeriod=5 Jan 21 07:00:00 crc kubenswrapper[4893]: I0121 07:00:00.178998 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482980-ftdvx"] Jan 21 07:00:00 crc kubenswrapper[4893]: E0121 07:00:00.179296 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 21 07:00:00 crc kubenswrapper[4893]: I0121 07:00:00.179310 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 21 07:00:00 crc kubenswrapper[4893]: I0121 07:00:00.179409 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 21 07:00:00 crc kubenswrapper[4893]: I0121 07:00:00.180066 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482980-ftdvx" Jan 21 07:00:00 crc kubenswrapper[4893]: I0121 07:00:00.183112 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 07:00:00 crc kubenswrapper[4893]: I0121 07:00:00.184661 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 07:00:00 crc kubenswrapper[4893]: I0121 07:00:00.294972 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfghp\" (UniqueName: \"kubernetes.io/projected/97b4e122-bd3c-47f5-b6bc-a00a090a1c3a-kube-api-access-cfghp\") pod \"collect-profiles-29482980-ftdvx\" (UID: \"97b4e122-bd3c-47f5-b6bc-a00a090a1c3a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482980-ftdvx" Jan 21 07:00:00 crc kubenswrapper[4893]: I0121 07:00:00.295043 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/97b4e122-bd3c-47f5-b6bc-a00a090a1c3a-secret-volume\") pod \"collect-profiles-29482980-ftdvx\" (UID: \"97b4e122-bd3c-47f5-b6bc-a00a090a1c3a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482980-ftdvx" Jan 21 07:00:00 crc kubenswrapper[4893]: I0121 07:00:00.295090 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/97b4e122-bd3c-47f5-b6bc-a00a090a1c3a-config-volume\") pod \"collect-profiles-29482980-ftdvx\" (UID: \"97b4e122-bd3c-47f5-b6bc-a00a090a1c3a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482980-ftdvx" Jan 21 07:00:00 crc kubenswrapper[4893]: I0121 07:00:00.319620 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482980-ftdvx"] Jan 21 07:00:00 crc kubenswrapper[4893]: I0121 07:00:00.396454 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfghp\" (UniqueName: \"kubernetes.io/projected/97b4e122-bd3c-47f5-b6bc-a00a090a1c3a-kube-api-access-cfghp\") pod \"collect-profiles-29482980-ftdvx\" (UID: \"97b4e122-bd3c-47f5-b6bc-a00a090a1c3a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482980-ftdvx" Jan 21 07:00:00 crc kubenswrapper[4893]: I0121 07:00:00.396510 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/97b4e122-bd3c-47f5-b6bc-a00a090a1c3a-secret-volume\") pod \"collect-profiles-29482980-ftdvx\" (UID: \"97b4e122-bd3c-47f5-b6bc-a00a090a1c3a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482980-ftdvx" Jan 21 07:00:00 crc kubenswrapper[4893]: I0121 07:00:00.396573 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/97b4e122-bd3c-47f5-b6bc-a00a090a1c3a-config-volume\") pod \"collect-profiles-29482980-ftdvx\" (UID: \"97b4e122-bd3c-47f5-b6bc-a00a090a1c3a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482980-ftdvx" Jan 21 07:00:00 crc kubenswrapper[4893]: I0121 07:00:00.397611 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/97b4e122-bd3c-47f5-b6bc-a00a090a1c3a-config-volume\") pod 
\"collect-profiles-29482980-ftdvx\" (UID: \"97b4e122-bd3c-47f5-b6bc-a00a090a1c3a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482980-ftdvx" Jan 21 07:00:00 crc kubenswrapper[4893]: I0121 07:00:00.414896 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfghp\" (UniqueName: \"kubernetes.io/projected/97b4e122-bd3c-47f5-b6bc-a00a090a1c3a-kube-api-access-cfghp\") pod \"collect-profiles-29482980-ftdvx\" (UID: \"97b4e122-bd3c-47f5-b6bc-a00a090a1c3a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482980-ftdvx" Jan 21 07:00:00 crc kubenswrapper[4893]: I0121 07:00:00.416543 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/97b4e122-bd3c-47f5-b6bc-a00a090a1c3a-secret-volume\") pod \"collect-profiles-29482980-ftdvx\" (UID: \"97b4e122-bd3c-47f5-b6bc-a00a090a1c3a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482980-ftdvx" Jan 21 07:00:00 crc kubenswrapper[4893]: I0121 07:00:00.613980 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482980-ftdvx" Jan 21 07:00:00 crc kubenswrapper[4893]: I0121 07:00:00.859638 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482980-ftdvx"] Jan 21 07:00:00 crc kubenswrapper[4893]: I0121 07:00:00.958691 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482980-ftdvx" event={"ID":"97b4e122-bd3c-47f5-b6bc-a00a090a1c3a","Type":"ContainerStarted","Data":"237063882e588db8895e569f8d5db5190ca7cb6081bef15c4730030f49e8e785"} Jan 21 07:00:01 crc kubenswrapper[4893]: I0121 07:00:01.966629 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482980-ftdvx" event={"ID":"97b4e122-bd3c-47f5-b6bc-a00a090a1c3a","Type":"ContainerStarted","Data":"d916e4aae8348c93a9dfbeb353f2bdd036925fecaed4cc3991cf098b05d2dd3b"} Jan 21 07:00:01 crc kubenswrapper[4893]: I0121 07:00:01.986543 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29482980-ftdvx" podStartSLOduration=1.986505196 podStartE2EDuration="1.986505196s" podCreationTimestamp="2026-01-21 07:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:00:01.984791095 +0000 UTC m=+343.215136997" watchObservedRunningTime="2026-01-21 07:00:01.986505196 +0000 UTC m=+343.216851098" Jan 21 07:00:02 crc kubenswrapper[4893]: I0121 07:00:02.154209 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 21 07:00:02 crc kubenswrapper[4893]: I0121 07:00:02.973246 4893 generic.go:334] "Generic (PLEG): container finished" podID="97b4e122-bd3c-47f5-b6bc-a00a090a1c3a" containerID="d916e4aae8348c93a9dfbeb353f2bdd036925fecaed4cc3991cf098b05d2dd3b" exitCode=0 Jan 21 07:00:02 crc kubenswrapper[4893]: I0121 07:00:02.973315 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482980-ftdvx" event={"ID":"97b4e122-bd3c-47f5-b6bc-a00a090a1c3a","Type":"ContainerDied","Data":"d916e4aae8348c93a9dfbeb353f2bdd036925fecaed4cc3991cf098b05d2dd3b"} Jan 21 07:00:04 crc kubenswrapper[4893]: I0121 07:00:04.985301 4893 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482980-ftdvx" event={"ID":"97b4e122-bd3c-47f5-b6bc-a00a090a1c3a","Type":"ContainerDied","Data":"237063882e588db8895e569f8d5db5190ca7cb6081bef15c4730030f49e8e785"} Jan 21 07:00:04 crc kubenswrapper[4893]: I0121 07:00:04.985749 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="237063882e588db8895e569f8d5db5190ca7cb6081bef15c4730030f49e8e785" Jan 21 07:00:04 crc kubenswrapper[4893]: I0121 07:00:04.989014 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482980-ftdvx" Jan 21 07:00:05 crc kubenswrapper[4893]: I0121 07:00:05.092855 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfghp\" (UniqueName: \"kubernetes.io/projected/97b4e122-bd3c-47f5-b6bc-a00a090a1c3a-kube-api-access-cfghp\") pod \"97b4e122-bd3c-47f5-b6bc-a00a090a1c3a\" (UID: \"97b4e122-bd3c-47f5-b6bc-a00a090a1c3a\") " Jan 21 07:00:05 crc kubenswrapper[4893]: I0121 07:00:05.092948 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/97b4e122-bd3c-47f5-b6bc-a00a090a1c3a-secret-volume\") pod \"97b4e122-bd3c-47f5-b6bc-a00a090a1c3a\" (UID: \"97b4e122-bd3c-47f5-b6bc-a00a090a1c3a\") " Jan 21 07:00:05 crc kubenswrapper[4893]: I0121 07:00:05.093036 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/97b4e122-bd3c-47f5-b6bc-a00a090a1c3a-config-volume\") pod \"97b4e122-bd3c-47f5-b6bc-a00a090a1c3a\" (UID: \"97b4e122-bd3c-47f5-b6bc-a00a090a1c3a\") " Jan 21 07:00:05 crc kubenswrapper[4893]: I0121 07:00:05.093579 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97b4e122-bd3c-47f5-b6bc-a00a090a1c3a-config-volume" (OuterVolumeSpecName: "config-volume") pod "97b4e122-bd3c-47f5-b6bc-a00a090a1c3a" (UID: "97b4e122-bd3c-47f5-b6bc-a00a090a1c3a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:00:05 crc kubenswrapper[4893]: I0121 07:00:05.098383 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97b4e122-bd3c-47f5-b6bc-a00a090a1c3a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "97b4e122-bd3c-47f5-b6bc-a00a090a1c3a" (UID: "97b4e122-bd3c-47f5-b6bc-a00a090a1c3a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:00:05 crc kubenswrapper[4893]: I0121 07:00:05.098514 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97b4e122-bd3c-47f5-b6bc-a00a090a1c3a-kube-api-access-cfghp" (OuterVolumeSpecName: "kube-api-access-cfghp") pod "97b4e122-bd3c-47f5-b6bc-a00a090a1c3a" (UID: "97b4e122-bd3c-47f5-b6bc-a00a090a1c3a"). InnerVolumeSpecName "kube-api-access-cfghp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:00:05 crc kubenswrapper[4893]: I0121 07:00:05.221176 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfghp\" (UniqueName: \"kubernetes.io/projected/97b4e122-bd3c-47f5-b6bc-a00a090a1c3a-kube-api-access-cfghp\") on node \"crc\" DevicePath \"\"" Jan 21 07:00:05 crc kubenswrapper[4893]: I0121 07:00:05.221229 4893 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/97b4e122-bd3c-47f5-b6bc-a00a090a1c3a-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 07:00:05 crc kubenswrapper[4893]: I0121 07:00:05.221248 4893 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/97b4e122-bd3c-47f5-b6bc-a00a090a1c3a-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 07:00:05 crc kubenswrapper[4893]: I0121 07:00:05.464004 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 21 07:00:05 crc kubenswrapper[4893]: I0121 07:00:05.464100 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 07:00:05 crc kubenswrapper[4893]: I0121 07:00:05.627779 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 07:00:05 crc kubenswrapper[4893]: I0121 07:00:05.628298 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 07:00:05 crc kubenswrapper[4893]: I0121 07:00:05.627904 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 07:00:05 crc kubenswrapper[4893]: I0121 07:00:05.628390 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 07:00:05 crc kubenswrapper[4893]: I0121 07:00:05.628416 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 07:00:05 crc kubenswrapper[4893]: I0121 07:00:05.628424 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 07:00:05 crc kubenswrapper[4893]: I0121 07:00:05.628477 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 07:00:05 crc kubenswrapper[4893]: I0121 07:00:05.628494 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 07:00:05 crc kubenswrapper[4893]: I0121 07:00:05.628633 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 07:00:05 crc kubenswrapper[4893]: I0121 07:00:05.628724 4893 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 21 07:00:05 crc kubenswrapper[4893]: I0121 07:00:05.628737 4893 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 21 07:00:05 crc kubenswrapper[4893]: I0121 07:00:05.628745 4893 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 21 07:00:05 crc kubenswrapper[4893]: I0121 07:00:05.633828 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 07:00:05 crc kubenswrapper[4893]: I0121 07:00:05.730197 4893 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 21 07:00:05 crc kubenswrapper[4893]: I0121 07:00:05.730244 4893 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 21 07:00:05 crc kubenswrapper[4893]: I0121 07:00:05.992710 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 21 07:00:05 crc kubenswrapper[4893]: I0121 07:00:05.992775 4893 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="0447c30816d5b944b293a2866b3e88c5f24d43f358d35ad369c11a3b01ace18d" exitCode=137 Jan 21 07:00:05 crc kubenswrapper[4893]: I0121 07:00:05.992845 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 07:00:05 crc kubenswrapper[4893]: I0121 07:00:05.992874 4893 scope.go:117] "RemoveContainer" containerID="0447c30816d5b944b293a2866b3e88c5f24d43f358d35ad369c11a3b01ace18d" Jan 21 07:00:05 crc kubenswrapper[4893]: I0121 07:00:05.992857 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482980-ftdvx" Jan 21 07:00:06 crc kubenswrapper[4893]: I0121 07:00:06.023390 4893 scope.go:117] "RemoveContainer" containerID="0447c30816d5b944b293a2866b3e88c5f24d43f358d35ad369c11a3b01ace18d" Jan 21 07:00:06 crc kubenswrapper[4893]: E0121 07:00:06.023909 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0447c30816d5b944b293a2866b3e88c5f24d43f358d35ad369c11a3b01ace18d\": container with ID starting with 0447c30816d5b944b293a2866b3e88c5f24d43f358d35ad369c11a3b01ace18d not found: ID does not exist" containerID="0447c30816d5b944b293a2866b3e88c5f24d43f358d35ad369c11a3b01ace18d" Jan 21 07:00:06 crc kubenswrapper[4893]: I0121 07:00:06.023970 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0447c30816d5b944b293a2866b3e88c5f24d43f358d35ad369c11a3b01ace18d"} err="failed to get container status \"0447c30816d5b944b293a2866b3e88c5f24d43f358d35ad369c11a3b01ace18d\": rpc error: code = NotFound desc = could not find container \"0447c30816d5b944b293a2866b3e88c5f24d43f358d35ad369c11a3b01ace18d\": container with ID starting with 0447c30816d5b944b293a2866b3e88c5f24d43f358d35ad369c11a3b01ace18d not found: ID does not exist" Jan 21 07:00:07 crc kubenswrapper[4893]: I0121 07:00:07.589076 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 21 07:00:12 crc kubenswrapper[4893]: I0121 07:00:12.557716 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 21 07:00:13 crc kubenswrapper[4893]: I0121 07:00:13.639570 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 21 07:00:18 crc kubenswrapper[4893]: I0121 07:00:18.063157 4893 generic.go:334] "Generic (PLEG): container finished" podID="949c0965-b10c-4608-b2d0-effa8e19dff1" containerID="d74fb0142bc8d85c3b02f0be90e39d72f74253abb9817a92886abb026a719385" exitCode=0 Jan 21 07:00:18 crc kubenswrapper[4893]: I0121 07:00:18.063211 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zpm9z" event={"ID":"949c0965-b10c-4608-b2d0-effa8e19dff1","Type":"ContainerDied","Data":"d74fb0142bc8d85c3b02f0be90e39d72f74253abb9817a92886abb026a719385"} Jan 21 07:00:18 crc kubenswrapper[4893]: I0121 07:00:18.065631 4893 scope.go:117] "RemoveContainer" containerID="d74fb0142bc8d85c3b02f0be90e39d72f74253abb9817a92886abb026a719385" Jan 21 07:00:19 crc kubenswrapper[4893]: I0121 07:00:19.073862 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zpm9z" event={"ID":"949c0965-b10c-4608-b2d0-effa8e19dff1","Type":"ContainerStarted","Data":"ab18e86fd24b9a6c1b24ee0a9ad6ffbfb069977d1a179ecb912ea935038dee56"} Jan 21 07:00:19 crc kubenswrapper[4893]: I0121 07:00:19.075017 4893 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-zpm9z" Jan 21 07:00:19 crc kubenswrapper[4893]: I0121 07:00:19.079869 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-zpm9z" Jan 21 07:00:22 crc kubenswrapper[4893]: I0121 07:00:22.627076 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 21 07:00:28 crc kubenswrapper[4893]: I0121 07:00:28.670269 4893 patch_prober.go:28] interesting pod/machine-config-daemon-hg78p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 07:00:28 crc kubenswrapper[4893]: I0121 07:00:28.670630 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 07:00:54 crc kubenswrapper[4893]: I0121 07:00:54.825242 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-9b7bb658d-258gb"] Jan 21 07:00:54 crc kubenswrapper[4893]: I0121 07:00:54.826145 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-9b7bb658d-258gb" podUID="234ee8a0-1a20-46d3-bec7-b0830c8e23ed" containerName="controller-manager" containerID="cri-o://fe09e6df3fd927a6e5073911c8cb3c1b58d77d4749f46b773b2685818ec5df72" gracePeriod=30 Jan 21 07:00:55 crc kubenswrapper[4893]: I0121 07:00:55.051278 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86cbf9db7-dsxh8"] Jan 21 07:00:55 crc kubenswrapper[4893]: I0121 07:00:55.051578 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-86cbf9db7-dsxh8" podUID="aabf174c-2750-48af-8e68-1fa2f9f63965" containerName="route-controller-manager" containerID="cri-o://11496f4aaf50214ac72ed944d36076b7e945de8304fcb51d87f3e1210f561139" gracePeriod=30 Jan 21 07:00:55 crc kubenswrapper[4893]: I0121 07:00:55.302718 4893 generic.go:334] "Generic (PLEG): container finished" podID="aabf174c-2750-48af-8e68-1fa2f9f63965" containerID="11496f4aaf50214ac72ed944d36076b7e945de8304fcb51d87f3e1210f561139" exitCode=0 Jan 21 07:00:55 crc kubenswrapper[4893]: I0121 07:00:55.303017 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86cbf9db7-dsxh8" event={"ID":"aabf174c-2750-48af-8e68-1fa2f9f63965","Type":"ContainerDied","Data":"11496f4aaf50214ac72ed944d36076b7e945de8304fcb51d87f3e1210f561139"} Jan 21 07:00:55 crc kubenswrapper[4893]: I0121 07:00:55.307226 4893 generic.go:334] "Generic (PLEG): container finished" podID="234ee8a0-1a20-46d3-bec7-b0830c8e23ed" containerID="fe09e6df3fd927a6e5073911c8cb3c1b58d77d4749f46b773b2685818ec5df72" exitCode=0 Jan 21 07:00:55 crc kubenswrapper[4893]: I0121 07:00:55.307254 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9b7bb658d-258gb" 
event={"ID":"234ee8a0-1a20-46d3-bec7-b0830c8e23ed","Type":"ContainerDied","Data":"fe09e6df3fd927a6e5073911c8cb3c1b58d77d4749f46b773b2685818ec5df72"} Jan 21 07:00:55 crc kubenswrapper[4893]: I0121 07:00:55.444213 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86cbf9db7-dsxh8" Jan 21 07:00:55 crc kubenswrapper[4893]: I0121 07:00:55.533043 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aabf174c-2750-48af-8e68-1fa2f9f63965-client-ca\") pod \"aabf174c-2750-48af-8e68-1fa2f9f63965\" (UID: \"aabf174c-2750-48af-8e68-1fa2f9f63965\") " Jan 21 07:00:55 crc kubenswrapper[4893]: I0121 07:00:55.533903 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aabf174c-2750-48af-8e68-1fa2f9f63965-client-ca" (OuterVolumeSpecName: "client-ca") pod "aabf174c-2750-48af-8e68-1fa2f9f63965" (UID: "aabf174c-2750-48af-8e68-1fa2f9f63965"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:00:55 crc kubenswrapper[4893]: I0121 07:00:55.533934 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aabf174c-2750-48af-8e68-1fa2f9f63965-serving-cert\") pod \"aabf174c-2750-48af-8e68-1fa2f9f63965\" (UID: \"aabf174c-2750-48af-8e68-1fa2f9f63965\") " Jan 21 07:00:55 crc kubenswrapper[4893]: I0121 07:00:55.533995 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk4z6\" (UniqueName: \"kubernetes.io/projected/aabf174c-2750-48af-8e68-1fa2f9f63965-kube-api-access-tk4z6\") pod \"aabf174c-2750-48af-8e68-1fa2f9f63965\" (UID: \"aabf174c-2750-48af-8e68-1fa2f9f63965\") " Jan 21 07:00:55 crc kubenswrapper[4893]: I0121 07:00:55.534072 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aabf174c-2750-48af-8e68-1fa2f9f63965-config\") pod \"aabf174c-2750-48af-8e68-1fa2f9f63965\" (UID: \"aabf174c-2750-48af-8e68-1fa2f9f63965\") " Jan 21 07:00:55 crc kubenswrapper[4893]: I0121 07:00:55.534508 4893 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aabf174c-2750-48af-8e68-1fa2f9f63965-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 07:00:55 crc kubenswrapper[4893]: I0121 07:00:55.534940 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aabf174c-2750-48af-8e68-1fa2f9f63965-config" (OuterVolumeSpecName: "config") pod "aabf174c-2750-48af-8e68-1fa2f9f63965" (UID: "aabf174c-2750-48af-8e68-1fa2f9f63965"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:00:55 crc kubenswrapper[4893]: I0121 07:00:55.539514 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aabf174c-2750-48af-8e68-1fa2f9f63965-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "aabf174c-2750-48af-8e68-1fa2f9f63965" (UID: "aabf174c-2750-48af-8e68-1fa2f9f63965"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:00:55 crc kubenswrapper[4893]: I0121 07:00:55.542999 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aabf174c-2750-48af-8e68-1fa2f9f63965-kube-api-access-tk4z6" (OuterVolumeSpecName: "kube-api-access-tk4z6") pod "aabf174c-2750-48af-8e68-1fa2f9f63965" (UID: "aabf174c-2750-48af-8e68-1fa2f9f63965"). InnerVolumeSpecName "kube-api-access-tk4z6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:00:55 crc kubenswrapper[4893]: I0121 07:00:55.636273 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk4z6\" (UniqueName: \"kubernetes.io/projected/aabf174c-2750-48af-8e68-1fa2f9f63965-kube-api-access-tk4z6\") on node \"crc\" DevicePath \"\"" Jan 21 07:00:55 crc kubenswrapper[4893]: I0121 07:00:55.636325 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aabf174c-2750-48af-8e68-1fa2f9f63965-config\") on node \"crc\" DevicePath \"\"" Jan 21 07:00:55 crc kubenswrapper[4893]: I0121 07:00:55.636340 4893 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aabf174c-2750-48af-8e68-1fa2f9f63965-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 07:00:55 crc kubenswrapper[4893]: I0121 07:00:55.736291 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-9b7bb658d-258gb" Jan 21 07:00:55 crc kubenswrapper[4893]: I0121 07:00:55.838719 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/234ee8a0-1a20-46d3-bec7-b0830c8e23ed-proxy-ca-bundles\") pod \"234ee8a0-1a20-46d3-bec7-b0830c8e23ed\" (UID: \"234ee8a0-1a20-46d3-bec7-b0830c8e23ed\") " Jan 21 07:00:55 crc kubenswrapper[4893]: I0121 07:00:55.838792 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/234ee8a0-1a20-46d3-bec7-b0830c8e23ed-config\") pod \"234ee8a0-1a20-46d3-bec7-b0830c8e23ed\" (UID: \"234ee8a0-1a20-46d3-bec7-b0830c8e23ed\") " Jan 21 07:00:55 crc kubenswrapper[4893]: I0121 07:00:55.838823 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9pln\" (UniqueName: \"kubernetes.io/projected/234ee8a0-1a20-46d3-bec7-b0830c8e23ed-kube-api-access-j9pln\") pod \"234ee8a0-1a20-46d3-bec7-b0830c8e23ed\" (UID: \"234ee8a0-1a20-46d3-bec7-b0830c8e23ed\") " Jan 21 07:00:55 crc kubenswrapper[4893]: I0121 07:00:55.838896 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/234ee8a0-1a20-46d3-bec7-b0830c8e23ed-serving-cert\") pod \"234ee8a0-1a20-46d3-bec7-b0830c8e23ed\" (UID: \"234ee8a0-1a20-46d3-bec7-b0830c8e23ed\") " Jan 21 07:00:55 crc kubenswrapper[4893]: I0121 07:00:55.838965 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/234ee8a0-1a20-46d3-bec7-b0830c8e23ed-client-ca\") pod \"234ee8a0-1a20-46d3-bec7-b0830c8e23ed\" (UID: \"234ee8a0-1a20-46d3-bec7-b0830c8e23ed\") " Jan 21 07:00:55 crc kubenswrapper[4893]: I0121 07:00:55.839923 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/234ee8a0-1a20-46d3-bec7-b0830c8e23ed-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod 
"234ee8a0-1a20-46d3-bec7-b0830c8e23ed" (UID: "234ee8a0-1a20-46d3-bec7-b0830c8e23ed"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:00:55 crc kubenswrapper[4893]: I0121 07:00:55.839906 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/234ee8a0-1a20-46d3-bec7-b0830c8e23ed-client-ca" (OuterVolumeSpecName: "client-ca") pod "234ee8a0-1a20-46d3-bec7-b0830c8e23ed" (UID: "234ee8a0-1a20-46d3-bec7-b0830c8e23ed"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:00:55 crc kubenswrapper[4893]: I0121 07:00:55.840027 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/234ee8a0-1a20-46d3-bec7-b0830c8e23ed-config" (OuterVolumeSpecName: "config") pod "234ee8a0-1a20-46d3-bec7-b0830c8e23ed" (UID: "234ee8a0-1a20-46d3-bec7-b0830c8e23ed"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:00:55 crc kubenswrapper[4893]: I0121 07:00:55.842905 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/234ee8a0-1a20-46d3-bec7-b0830c8e23ed-kube-api-access-j9pln" (OuterVolumeSpecName: "kube-api-access-j9pln") pod "234ee8a0-1a20-46d3-bec7-b0830c8e23ed" (UID: "234ee8a0-1a20-46d3-bec7-b0830c8e23ed"). InnerVolumeSpecName "kube-api-access-j9pln". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:00:55 crc kubenswrapper[4893]: I0121 07:00:55.844488 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/234ee8a0-1a20-46d3-bec7-b0830c8e23ed-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "234ee8a0-1a20-46d3-bec7-b0830c8e23ed" (UID: "234ee8a0-1a20-46d3-bec7-b0830c8e23ed"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:00:55 crc kubenswrapper[4893]: I0121 07:00:55.941035 4893 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/234ee8a0-1a20-46d3-bec7-b0830c8e23ed-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 07:00:55 crc kubenswrapper[4893]: I0121 07:00:55.941101 4893 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/234ee8a0-1a20-46d3-bec7-b0830c8e23ed-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 07:00:55 crc kubenswrapper[4893]: I0121 07:00:55.941125 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/234ee8a0-1a20-46d3-bec7-b0830c8e23ed-config\") on node \"crc\" DevicePath \"\"" Jan 21 07:00:55 crc kubenswrapper[4893]: I0121 07:00:55.941137 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9pln\" (UniqueName: \"kubernetes.io/projected/234ee8a0-1a20-46d3-bec7-b0830c8e23ed-kube-api-access-j9pln\") on node \"crc\" DevicePath \"\"" Jan 21 07:00:55 crc kubenswrapper[4893]: I0121 07:00:55.941149 4893 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/234ee8a0-1a20-46d3-bec7-b0830c8e23ed-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 07:00:56 crc kubenswrapper[4893]: I0121 07:00:56.315131 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86cbf9db7-dsxh8" event={"ID":"aabf174c-2750-48af-8e68-1fa2f9f63965","Type":"ContainerDied","Data":"44c329a4df4662874128b7cff27c5964aaf966842bf17cee55d8ec361a8b95e4"} Jan 21 07:00:56 crc kubenswrapper[4893]: I0121 07:00:56.315225 4893 scope.go:117] "RemoveContainer" containerID="11496f4aaf50214ac72ed944d36076b7e945de8304fcb51d87f3e1210f561139" Jan 21 07:00:56 crc kubenswrapper[4893]: I0121 07:00:56.315241 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86cbf9db7-dsxh8" Jan 21 07:00:56 crc kubenswrapper[4893]: I0121 07:00:56.317935 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9b7bb658d-258gb" event={"ID":"234ee8a0-1a20-46d3-bec7-b0830c8e23ed","Type":"ContainerDied","Data":"dc5cd12ac873b1d041037e9ab822f13a4fcf7885c9683bfdd003b757546cc7eb"} Jan 21 07:00:56 crc kubenswrapper[4893]: I0121 07:00:56.318126 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-9b7bb658d-258gb" Jan 21 07:00:56 crc kubenswrapper[4893]: I0121 07:00:56.360827 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86cbf9db7-dsxh8"] Jan 21 07:00:56 crc kubenswrapper[4893]: I0121 07:00:56.372018 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86cbf9db7-dsxh8"] Jan 21 07:00:56 crc kubenswrapper[4893]: I0121 07:00:56.377467 4893 scope.go:117] "RemoveContainer" containerID="fe09e6df3fd927a6e5073911c8cb3c1b58d77d4749f46b773b2685818ec5df72" Jan 21 07:00:56 crc kubenswrapper[4893]: I0121 07:00:56.390754 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-9b7bb658d-258gb"] Jan 21 07:00:56 crc kubenswrapper[4893]: I0121 07:00:56.399349 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-9b7bb658d-258gb"] Jan 21 07:00:57 crc kubenswrapper[4893]: I0121 07:00:57.159251 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6fc6b99b96-dg5hb"] Jan 21 07:00:57 crc kubenswrapper[4893]: E0121 07:00:57.161274 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="234ee8a0-1a20-46d3-bec7-b0830c8e23ed" containerName="controller-manager" Jan 21 07:00:57 crc kubenswrapper[4893]: I0121 07:00:57.161556 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="234ee8a0-1a20-46d3-bec7-b0830c8e23ed" containerName="controller-manager" Jan 21 07:00:57 crc kubenswrapper[4893]: E0121 07:00:57.161916 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97b4e122-bd3c-47f5-b6bc-a00a090a1c3a" containerName="collect-profiles" Jan 21 07:00:57 crc kubenswrapper[4893]: I0121 07:00:57.162194 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="97b4e122-bd3c-47f5-b6bc-a00a090a1c3a" containerName="collect-profiles" Jan 21 07:00:57 crc kubenswrapper[4893]: E0121 07:00:57.162385 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aabf174c-2750-48af-8e68-1fa2f9f63965" containerName="route-controller-manager" Jan 21 07:00:57 crc kubenswrapper[4893]: I0121 07:00:57.162558 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="aabf174c-2750-48af-8e68-1fa2f9f63965" containerName="route-controller-manager" Jan 21 07:00:57 crc kubenswrapper[4893]: I0121 07:00:57.163148 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="97b4e122-bd3c-47f5-b6bc-a00a090a1c3a" containerName="collect-profiles" Jan 21 07:00:57 crc kubenswrapper[4893]: I0121 07:00:57.163413 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="234ee8a0-1a20-46d3-bec7-b0830c8e23ed" containerName="controller-manager" Jan 21 07:00:57 crc kubenswrapper[4893]: I0121 07:00:57.163712 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="aabf174c-2750-48af-8e68-1fa2f9f63965" containerName="route-controller-manager" Jan 21 07:00:57 crc kubenswrapper[4893]: I0121 07:00:57.165263 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6fc6b99b96-dg5hb" Jan 21 07:00:57 crc kubenswrapper[4893]: I0121 07:00:57.166504 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dbb4dd9bc-btzt5"] Jan 21 07:00:57 crc kubenswrapper[4893]: I0121 07:00:57.167400 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dbb4dd9bc-btzt5" Jan 21 07:00:57 crc kubenswrapper[4893]: I0121 07:00:57.173142 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 07:00:57 crc kubenswrapper[4893]: I0121 07:00:57.173189 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 07:00:57 crc kubenswrapper[4893]: I0121 07:00:57.173145 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 21 07:00:57 crc kubenswrapper[4893]: I0121 07:00:57.173142 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 07:00:57 crc kubenswrapper[4893]: I0121 07:00:57.173964 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 07:00:57 crc kubenswrapper[4893]: I0121 07:00:57.174030 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 07:00:57 crc kubenswrapper[4893]: I0121 07:00:57.174516 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 07:00:57 crc kubenswrapper[4893]: I0121 07:00:57.174624 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 07:00:57 crc kubenswrapper[4893]: I0121 07:00:57.175166 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 07:00:57 crc kubenswrapper[4893]: I0121 07:00:57.175210 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 07:00:57 crc kubenswrapper[4893]: I0121 07:00:57.175214 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 07:00:57 crc kubenswrapper[4893]: I0121 07:00:57.175815 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 07:00:57 crc kubenswrapper[4893]: I0121 07:00:57.184924 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 07:00:57 crc kubenswrapper[4893]: I0121 07:00:57.186960 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6fc6b99b96-dg5hb"] Jan 21 07:00:57 crc kubenswrapper[4893]: I0121 07:00:57.194493 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dbb4dd9bc-btzt5"] Jan 21 07:00:57 crc kubenswrapper[4893]: I0121 07:00:57.360335 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b5557d7d-8cbe-4371-a58f-ebee3d46b285-config\") pod \"controller-manager-6fc6b99b96-dg5hb\" (UID: \"b5557d7d-8cbe-4371-a58f-ebee3d46b285\") " pod="openshift-controller-manager/controller-manager-6fc6b99b96-dg5hb" Jan 21 07:00:57 crc kubenswrapper[4893]: I0121 07:00:57.360423 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae416462-7fdb-40bc-8fda-91eb26e2d538-serving-cert\") pod \"route-controller-manager-dbb4dd9bc-btzt5\" (UID: \"ae416462-7fdb-40bc-8fda-91eb26e2d538\") " pod="openshift-route-controller-manager/route-controller-manager-dbb4dd9bc-btzt5" Jan 21 07:00:57 crc kubenswrapper[4893]: I0121 07:00:57.360452 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b5557d7d-8cbe-4371-a58f-ebee3d46b285-proxy-ca-bundles\") pod \"controller-manager-6fc6b99b96-dg5hb\" (UID: \"b5557d7d-8cbe-4371-a58f-ebee3d46b285\") " pod="openshift-controller-manager/controller-manager-6fc6b99b96-dg5hb" Jan 21 07:00:57 crc kubenswrapper[4893]: I0121 07:00:57.360484 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5557d7d-8cbe-4371-a58f-ebee3d46b285-serving-cert\") pod \"controller-manager-6fc6b99b96-dg5hb\" (UID: \"b5557d7d-8cbe-4371-a58f-ebee3d46b285\") " pod="openshift-controller-manager/controller-manager-6fc6b99b96-dg5hb" Jan 21 07:00:57 crc kubenswrapper[4893]: I0121 07:00:57.360581 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7zsd\" (UniqueName: \"kubernetes.io/projected/b5557d7d-8cbe-4371-a58f-ebee3d46b285-kube-api-access-v7zsd\") pod \"controller-manager-6fc6b99b96-dg5hb\" (UID: \"b5557d7d-8cbe-4371-a58f-ebee3d46b285\") " pod="openshift-controller-manager/controller-manager-6fc6b99b96-dg5hb" Jan 21 07:00:57 crc kubenswrapper[4893]: I0121 07:00:57.360628 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b5557d7d-8cbe-4371-a58f-ebee3d46b285-client-ca\") pod \"controller-manager-6fc6b99b96-dg5hb\" (UID: \"b5557d7d-8cbe-4371-a58f-ebee3d46b285\") " pod="openshift-controller-manager/controller-manager-6fc6b99b96-dg5hb" Jan 21 07:00:57 crc kubenswrapper[4893]: I0121 07:00:57.360661 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb2pg\" (UniqueName: \"kubernetes.io/projected/ae416462-7fdb-40bc-8fda-91eb26e2d538-kube-api-access-hb2pg\") pod \"route-controller-manager-dbb4dd9bc-btzt5\" (UID: \"ae416462-7fdb-40bc-8fda-91eb26e2d538\") " pod="openshift-route-controller-manager/route-controller-manager-dbb4dd9bc-btzt5" Jan 21 07:00:57 crc kubenswrapper[4893]: I0121 07:00:57.360724 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae416462-7fdb-40bc-8fda-91eb26e2d538-config\") pod \"route-controller-manager-dbb4dd9bc-btzt5\" (UID: \"ae416462-7fdb-40bc-8fda-91eb26e2d538\") " pod="openshift-route-controller-manager/route-controller-manager-dbb4dd9bc-btzt5" Jan 21 07:00:57 crc kubenswrapper[4893]: I0121 07:00:57.360773 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/ae416462-7fdb-40bc-8fda-91eb26e2d538-client-ca\") pod \"route-controller-manager-dbb4dd9bc-btzt5\" (UID: \"ae416462-7fdb-40bc-8fda-91eb26e2d538\") " pod="openshift-route-controller-manager/route-controller-manager-dbb4dd9bc-btzt5" Jan 21 07:00:57 crc kubenswrapper[4893]: I0121 07:00:57.461746 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb2pg\" (UniqueName: \"kubernetes.io/projected/ae416462-7fdb-40bc-8fda-91eb26e2d538-kube-api-access-hb2pg\") pod \"route-controller-manager-dbb4dd9bc-btzt5\" (UID: \"ae416462-7fdb-40bc-8fda-91eb26e2d538\") " pod="openshift-route-controller-manager/route-controller-manager-dbb4dd9bc-btzt5" Jan 21 07:00:57 crc kubenswrapper[4893]: I0121 07:00:57.461812 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae416462-7fdb-40bc-8fda-91eb26e2d538-config\") pod \"route-controller-manager-dbb4dd9bc-btzt5\" (UID: \"ae416462-7fdb-40bc-8fda-91eb26e2d538\") " pod="openshift-route-controller-manager/route-controller-manager-dbb4dd9bc-btzt5" Jan 21 07:00:57 crc kubenswrapper[4893]: I0121 07:00:57.461888 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ae416462-7fdb-40bc-8fda-91eb26e2d538-client-ca\") pod \"route-controller-manager-dbb4dd9bc-btzt5\" (UID: \"ae416462-7fdb-40bc-8fda-91eb26e2d538\") " pod="openshift-route-controller-manager/route-controller-manager-dbb4dd9bc-btzt5" Jan 21 07:00:57 crc kubenswrapper[4893]: I0121 07:00:57.461933 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5557d7d-8cbe-4371-a58f-ebee3d46b285-config\") pod \"controller-manager-6fc6b99b96-dg5hb\" (UID: \"b5557d7d-8cbe-4371-a58f-ebee3d46b285\") " pod="openshift-controller-manager/controller-manager-6fc6b99b96-dg5hb" Jan 21 07:00:57 crc kubenswrapper[4893]: I0121 07:00:57.461975 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae416462-7fdb-40bc-8fda-91eb26e2d538-serving-cert\") pod \"route-controller-manager-dbb4dd9bc-btzt5\" (UID: \"ae416462-7fdb-40bc-8fda-91eb26e2d538\") " pod="openshift-route-controller-manager/route-controller-manager-dbb4dd9bc-btzt5" Jan 21 07:00:57 crc kubenswrapper[4893]: I0121 07:00:57.462007 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b5557d7d-8cbe-4371-a58f-ebee3d46b285-proxy-ca-bundles\") pod \"controller-manager-6fc6b99b96-dg5hb\" (UID: \"b5557d7d-8cbe-4371-a58f-ebee3d46b285\") " pod="openshift-controller-manager/controller-manager-6fc6b99b96-dg5hb" Jan 21 07:00:57 crc kubenswrapper[4893]: I0121 07:00:57.462051 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5557d7d-8cbe-4371-a58f-ebee3d46b285-serving-cert\") pod \"controller-manager-6fc6b99b96-dg5hb\" (UID: \"b5557d7d-8cbe-4371-a58f-ebee3d46b285\") " pod="openshift-controller-manager/controller-manager-6fc6b99b96-dg5hb" Jan 21 07:00:57 crc kubenswrapper[4893]: I0121 07:00:57.462111 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7zsd\" (UniqueName: \"kubernetes.io/projected/b5557d7d-8cbe-4371-a58f-ebee3d46b285-kube-api-access-v7zsd\") pod \"controller-manager-6fc6b99b96-dg5hb\" 
(UID: \"b5557d7d-8cbe-4371-a58f-ebee3d46b285\") " pod="openshift-controller-manager/controller-manager-6fc6b99b96-dg5hb" Jan 21 07:00:57 crc kubenswrapper[4893]: I0121 07:00:57.462193 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b5557d7d-8cbe-4371-a58f-ebee3d46b285-client-ca\") pod \"controller-manager-6fc6b99b96-dg5hb\" (UID: \"b5557d7d-8cbe-4371-a58f-ebee3d46b285\") " pod="openshift-controller-manager/controller-manager-6fc6b99b96-dg5hb" Jan 21 07:00:57 crc kubenswrapper[4893]: I0121 07:00:57.463528 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ae416462-7fdb-40bc-8fda-91eb26e2d538-client-ca\") pod \"route-controller-manager-dbb4dd9bc-btzt5\" (UID: \"ae416462-7fdb-40bc-8fda-91eb26e2d538\") " pod="openshift-route-controller-manager/route-controller-manager-dbb4dd9bc-btzt5" Jan 21 07:00:57 crc kubenswrapper[4893]: I0121 07:00:57.463896 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b5557d7d-8cbe-4371-a58f-ebee3d46b285-client-ca\") pod \"controller-manager-6fc6b99b96-dg5hb\" (UID: \"b5557d7d-8cbe-4371-a58f-ebee3d46b285\") " pod="openshift-controller-manager/controller-manager-6fc6b99b96-dg5hb" Jan 21 07:00:57 crc kubenswrapper[4893]: I0121 07:00:57.463977 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5557d7d-8cbe-4371-a58f-ebee3d46b285-config\") pod \"controller-manager-6fc6b99b96-dg5hb\" (UID: \"b5557d7d-8cbe-4371-a58f-ebee3d46b285\") " pod="openshift-controller-manager/controller-manager-6fc6b99b96-dg5hb" Jan 21 07:00:57 crc kubenswrapper[4893]: I0121 07:00:57.464283 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b5557d7d-8cbe-4371-a58f-ebee3d46b285-proxy-ca-bundles\") pod \"controller-manager-6fc6b99b96-dg5hb\" (UID: \"b5557d7d-8cbe-4371-a58f-ebee3d46b285\") " pod="openshift-controller-manager/controller-manager-6fc6b99b96-dg5hb" Jan 21 07:00:57 crc kubenswrapper[4893]: I0121 07:00:57.464435 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae416462-7fdb-40bc-8fda-91eb26e2d538-config\") pod \"route-controller-manager-dbb4dd9bc-btzt5\" (UID: \"ae416462-7fdb-40bc-8fda-91eb26e2d538\") " pod="openshift-route-controller-manager/route-controller-manager-dbb4dd9bc-btzt5" Jan 21 07:00:57 crc kubenswrapper[4893]: I0121 07:00:57.467624 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae416462-7fdb-40bc-8fda-91eb26e2d538-serving-cert\") pod \"route-controller-manager-dbb4dd9bc-btzt5\" (UID: \"ae416462-7fdb-40bc-8fda-91eb26e2d538\") " pod="openshift-route-controller-manager/route-controller-manager-dbb4dd9bc-btzt5" Jan 21 07:00:57 crc kubenswrapper[4893]: I0121 07:00:57.467761 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5557d7d-8cbe-4371-a58f-ebee3d46b285-serving-cert\") pod \"controller-manager-6fc6b99b96-dg5hb\" (UID: \"b5557d7d-8cbe-4371-a58f-ebee3d46b285\") " pod="openshift-controller-manager/controller-manager-6fc6b99b96-dg5hb" Jan 21 07:00:57 crc kubenswrapper[4893]: I0121 07:00:57.480126 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-hb2pg\" (UniqueName: \"kubernetes.io/projected/ae416462-7fdb-40bc-8fda-91eb26e2d538-kube-api-access-hb2pg\") pod \"route-controller-manager-dbb4dd9bc-btzt5\" (UID: \"ae416462-7fdb-40bc-8fda-91eb26e2d538\") " pod="openshift-route-controller-manager/route-controller-manager-dbb4dd9bc-btzt5" Jan 21 07:00:57 crc kubenswrapper[4893]: I0121 07:00:57.489809 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7zsd\" (UniqueName: \"kubernetes.io/projected/b5557d7d-8cbe-4371-a58f-ebee3d46b285-kube-api-access-v7zsd\") pod \"controller-manager-6fc6b99b96-dg5hb\" (UID: \"b5557d7d-8cbe-4371-a58f-ebee3d46b285\") " pod="openshift-controller-manager/controller-manager-6fc6b99b96-dg5hb" Jan 21 07:00:57 crc kubenswrapper[4893]: I0121 07:00:57.491331 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6fc6b99b96-dg5hb" Jan 21 07:00:57 crc kubenswrapper[4893]: I0121 07:00:57.505283 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dbb4dd9bc-btzt5" Jan 21 07:00:57 crc kubenswrapper[4893]: I0121 07:00:57.594133 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="234ee8a0-1a20-46d3-bec7-b0830c8e23ed" path="/var/lib/kubelet/pods/234ee8a0-1a20-46d3-bec7-b0830c8e23ed/volumes" Jan 21 07:00:57 crc kubenswrapper[4893]: I0121 07:00:57.595466 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aabf174c-2750-48af-8e68-1fa2f9f63965" path="/var/lib/kubelet/pods/aabf174c-2750-48af-8e68-1fa2f9f63965/volumes" Jan 21 07:00:57 crc kubenswrapper[4893]: I0121 07:00:57.960777 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dbb4dd9bc-btzt5"] Jan 21 07:00:57 crc kubenswrapper[4893]: I0121 07:00:57.969743 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6fc6b99b96-dg5hb"] Jan 21 07:00:58 crc kubenswrapper[4893]: I0121 07:00:58.333103 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dbb4dd9bc-btzt5" event={"ID":"ae416462-7fdb-40bc-8fda-91eb26e2d538","Type":"ContainerStarted","Data":"10babac5dc8e7755e6a3d15f373d1265a18ed012b601abf480e06dbe672b7e30"} Jan 21 07:00:58 crc kubenswrapper[4893]: I0121 07:00:58.333573 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-dbb4dd9bc-btzt5" Jan 21 07:00:58 crc kubenswrapper[4893]: I0121 07:00:58.333596 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dbb4dd9bc-btzt5" event={"ID":"ae416462-7fdb-40bc-8fda-91eb26e2d538","Type":"ContainerStarted","Data":"f0dd2af9c01e1d0b1d981fd26cd5a9398bf62d10ec516fca78619d24004252d6"} Jan 21 07:00:58 crc kubenswrapper[4893]: I0121 07:00:58.334662 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6fc6b99b96-dg5hb" event={"ID":"b5557d7d-8cbe-4371-a58f-ebee3d46b285","Type":"ContainerStarted","Data":"eaf447eb5962e778c2f0f757b4fca688c156ddacb2002300a248d59a9b99e3e3"} Jan 21 07:00:58 crc kubenswrapper[4893]: I0121 07:00:58.334724 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6fc6b99b96-dg5hb" 
event={"ID":"b5557d7d-8cbe-4371-a58f-ebee3d46b285","Type":"ContainerStarted","Data":"bf14152d7f224a0b39d5c553ab978b8eaf65cd1841d75d80cf84745ba5fcd7ee"} Jan 21 07:00:58 crc kubenswrapper[4893]: I0121 07:00:58.335261 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6fc6b99b96-dg5hb" Jan 21 07:00:58 crc kubenswrapper[4893]: I0121 07:00:58.340086 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6fc6b99b96-dg5hb" Jan 21 07:00:58 crc kubenswrapper[4893]: I0121 07:00:58.356194 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-dbb4dd9bc-btzt5" podStartSLOduration=3.35613615 podStartE2EDuration="3.35613615s" podCreationTimestamp="2026-01-21 07:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:00:58.353060348 +0000 UTC m=+399.583406250" watchObservedRunningTime="2026-01-21 07:00:58.35613615 +0000 UTC m=+399.586482062" Jan 21 07:00:58 crc kubenswrapper[4893]: I0121 07:00:58.376948 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6fc6b99b96-dg5hb" podStartSLOduration=3.376917607 podStartE2EDuration="3.376917607s" podCreationTimestamp="2026-01-21 07:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:00:58.375010536 +0000 UTC m=+399.605356438" watchObservedRunningTime="2026-01-21 07:00:58.376917607 +0000 UTC m=+399.607263509" Jan 21 07:00:58 crc kubenswrapper[4893]: I0121 07:00:58.432115 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-t86sj"] Jan 21 07:00:58 crc kubenswrapper[4893]: I0121 07:00:58.432377 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-t86sj" podUID="be44c297-715e-45f6-b165-244c39484f15" containerName="registry-server" containerID="cri-o://52c547879e3b4d65287c908ef33383dc84a87279c7f97ebd4752ff8c64c7ba4d" gracePeriod=2 Jan 21 07:00:58 crc kubenswrapper[4893]: I0121 07:00:58.629047 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8bgpm"] Jan 21 07:00:58 crc kubenswrapper[4893]: I0121 07:00:58.629579 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8bgpm" podUID="2514419f-4c60-442d-bbc7-0c9b8c765cc4" containerName="registry-server" containerID="cri-o://1bebeb105c8d65c04699978ada83e80e5a52886aad94e7cad8f6ae2acaad975a" gracePeriod=2 Jan 21 07:00:58 crc kubenswrapper[4893]: I0121 07:00:58.656451 4893 patch_prober.go:28] interesting pod/machine-config-daemon-hg78p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 07:00:58 crc kubenswrapper[4893]: I0121 07:00:58.656525 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
Jan 21 07:00:58 crc kubenswrapper[4893]: I0121 07:00:58.656451 4893 patch_prober.go:28] interesting pod/machine-config-daemon-hg78p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 07:00:58 crc kubenswrapper[4893]: I0121 07:00:58.656525 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 07:00:58 crc kubenswrapper[4893]: E0121 07:00:58.697353 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 52c547879e3b4d65287c908ef33383dc84a87279c7f97ebd4752ff8c64c7ba4d is running failed: container process not found" containerID="52c547879e3b4d65287c908ef33383dc84a87279c7f97ebd4752ff8c64c7ba4d" cmd=["grpc_health_probe","-addr=:50051"]
Jan 21 07:00:58 crc kubenswrapper[4893]: E0121 07:00:58.697984 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 52c547879e3b4d65287c908ef33383dc84a87279c7f97ebd4752ff8c64c7ba4d is running failed: container process not found" containerID="52c547879e3b4d65287c908ef33383dc84a87279c7f97ebd4752ff8c64c7ba4d" cmd=["grpc_health_probe","-addr=:50051"]
Jan 21 07:00:58 crc kubenswrapper[4893]: E0121 07:00:58.698250 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 52c547879e3b4d65287c908ef33383dc84a87279c7f97ebd4752ff8c64c7ba4d is running failed: container process not found" containerID="52c547879e3b4d65287c908ef33383dc84a87279c7f97ebd4752ff8c64c7ba4d" cmd=["grpc_health_probe","-addr=:50051"]
Jan 21 07:00:58 crc kubenswrapper[4893]: E0121 07:00:58.698296 4893 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 52c547879e3b4d65287c908ef33383dc84a87279c7f97ebd4752ff8c64c7ba4d is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-t86sj" podUID="be44c297-715e-45f6-b165-244c39484f15" containerName="registry-server"
Jan 21 07:00:58 crc kubenswrapper[4893]: I0121 07:00:58.776372 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-dbb4dd9bc-btzt5"
Jan 21 07:00:58 crc kubenswrapper[4893]: I0121 07:00:58.924533 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t86sj"
Jan 21 07:00:59 crc kubenswrapper[4893]: E0121 07:00:59.004616 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1bebeb105c8d65c04699978ada83e80e5a52886aad94e7cad8f6ae2acaad975a is running failed: container process not found" containerID="1bebeb105c8d65c04699978ada83e80e5a52886aad94e7cad8f6ae2acaad975a" cmd=["grpc_health_probe","-addr=:50051"]
Jan 21 07:00:59 crc kubenswrapper[4893]: E0121 07:00:59.004983 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1bebeb105c8d65c04699978ada83e80e5a52886aad94e7cad8f6ae2acaad975a is running failed: container process not found" containerID="1bebeb105c8d65c04699978ada83e80e5a52886aad94e7cad8f6ae2acaad975a" cmd=["grpc_health_probe","-addr=:50051"]
Jan 21 07:00:59 crc kubenswrapper[4893]: E0121 07:00:59.005254 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1bebeb105c8d65c04699978ada83e80e5a52886aad94e7cad8f6ae2acaad975a is running failed: container process not found" containerID="1bebeb105c8d65c04699978ada83e80e5a52886aad94e7cad8f6ae2acaad975a" cmd=["grpc_health_probe","-addr=:50051"]
Jan 21 07:00:59 crc kubenswrapper[4893]: E0121 07:00:59.005287 4893 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1bebeb105c8d65c04699978ada83e80e5a52886aad94e7cad8f6ae2acaad975a is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-8bgpm" podUID="2514419f-4c60-442d-bbc7-0c9b8c765cc4" containerName="registry-server"
Jan 21 07:00:59 crc kubenswrapper[4893]: I0121 07:00:59.049125 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8bgpm"
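The failed ExecSync probes above were running grpc_health_probe -addr=:50051 inside the registry-server containers; the probes error with NotFound only because the containers are already being torn down. A rough Go equivalent of what that probe binary checks, using the standard gRPC health v1 API (a sketch only; the real grpc_health_probe also handles TLS, service names, and distinct exit codes):

package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	// Plaintext connection to the same address the probe command uses.
	conn, err := grpc.NewClient("localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
	if err != nil {
		log.Fatalf("health check RPC failed: %v", err) // analogous to the errors logged above
	}
	if resp.GetStatus() != healthpb.HealthCheckResponse_SERVING {
		log.Fatalf("not serving: %v", resp.GetStatus())
	}
	log.Println("SERVING")
}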
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:00:59 crc kubenswrapper[4893]: I0121 07:00:59.102767 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be44c297-715e-45f6-b165-244c39484f15-kube-api-access-hmzrm" (OuterVolumeSpecName: "kube-api-access-hmzrm") pod "be44c297-715e-45f6-b165-244c39484f15" (UID: "be44c297-715e-45f6-b165-244c39484f15"). InnerVolumeSpecName "kube-api-access-hmzrm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:00:59 crc kubenswrapper[4893]: I0121 07:00:59.149263 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be44c297-715e-45f6-b165-244c39484f15-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "be44c297-715e-45f6-b165-244c39484f15" (UID: "be44c297-715e-45f6-b165-244c39484f15"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:00:59 crc kubenswrapper[4893]: I0121 07:00:59.197755 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws7jj\" (UniqueName: \"kubernetes.io/projected/2514419f-4c60-442d-bbc7-0c9b8c765cc4-kube-api-access-ws7jj\") pod \"2514419f-4c60-442d-bbc7-0c9b8c765cc4\" (UID: \"2514419f-4c60-442d-bbc7-0c9b8c765cc4\") " Jan 21 07:00:59 crc kubenswrapper[4893]: I0121 07:00:59.197813 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2514419f-4c60-442d-bbc7-0c9b8c765cc4-utilities\") pod \"2514419f-4c60-442d-bbc7-0c9b8c765cc4\" (UID: \"2514419f-4c60-442d-bbc7-0c9b8c765cc4\") " Jan 21 07:00:59 crc kubenswrapper[4893]: I0121 07:00:59.197889 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2514419f-4c60-442d-bbc7-0c9b8c765cc4-catalog-content\") pod \"2514419f-4c60-442d-bbc7-0c9b8c765cc4\" (UID: \"2514419f-4c60-442d-bbc7-0c9b8c765cc4\") " Jan 21 07:00:59 crc kubenswrapper[4893]: I0121 07:00:59.198153 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmzrm\" (UniqueName: \"kubernetes.io/projected/be44c297-715e-45f6-b165-244c39484f15-kube-api-access-hmzrm\") on node \"crc\" DevicePath \"\"" Jan 21 07:00:59 crc kubenswrapper[4893]: I0121 07:00:59.198169 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be44c297-715e-45f6-b165-244c39484f15-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 07:00:59 crc kubenswrapper[4893]: I0121 07:00:59.198178 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be44c297-715e-45f6-b165-244c39484f15-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 07:00:59 crc kubenswrapper[4893]: I0121 07:00:59.199220 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2514419f-4c60-442d-bbc7-0c9b8c765cc4-utilities" (OuterVolumeSpecName: "utilities") pod "2514419f-4c60-442d-bbc7-0c9b8c765cc4" (UID: "2514419f-4c60-442d-bbc7-0c9b8c765cc4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:00:59 crc kubenswrapper[4893]: I0121 07:00:59.201051 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2514419f-4c60-442d-bbc7-0c9b8c765cc4-kube-api-access-ws7jj" (OuterVolumeSpecName: "kube-api-access-ws7jj") pod "2514419f-4c60-442d-bbc7-0c9b8c765cc4" (UID: "2514419f-4c60-442d-bbc7-0c9b8c765cc4"). InnerVolumeSpecName "kube-api-access-ws7jj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:00:59 crc kubenswrapper[4893]: I0121 07:00:59.248401 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2514419f-4c60-442d-bbc7-0c9b8c765cc4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2514419f-4c60-442d-bbc7-0c9b8c765cc4" (UID: "2514419f-4c60-442d-bbc7-0c9b8c765cc4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:00:59 crc kubenswrapper[4893]: I0121 07:00:59.299594 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ws7jj\" (UniqueName: \"kubernetes.io/projected/2514419f-4c60-442d-bbc7-0c9b8c765cc4-kube-api-access-ws7jj\") on node \"crc\" DevicePath \"\"" Jan 21 07:00:59 crc kubenswrapper[4893]: I0121 07:00:59.299642 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2514419f-4c60-442d-bbc7-0c9b8c765cc4-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 07:00:59 crc kubenswrapper[4893]: I0121 07:00:59.299659 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2514419f-4c60-442d-bbc7-0c9b8c765cc4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 07:00:59 crc kubenswrapper[4893]: I0121 07:00:59.342949 4893 generic.go:334] "Generic (PLEG): container finished" podID="2514419f-4c60-442d-bbc7-0c9b8c765cc4" containerID="1bebeb105c8d65c04699978ada83e80e5a52886aad94e7cad8f6ae2acaad975a" exitCode=0 Jan 21 07:00:59 crc kubenswrapper[4893]: I0121 07:00:59.343066 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8bgpm" event={"ID":"2514419f-4c60-442d-bbc7-0c9b8c765cc4","Type":"ContainerDied","Data":"1bebeb105c8d65c04699978ada83e80e5a52886aad94e7cad8f6ae2acaad975a"} Jan 21 07:00:59 crc kubenswrapper[4893]: I0121 07:00:59.343071 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8bgpm" Jan 21 07:00:59 crc kubenswrapper[4893]: I0121 07:00:59.343151 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8bgpm" event={"ID":"2514419f-4c60-442d-bbc7-0c9b8c765cc4","Type":"ContainerDied","Data":"05b7e9594d77e484701f6552c95a1887450c91d0bc6dc8c7c49f49107bf7e3d2"} Jan 21 07:00:59 crc kubenswrapper[4893]: I0121 07:00:59.343224 4893 scope.go:117] "RemoveContainer" containerID="1bebeb105c8d65c04699978ada83e80e5a52886aad94e7cad8f6ae2acaad975a" Jan 21 07:00:59 crc kubenswrapper[4893]: I0121 07:00:59.346128 4893 generic.go:334] "Generic (PLEG): container finished" podID="be44c297-715e-45f6-b165-244c39484f15" containerID="52c547879e3b4d65287c908ef33383dc84a87279c7f97ebd4752ff8c64c7ba4d" exitCode=0 Jan 21 07:00:59 crc kubenswrapper[4893]: I0121 07:00:59.346218 4893 util.go:48] "No ready sandbox for pod can be found. 
Jan 21 07:00:59 crc kubenswrapper[4893]: I0121 07:00:59.346218 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t86sj"
Jan 21 07:00:59 crc kubenswrapper[4893]: I0121 07:00:59.346232 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t86sj" event={"ID":"be44c297-715e-45f6-b165-244c39484f15","Type":"ContainerDied","Data":"52c547879e3b4d65287c908ef33383dc84a87279c7f97ebd4752ff8c64c7ba4d"}
Jan 21 07:00:59 crc kubenswrapper[4893]: I0121 07:00:59.346333 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t86sj" event={"ID":"be44c297-715e-45f6-b165-244c39484f15","Type":"ContainerDied","Data":"0a1082100ae5b224e283306d3631031ec6a52b5812d7d59a658cfe843a49374e"}
Jan 21 07:00:59 crc kubenswrapper[4893]: I0121 07:00:59.374349 4893 scope.go:117] "RemoveContainer" containerID="adb96632c9997f51677fdf01098c030fd42d92a76852a57dec55f5b337a8cbea"
Jan 21 07:00:59 crc kubenswrapper[4893]: I0121 07:00:59.388413 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8bgpm"]
Jan 21 07:00:59 crc kubenswrapper[4893]: I0121 07:00:59.395719 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8bgpm"]
Jan 21 07:00:59 crc kubenswrapper[4893]: I0121 07:00:59.402394 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-t86sj"]
Jan 21 07:00:59 crc kubenswrapper[4893]: I0121 07:00:59.407047 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-t86sj"]
Jan 21 07:00:59 crc kubenswrapper[4893]: I0121 07:00:59.409517 4893 scope.go:117] "RemoveContainer" containerID="8d80f0b8b93601dfaee94e6e8ec28d09716c651d623887fd56734948638c4e9c"
Jan 21 07:00:59 crc kubenswrapper[4893]: I0121 07:00:59.425658 4893 scope.go:117] "RemoveContainer" containerID="1bebeb105c8d65c04699978ada83e80e5a52886aad94e7cad8f6ae2acaad975a"
Jan 21 07:00:59 crc kubenswrapper[4893]: E0121 07:00:59.426353 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bebeb105c8d65c04699978ada83e80e5a52886aad94e7cad8f6ae2acaad975a\": container with ID starting with 1bebeb105c8d65c04699978ada83e80e5a52886aad94e7cad8f6ae2acaad975a not found: ID does not exist" containerID="1bebeb105c8d65c04699978ada83e80e5a52886aad94e7cad8f6ae2acaad975a"
Jan 21 07:00:59 crc kubenswrapper[4893]: I0121 07:00:59.426538 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bebeb105c8d65c04699978ada83e80e5a52886aad94e7cad8f6ae2acaad975a"} err="failed to get container status \"1bebeb105c8d65c04699978ada83e80e5a52886aad94e7cad8f6ae2acaad975a\": rpc error: code = NotFound desc = could not find container \"1bebeb105c8d65c04699978ada83e80e5a52886aad94e7cad8f6ae2acaad975a\": container with ID starting with 1bebeb105c8d65c04699978ada83e80e5a52886aad94e7cad8f6ae2acaad975a not found: ID does not exist"
Jan 21 07:00:59 crc kubenswrapper[4893]: I0121 07:00:59.426691 4893 scope.go:117] "RemoveContainer" containerID="adb96632c9997f51677fdf01098c030fd42d92a76852a57dec55f5b337a8cbea"
Jan 21 07:00:59 crc kubenswrapper[4893]: E0121 07:00:59.427127 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adb96632c9997f51677fdf01098c030fd42d92a76852a57dec55f5b337a8cbea\": container with ID starting with adb96632c9997f51677fdf01098c030fd42d92a76852a57dec55f5b337a8cbea not found: ID does not exist" containerID="adb96632c9997f51677fdf01098c030fd42d92a76852a57dec55f5b337a8cbea"
Jan 21 07:00:59 crc kubenswrapper[4893]: I0121 07:00:59.427161 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adb96632c9997f51677fdf01098c030fd42d92a76852a57dec55f5b337a8cbea"} err="failed to get container status \"adb96632c9997f51677fdf01098c030fd42d92a76852a57dec55f5b337a8cbea\": rpc error: code = NotFound desc = could not find container \"adb96632c9997f51677fdf01098c030fd42d92a76852a57dec55f5b337a8cbea\": container with ID starting with adb96632c9997f51677fdf01098c030fd42d92a76852a57dec55f5b337a8cbea not found: ID does not exist"
Jan 21 07:00:59 crc kubenswrapper[4893]: I0121 07:00:59.427187 4893 scope.go:117] "RemoveContainer" containerID="8d80f0b8b93601dfaee94e6e8ec28d09716c651d623887fd56734948638c4e9c"
Jan 21 07:00:59 crc kubenswrapper[4893]: E0121 07:00:59.427548 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d80f0b8b93601dfaee94e6e8ec28d09716c651d623887fd56734948638c4e9c\": container with ID starting with 8d80f0b8b93601dfaee94e6e8ec28d09716c651d623887fd56734948638c4e9c not found: ID does not exist" containerID="8d80f0b8b93601dfaee94e6e8ec28d09716c651d623887fd56734948638c4e9c"
Jan 21 07:00:59 crc kubenswrapper[4893]: I0121 07:00:59.427575 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d80f0b8b93601dfaee94e6e8ec28d09716c651d623887fd56734948638c4e9c"} err="failed to get container status \"8d80f0b8b93601dfaee94e6e8ec28d09716c651d623887fd56734948638c4e9c\": rpc error: code = NotFound desc = could not find container \"8d80f0b8b93601dfaee94e6e8ec28d09716c651d623887fd56734948638c4e9c\": container with ID starting with 8d80f0b8b93601dfaee94e6e8ec28d09716c651d623887fd56734948638c4e9c not found: ID does not exist"
Jan 21 07:00:59 crc kubenswrapper[4893]: I0121 07:00:59.427593 4893 scope.go:117] "RemoveContainer" containerID="52c547879e3b4d65287c908ef33383dc84a87279c7f97ebd4752ff8c64c7ba4d"
Jan 21 07:00:59 crc kubenswrapper[4893]: I0121 07:00:59.442235 4893 scope.go:117] "RemoveContainer" containerID="f297d6fd178fa4cd728c91f196380b23b5286f4afb3f3e3cd00d5d4197db1b7f"
Jan 21 07:00:59 crc kubenswrapper[4893]: I0121 07:00:59.459370 4893 scope.go:117] "RemoveContainer" containerID="6e90638c7e2fe84da46c56cec7926888efc3ac30d454ea845a2742b8420340da"
Jan 21 07:00:59 crc kubenswrapper[4893]: I0121 07:00:59.491415 4893 scope.go:117] "RemoveContainer" containerID="52c547879e3b4d65287c908ef33383dc84a87279c7f97ebd4752ff8c64c7ba4d"
Jan 21 07:00:59 crc kubenswrapper[4893]: E0121 07:00:59.492234 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52c547879e3b4d65287c908ef33383dc84a87279c7f97ebd4752ff8c64c7ba4d\": container with ID starting with 52c547879e3b4d65287c908ef33383dc84a87279c7f97ebd4752ff8c64c7ba4d not found: ID does not exist" containerID="52c547879e3b4d65287c908ef33383dc84a87279c7f97ebd4752ff8c64c7ba4d"
Jan 21 07:00:59 crc kubenswrapper[4893]: I0121 07:00:59.492267 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52c547879e3b4d65287c908ef33383dc84a87279c7f97ebd4752ff8c64c7ba4d"} err="failed to get container status \"52c547879e3b4d65287c908ef33383dc84a87279c7f97ebd4752ff8c64c7ba4d\": rpc error: code = NotFound desc = could not find container \"52c547879e3b4d65287c908ef33383dc84a87279c7f97ebd4752ff8c64c7ba4d\": container with ID starting with 52c547879e3b4d65287c908ef33383dc84a87279c7f97ebd4752ff8c64c7ba4d not found: ID does not exist"
Jan 21 07:00:59 crc kubenswrapper[4893]: I0121 07:00:59.492290 4893 scope.go:117] "RemoveContainer" containerID="f297d6fd178fa4cd728c91f196380b23b5286f4afb3f3e3cd00d5d4197db1b7f"
Jan 21 07:00:59 crc kubenswrapper[4893]: E0121 07:00:59.495377 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f297d6fd178fa4cd728c91f196380b23b5286f4afb3f3e3cd00d5d4197db1b7f\": container with ID starting with f297d6fd178fa4cd728c91f196380b23b5286f4afb3f3e3cd00d5d4197db1b7f not found: ID does not exist" containerID="f297d6fd178fa4cd728c91f196380b23b5286f4afb3f3e3cd00d5d4197db1b7f"
Jan 21 07:00:59 crc kubenswrapper[4893]: I0121 07:00:59.495400 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f297d6fd178fa4cd728c91f196380b23b5286f4afb3f3e3cd00d5d4197db1b7f"} err="failed to get container status \"f297d6fd178fa4cd728c91f196380b23b5286f4afb3f3e3cd00d5d4197db1b7f\": rpc error: code = NotFound desc = could not find container \"f297d6fd178fa4cd728c91f196380b23b5286f4afb3f3e3cd00d5d4197db1b7f\": container with ID starting with f297d6fd178fa4cd728c91f196380b23b5286f4afb3f3e3cd00d5d4197db1b7f not found: ID does not exist"
Jan 21 07:00:59 crc kubenswrapper[4893]: I0121 07:00:59.495417 4893 scope.go:117] "RemoveContainer" containerID="6e90638c7e2fe84da46c56cec7926888efc3ac30d454ea845a2742b8420340da"
Jan 21 07:00:59 crc kubenswrapper[4893]: E0121 07:00:59.497321 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e90638c7e2fe84da46c56cec7926888efc3ac30d454ea845a2742b8420340da\": container with ID starting with 6e90638c7e2fe84da46c56cec7926888efc3ac30d454ea845a2742b8420340da not found: ID does not exist" containerID="6e90638c7e2fe84da46c56cec7926888efc3ac30d454ea845a2742b8420340da"
Jan 21 07:00:59 crc kubenswrapper[4893]: I0121 07:00:59.497384 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e90638c7e2fe84da46c56cec7926888efc3ac30d454ea845a2742b8420340da"} err="failed to get container status \"6e90638c7e2fe84da46c56cec7926888efc3ac30d454ea845a2742b8420340da\": rpc error: code = NotFound desc = could not find container \"6e90638c7e2fe84da46c56cec7926888efc3ac30d454ea845a2742b8420340da\": container with ID starting with 6e90638c7e2fe84da46c56cec7926888efc3ac30d454ea845a2742b8420340da not found: ID does not exist"
Jan 21 07:00:59 crc kubenswrapper[4893]: I0121 07:00:59.590129 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2514419f-4c60-442d-bbc7-0c9b8c765cc4" path="/var/lib/kubelet/pods/2514419f-4c60-442d-bbc7-0c9b8c765cc4/volumes"
Jan 21 07:00:59 crc kubenswrapper[4893]: I0121 07:00:59.590939 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be44c297-715e-45f6-b165-244c39484f15" path="/var/lib/kubelet/pods/be44c297-715e-45f6-b165-244c39484f15/volumes"
Jan 21 07:01:00 crc kubenswrapper[4893]: I0121 07:01:00.228900 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dpt49"]
podUID="58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06" containerName="registry-server" containerID="cri-o://0a7ab862873cb85dca16c1cbb25f2bf1b69edef599ad269c10ad7843a22fcc76" gracePeriod=2 Jan 21 07:01:00 crc kubenswrapper[4893]: I0121 07:01:00.355292 4893 generic.go:334] "Generic (PLEG): container finished" podID="58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06" containerID="0a7ab862873cb85dca16c1cbb25f2bf1b69edef599ad269c10ad7843a22fcc76" exitCode=0 Jan 21 07:01:00 crc kubenswrapper[4893]: I0121 07:01:00.355370 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dpt49" event={"ID":"58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06","Type":"ContainerDied","Data":"0a7ab862873cb85dca16c1cbb25f2bf1b69edef599ad269c10ad7843a22fcc76"} Jan 21 07:01:00 crc kubenswrapper[4893]: I0121 07:01:00.654274 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dpt49" Jan 21 07:01:00 crc kubenswrapper[4893]: I0121 07:01:00.720193 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06-catalog-content\") pod \"58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06\" (UID: \"58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06\") " Jan 21 07:01:00 crc kubenswrapper[4893]: I0121 07:01:00.720547 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06-utilities\") pod \"58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06\" (UID: \"58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06\") " Jan 21 07:01:00 crc kubenswrapper[4893]: I0121 07:01:00.720597 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hks2h\" (UniqueName: \"kubernetes.io/projected/58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06-kube-api-access-hks2h\") pod \"58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06\" (UID: \"58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06\") " Jan 21 07:01:00 crc kubenswrapper[4893]: I0121 07:01:00.721892 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06-utilities" (OuterVolumeSpecName: "utilities") pod "58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06" (UID: "58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:01:00 crc kubenswrapper[4893]: I0121 07:01:00.729868 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06-kube-api-access-hks2h" (OuterVolumeSpecName: "kube-api-access-hks2h") pod "58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06" (UID: "58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06"). InnerVolumeSpecName "kube-api-access-hks2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:01:00 crc kubenswrapper[4893]: I0121 07:01:00.747343 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06" (UID: "58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:01:00 crc kubenswrapper[4893]: I0121 07:01:00.821786 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 07:01:00 crc kubenswrapper[4893]: I0121 07:01:00.821827 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hks2h\" (UniqueName: \"kubernetes.io/projected/58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06-kube-api-access-hks2h\") on node \"crc\" DevicePath \"\"" Jan 21 07:01:00 crc kubenswrapper[4893]: I0121 07:01:00.821842 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 07:01:01 crc kubenswrapper[4893]: I0121 07:01:01.229721 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4s5jn"] Jan 21 07:01:01 crc kubenswrapper[4893]: I0121 07:01:01.230050 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4s5jn" podUID="76395561-db8b-4fac-a5fd-14267030252a" containerName="registry-server" containerID="cri-o://d2df099c10c30f49e9d8e3efa06b7ab9d76e2ca1ddf24231ce626c559b91ae4c" gracePeriod=2 Jan 21 07:01:01 crc kubenswrapper[4893]: I0121 07:01:01.375737 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dpt49" Jan 21 07:01:01 crc kubenswrapper[4893]: I0121 07:01:01.375727 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dpt49" event={"ID":"58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06","Type":"ContainerDied","Data":"ec40c58ec72ad1743bbfebb40d225141d438df25e9b3e0409a9ada86c347a67f"} Jan 21 07:01:01 crc kubenswrapper[4893]: I0121 07:01:01.383467 4893 scope.go:117] "RemoveContainer" containerID="0a7ab862873cb85dca16c1cbb25f2bf1b69edef599ad269c10ad7843a22fcc76" Jan 21 07:01:01 crc kubenswrapper[4893]: I0121 07:01:01.395586 4893 generic.go:334] "Generic (PLEG): container finished" podID="76395561-db8b-4fac-a5fd-14267030252a" containerID="d2df099c10c30f49e9d8e3efa06b7ab9d76e2ca1ddf24231ce626c559b91ae4c" exitCode=0 Jan 21 07:01:01 crc kubenswrapper[4893]: I0121 07:01:01.395655 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4s5jn" event={"ID":"76395561-db8b-4fac-a5fd-14267030252a","Type":"ContainerDied","Data":"d2df099c10c30f49e9d8e3efa06b7ab9d76e2ca1ddf24231ce626c559b91ae4c"} Jan 21 07:01:01 crc kubenswrapper[4893]: I0121 07:01:01.408585 4893 scope.go:117] "RemoveContainer" containerID="d71ce5950a0fe4a1a1362fb2324f67b0633d83a6caa802dd4384b515728c02da" Jan 21 07:01:01 crc kubenswrapper[4893]: I0121 07:01:01.431939 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dpt49"] Jan 21 07:01:01 crc kubenswrapper[4893]: I0121 07:01:01.434968 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dpt49"] Jan 21 07:01:01 crc kubenswrapper[4893]: I0121 07:01:01.464157 4893 scope.go:117] "RemoveContainer" containerID="19fa4571bcf0da674aec8fba766ee789dfa7efd01ea3b63433febf79eb05ba29" Jan 21 07:01:01 crc kubenswrapper[4893]: I0121 07:01:01.591385 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06" 
path="/var/lib/kubelet/pods/58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06/volumes" Jan 21 07:01:01 crc kubenswrapper[4893]: I0121 07:01:01.728339 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4s5jn" Jan 21 07:01:01 crc kubenswrapper[4893]: I0121 07:01:01.870143 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zd2v2\" (UniqueName: \"kubernetes.io/projected/76395561-db8b-4fac-a5fd-14267030252a-kube-api-access-zd2v2\") pod \"76395561-db8b-4fac-a5fd-14267030252a\" (UID: \"76395561-db8b-4fac-a5fd-14267030252a\") " Jan 21 07:01:01 crc kubenswrapper[4893]: I0121 07:01:01.870289 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76395561-db8b-4fac-a5fd-14267030252a-utilities\") pod \"76395561-db8b-4fac-a5fd-14267030252a\" (UID: \"76395561-db8b-4fac-a5fd-14267030252a\") " Jan 21 07:01:01 crc kubenswrapper[4893]: I0121 07:01:01.870561 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76395561-db8b-4fac-a5fd-14267030252a-catalog-content\") pod \"76395561-db8b-4fac-a5fd-14267030252a\" (UID: \"76395561-db8b-4fac-a5fd-14267030252a\") " Jan 21 07:01:01 crc kubenswrapper[4893]: I0121 07:01:01.871453 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76395561-db8b-4fac-a5fd-14267030252a-utilities" (OuterVolumeSpecName: "utilities") pod "76395561-db8b-4fac-a5fd-14267030252a" (UID: "76395561-db8b-4fac-a5fd-14267030252a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:01:01 crc kubenswrapper[4893]: I0121 07:01:01.876039 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76395561-db8b-4fac-a5fd-14267030252a-kube-api-access-zd2v2" (OuterVolumeSpecName: "kube-api-access-zd2v2") pod "76395561-db8b-4fac-a5fd-14267030252a" (UID: "76395561-db8b-4fac-a5fd-14267030252a"). InnerVolumeSpecName "kube-api-access-zd2v2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:01:01 crc kubenswrapper[4893]: I0121 07:01:01.971913 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zd2v2\" (UniqueName: \"kubernetes.io/projected/76395561-db8b-4fac-a5fd-14267030252a-kube-api-access-zd2v2\") on node \"crc\" DevicePath \"\"" Jan 21 07:01:01 crc kubenswrapper[4893]: I0121 07:01:01.971952 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76395561-db8b-4fac-a5fd-14267030252a-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 07:01:02 crc kubenswrapper[4893]: I0121 07:01:02.019758 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76395561-db8b-4fac-a5fd-14267030252a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "76395561-db8b-4fac-a5fd-14267030252a" (UID: "76395561-db8b-4fac-a5fd-14267030252a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:01:02 crc kubenswrapper[4893]: I0121 07:01:02.073470 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76395561-db8b-4fac-a5fd-14267030252a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 07:01:02 crc kubenswrapper[4893]: I0121 07:01:02.402155 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4s5jn" event={"ID":"76395561-db8b-4fac-a5fd-14267030252a","Type":"ContainerDied","Data":"9050c610c8fcde67db812c93e91008cb203c9b493de5d90545180018c8c95956"} Jan 21 07:01:02 crc kubenswrapper[4893]: I0121 07:01:02.402252 4893 scope.go:117] "RemoveContainer" containerID="d2df099c10c30f49e9d8e3efa06b7ab9d76e2ca1ddf24231ce626c559b91ae4c" Jan 21 07:01:02 crc kubenswrapper[4893]: I0121 07:01:02.402268 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4s5jn" Jan 21 07:01:02 crc kubenswrapper[4893]: I0121 07:01:02.425272 4893 scope.go:117] "RemoveContainer" containerID="89a6d80ca59e8ae507eb004e60365233070b672b6286b941ca70de352b6dbfa4" Jan 21 07:01:02 crc kubenswrapper[4893]: I0121 07:01:02.448941 4893 scope.go:117] "RemoveContainer" containerID="0d372fcf8f1041071c54de18c59a9cd18168c4e9ea51543d5e2798771f13580e" Jan 21 07:01:02 crc kubenswrapper[4893]: I0121 07:01:02.454874 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4s5jn"] Jan 21 07:01:02 crc kubenswrapper[4893]: I0121 07:01:02.467258 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4s5jn"] Jan 21 07:01:03 crc kubenswrapper[4893]: I0121 07:01:03.588522 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76395561-db8b-4fac-a5fd-14267030252a" path="/var/lib/kubelet/pods/76395561-db8b-4fac-a5fd-14267030252a/volumes" Jan 21 07:01:10 crc kubenswrapper[4893]: I0121 07:01:10.529105 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-xjz9j"] Jan 21 07:01:10 crc kubenswrapper[4893]: E0121 07:01:10.529863 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06" containerName="extract-content" Jan 21 07:01:10 crc kubenswrapper[4893]: I0121 07:01:10.529878 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06" containerName="extract-content" Jan 21 07:01:10 crc kubenswrapper[4893]: E0121 07:01:10.529889 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2514419f-4c60-442d-bbc7-0c9b8c765cc4" containerName="extract-content" Jan 21 07:01:10 crc kubenswrapper[4893]: I0121 07:01:10.529895 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="2514419f-4c60-442d-bbc7-0c9b8c765cc4" containerName="extract-content" Jan 21 07:01:10 crc kubenswrapper[4893]: E0121 07:01:10.529907 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06" containerName="extract-utilities" Jan 21 07:01:10 crc kubenswrapper[4893]: I0121 07:01:10.529916 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06" containerName="extract-utilities" Jan 21 07:01:10 crc kubenswrapper[4893]: E0121 07:01:10.529924 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be44c297-715e-45f6-b165-244c39484f15" containerName="registry-server" Jan 21 
07:01:10 crc kubenswrapper[4893]: I0121 07:01:10.529933 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="be44c297-715e-45f6-b165-244c39484f15" containerName="registry-server" Jan 21 07:01:10 crc kubenswrapper[4893]: E0121 07:01:10.529946 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76395561-db8b-4fac-a5fd-14267030252a" containerName="extract-utilities" Jan 21 07:01:10 crc kubenswrapper[4893]: I0121 07:01:10.529952 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="76395561-db8b-4fac-a5fd-14267030252a" containerName="extract-utilities" Jan 21 07:01:10 crc kubenswrapper[4893]: E0121 07:01:10.529964 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76395561-db8b-4fac-a5fd-14267030252a" containerName="registry-server" Jan 21 07:01:10 crc kubenswrapper[4893]: I0121 07:01:10.529971 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="76395561-db8b-4fac-a5fd-14267030252a" containerName="registry-server" Jan 21 07:01:10 crc kubenswrapper[4893]: E0121 07:01:10.529986 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2514419f-4c60-442d-bbc7-0c9b8c765cc4" containerName="registry-server" Jan 21 07:01:10 crc kubenswrapper[4893]: I0121 07:01:10.529992 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="2514419f-4c60-442d-bbc7-0c9b8c765cc4" containerName="registry-server" Jan 21 07:01:10 crc kubenswrapper[4893]: E0121 07:01:10.530003 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2514419f-4c60-442d-bbc7-0c9b8c765cc4" containerName="extract-utilities" Jan 21 07:01:10 crc kubenswrapper[4893]: I0121 07:01:10.530011 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="2514419f-4c60-442d-bbc7-0c9b8c765cc4" containerName="extract-utilities" Jan 21 07:01:10 crc kubenswrapper[4893]: E0121 07:01:10.530021 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76395561-db8b-4fac-a5fd-14267030252a" containerName="extract-content" Jan 21 07:01:10 crc kubenswrapper[4893]: I0121 07:01:10.530028 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="76395561-db8b-4fac-a5fd-14267030252a" containerName="extract-content" Jan 21 07:01:10 crc kubenswrapper[4893]: E0121 07:01:10.530038 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be44c297-715e-45f6-b165-244c39484f15" containerName="extract-utilities" Jan 21 07:01:10 crc kubenswrapper[4893]: I0121 07:01:10.530045 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="be44c297-715e-45f6-b165-244c39484f15" containerName="extract-utilities" Jan 21 07:01:10 crc kubenswrapper[4893]: E0121 07:01:10.530054 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be44c297-715e-45f6-b165-244c39484f15" containerName="extract-content" Jan 21 07:01:10 crc kubenswrapper[4893]: I0121 07:01:10.530061 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="be44c297-715e-45f6-b165-244c39484f15" containerName="extract-content" Jan 21 07:01:10 crc kubenswrapper[4893]: E0121 07:01:10.530070 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06" containerName="registry-server" Jan 21 07:01:10 crc kubenswrapper[4893]: I0121 07:01:10.530075 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06" containerName="registry-server" Jan 21 07:01:10 crc kubenswrapper[4893]: I0121 07:01:10.530165 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="58afbc98-0ff5-4eec-9ffb-3b9a1a8c6b06" 
containerName="registry-server" Jan 21 07:01:10 crc kubenswrapper[4893]: I0121 07:01:10.530179 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="2514419f-4c60-442d-bbc7-0c9b8c765cc4" containerName="registry-server" Jan 21 07:01:10 crc kubenswrapper[4893]: I0121 07:01:10.530186 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="76395561-db8b-4fac-a5fd-14267030252a" containerName="registry-server" Jan 21 07:01:10 crc kubenswrapper[4893]: I0121 07:01:10.530197 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="be44c297-715e-45f6-b165-244c39484f15" containerName="registry-server" Jan 21 07:01:10 crc kubenswrapper[4893]: I0121 07:01:10.530806 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-xjz9j" Jan 21 07:01:10 crc kubenswrapper[4893]: I0121 07:01:10.555283 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-xjz9j"] Jan 21 07:01:10 crc kubenswrapper[4893]: I0121 07:01:10.644635 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4ba22997-066e-4ce0-88ea-2821b9794092-trusted-ca\") pod \"image-registry-66df7c8f76-xjz9j\" (UID: \"4ba22997-066e-4ce0-88ea-2821b9794092\") " pod="openshift-image-registry/image-registry-66df7c8f76-xjz9j" Jan 21 07:01:10 crc kubenswrapper[4893]: I0121 07:01:10.644721 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4ba22997-066e-4ce0-88ea-2821b9794092-installation-pull-secrets\") pod \"image-registry-66df7c8f76-xjz9j\" (UID: \"4ba22997-066e-4ce0-88ea-2821b9794092\") " pod="openshift-image-registry/image-registry-66df7c8f76-xjz9j" Jan 21 07:01:10 crc kubenswrapper[4893]: I0121 07:01:10.644755 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4ba22997-066e-4ce0-88ea-2821b9794092-registry-certificates\") pod \"image-registry-66df7c8f76-xjz9j\" (UID: \"4ba22997-066e-4ce0-88ea-2821b9794092\") " pod="openshift-image-registry/image-registry-66df7c8f76-xjz9j" Jan 21 07:01:10 crc kubenswrapper[4893]: I0121 07:01:10.644877 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-xjz9j\" (UID: \"4ba22997-066e-4ce0-88ea-2821b9794092\") " pod="openshift-image-registry/image-registry-66df7c8f76-xjz9j" Jan 21 07:01:10 crc kubenswrapper[4893]: I0121 07:01:10.645038 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4ba22997-066e-4ce0-88ea-2821b9794092-bound-sa-token\") pod \"image-registry-66df7c8f76-xjz9j\" (UID: \"4ba22997-066e-4ce0-88ea-2821b9794092\") " pod="openshift-image-registry/image-registry-66df7c8f76-xjz9j" Jan 21 07:01:10 crc kubenswrapper[4893]: I0121 07:01:10.645126 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcxjf\" (UniqueName: \"kubernetes.io/projected/4ba22997-066e-4ce0-88ea-2821b9794092-kube-api-access-kcxjf\") pod \"image-registry-66df7c8f76-xjz9j\" (UID: 
\"4ba22997-066e-4ce0-88ea-2821b9794092\") " pod="openshift-image-registry/image-registry-66df7c8f76-xjz9j" Jan 21 07:01:10 crc kubenswrapper[4893]: I0121 07:01:10.645166 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4ba22997-066e-4ce0-88ea-2821b9794092-registry-tls\") pod \"image-registry-66df7c8f76-xjz9j\" (UID: \"4ba22997-066e-4ce0-88ea-2821b9794092\") " pod="openshift-image-registry/image-registry-66df7c8f76-xjz9j" Jan 21 07:01:10 crc kubenswrapper[4893]: I0121 07:01:10.645276 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4ba22997-066e-4ce0-88ea-2821b9794092-ca-trust-extracted\") pod \"image-registry-66df7c8f76-xjz9j\" (UID: \"4ba22997-066e-4ce0-88ea-2821b9794092\") " pod="openshift-image-registry/image-registry-66df7c8f76-xjz9j" Jan 21 07:01:10 crc kubenswrapper[4893]: I0121 07:01:10.683481 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-xjz9j\" (UID: \"4ba22997-066e-4ce0-88ea-2821b9794092\") " pod="openshift-image-registry/image-registry-66df7c8f76-xjz9j" Jan 21 07:01:10 crc kubenswrapper[4893]: I0121 07:01:10.747445 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4ba22997-066e-4ce0-88ea-2821b9794092-trusted-ca\") pod \"image-registry-66df7c8f76-xjz9j\" (UID: \"4ba22997-066e-4ce0-88ea-2821b9794092\") " pod="openshift-image-registry/image-registry-66df7c8f76-xjz9j" Jan 21 07:01:10 crc kubenswrapper[4893]: I0121 07:01:10.747636 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4ba22997-066e-4ce0-88ea-2821b9794092-installation-pull-secrets\") pod \"image-registry-66df7c8f76-xjz9j\" (UID: \"4ba22997-066e-4ce0-88ea-2821b9794092\") " pod="openshift-image-registry/image-registry-66df7c8f76-xjz9j" Jan 21 07:01:10 crc kubenswrapper[4893]: I0121 07:01:10.747718 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4ba22997-066e-4ce0-88ea-2821b9794092-registry-certificates\") pod \"image-registry-66df7c8f76-xjz9j\" (UID: \"4ba22997-066e-4ce0-88ea-2821b9794092\") " pod="openshift-image-registry/image-registry-66df7c8f76-xjz9j" Jan 21 07:01:10 crc kubenswrapper[4893]: I0121 07:01:10.747801 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4ba22997-066e-4ce0-88ea-2821b9794092-bound-sa-token\") pod \"image-registry-66df7c8f76-xjz9j\" (UID: \"4ba22997-066e-4ce0-88ea-2821b9794092\") " pod="openshift-image-registry/image-registry-66df7c8f76-xjz9j" Jan 21 07:01:10 crc kubenswrapper[4893]: I0121 07:01:10.747834 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcxjf\" (UniqueName: \"kubernetes.io/projected/4ba22997-066e-4ce0-88ea-2821b9794092-kube-api-access-kcxjf\") pod \"image-registry-66df7c8f76-xjz9j\" (UID: \"4ba22997-066e-4ce0-88ea-2821b9794092\") " pod="openshift-image-registry/image-registry-66df7c8f76-xjz9j" Jan 21 07:01:10 crc kubenswrapper[4893]: I0121 
07:01:10.747890 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4ba22997-066e-4ce0-88ea-2821b9794092-registry-tls\") pod \"image-registry-66df7c8f76-xjz9j\" (UID: \"4ba22997-066e-4ce0-88ea-2821b9794092\") " pod="openshift-image-registry/image-registry-66df7c8f76-xjz9j" Jan 21 07:01:10 crc kubenswrapper[4893]: I0121 07:01:10.747959 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4ba22997-066e-4ce0-88ea-2821b9794092-ca-trust-extracted\") pod \"image-registry-66df7c8f76-xjz9j\" (UID: \"4ba22997-066e-4ce0-88ea-2821b9794092\") " pod="openshift-image-registry/image-registry-66df7c8f76-xjz9j" Jan 21 07:01:10 crc kubenswrapper[4893]: I0121 07:01:10.748793 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4ba22997-066e-4ce0-88ea-2821b9794092-ca-trust-extracted\") pod \"image-registry-66df7c8f76-xjz9j\" (UID: \"4ba22997-066e-4ce0-88ea-2821b9794092\") " pod="openshift-image-registry/image-registry-66df7c8f76-xjz9j" Jan 21 07:01:10 crc kubenswrapper[4893]: I0121 07:01:10.748967 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4ba22997-066e-4ce0-88ea-2821b9794092-trusted-ca\") pod \"image-registry-66df7c8f76-xjz9j\" (UID: \"4ba22997-066e-4ce0-88ea-2821b9794092\") " pod="openshift-image-registry/image-registry-66df7c8f76-xjz9j" Jan 21 07:01:10 crc kubenswrapper[4893]: I0121 07:01:10.750692 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4ba22997-066e-4ce0-88ea-2821b9794092-registry-certificates\") pod \"image-registry-66df7c8f76-xjz9j\" (UID: \"4ba22997-066e-4ce0-88ea-2821b9794092\") " pod="openshift-image-registry/image-registry-66df7c8f76-xjz9j" Jan 21 07:01:10 crc kubenswrapper[4893]: I0121 07:01:10.753867 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4ba22997-066e-4ce0-88ea-2821b9794092-installation-pull-secrets\") pod \"image-registry-66df7c8f76-xjz9j\" (UID: \"4ba22997-066e-4ce0-88ea-2821b9794092\") " pod="openshift-image-registry/image-registry-66df7c8f76-xjz9j" Jan 21 07:01:10 crc kubenswrapper[4893]: I0121 07:01:10.754359 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4ba22997-066e-4ce0-88ea-2821b9794092-registry-tls\") pod \"image-registry-66df7c8f76-xjz9j\" (UID: \"4ba22997-066e-4ce0-88ea-2821b9794092\") " pod="openshift-image-registry/image-registry-66df7c8f76-xjz9j" Jan 21 07:01:10 crc kubenswrapper[4893]: I0121 07:01:10.766344 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcxjf\" (UniqueName: \"kubernetes.io/projected/4ba22997-066e-4ce0-88ea-2821b9794092-kube-api-access-kcxjf\") pod \"image-registry-66df7c8f76-xjz9j\" (UID: \"4ba22997-066e-4ce0-88ea-2821b9794092\") " pod="openshift-image-registry/image-registry-66df7c8f76-xjz9j" Jan 21 07:01:10 crc kubenswrapper[4893]: I0121 07:01:10.802216 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4ba22997-066e-4ce0-88ea-2821b9794092-bound-sa-token\") pod \"image-registry-66df7c8f76-xjz9j\" (UID: 
\"4ba22997-066e-4ce0-88ea-2821b9794092\") " pod="openshift-image-registry/image-registry-66df7c8f76-xjz9j" Jan 21 07:01:10 crc kubenswrapper[4893]: I0121 07:01:10.851814 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-xjz9j" Jan 21 07:01:11 crc kubenswrapper[4893]: I0121 07:01:11.561067 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-xjz9j"] Jan 21 07:01:12 crc kubenswrapper[4893]: I0121 07:01:12.560845 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-xjz9j" event={"ID":"4ba22997-066e-4ce0-88ea-2821b9794092","Type":"ContainerStarted","Data":"7a09f4f3baa572a3dd54418462f103ecfb541588417fac7fc96510d4b132b5e1"} Jan 21 07:01:12 crc kubenswrapper[4893]: I0121 07:01:12.561210 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-xjz9j" event={"ID":"4ba22997-066e-4ce0-88ea-2821b9794092","Type":"ContainerStarted","Data":"f8b57022c967d462c820b782cd2e5fb6e4e5c139dc838590f24ddd92f8d0d336"} Jan 21 07:01:12 crc kubenswrapper[4893]: I0121 07:01:12.561245 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-xjz9j" Jan 21 07:01:12 crc kubenswrapper[4893]: I0121 07:01:12.586848 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-xjz9j" podStartSLOduration=2.5868047 podStartE2EDuration="2.5868047s" podCreationTimestamp="2026-01-21 07:01:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:01:12.583018098 +0000 UTC m=+413.813364000" watchObservedRunningTime="2026-01-21 07:01:12.5868047 +0000 UTC m=+413.817150602" Jan 21 07:01:14 crc kubenswrapper[4893]: I0121 07:01:14.871555 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dbb4dd9bc-btzt5"] Jan 21 07:01:14 crc kubenswrapper[4893]: I0121 07:01:14.872248 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-dbb4dd9bc-btzt5" podUID="ae416462-7fdb-40bc-8fda-91eb26e2d538" containerName="route-controller-manager" containerID="cri-o://10babac5dc8e7755e6a3d15f373d1265a18ed012b601abf480e06dbe672b7e30" gracePeriod=30 Jan 21 07:01:15 crc kubenswrapper[4893]: I0121 07:01:15.348925 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dbb4dd9bc-btzt5" Jan 21 07:01:15 crc kubenswrapper[4893]: I0121 07:01:15.459815 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae416462-7fdb-40bc-8fda-91eb26e2d538-serving-cert\") pod \"ae416462-7fdb-40bc-8fda-91eb26e2d538\" (UID: \"ae416462-7fdb-40bc-8fda-91eb26e2d538\") " Jan 21 07:01:15 crc kubenswrapper[4893]: I0121 07:01:15.459924 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ae416462-7fdb-40bc-8fda-91eb26e2d538-client-ca\") pod \"ae416462-7fdb-40bc-8fda-91eb26e2d538\" (UID: \"ae416462-7fdb-40bc-8fda-91eb26e2d538\") " Jan 21 07:01:15 crc kubenswrapper[4893]: I0121 07:01:15.459956 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hb2pg\" (UniqueName: \"kubernetes.io/projected/ae416462-7fdb-40bc-8fda-91eb26e2d538-kube-api-access-hb2pg\") pod \"ae416462-7fdb-40bc-8fda-91eb26e2d538\" (UID: \"ae416462-7fdb-40bc-8fda-91eb26e2d538\") " Jan 21 07:01:15 crc kubenswrapper[4893]: I0121 07:01:15.460029 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae416462-7fdb-40bc-8fda-91eb26e2d538-config\") pod \"ae416462-7fdb-40bc-8fda-91eb26e2d538\" (UID: \"ae416462-7fdb-40bc-8fda-91eb26e2d538\") " Jan 21 07:01:15 crc kubenswrapper[4893]: I0121 07:01:15.461212 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae416462-7fdb-40bc-8fda-91eb26e2d538-config" (OuterVolumeSpecName: "config") pod "ae416462-7fdb-40bc-8fda-91eb26e2d538" (UID: "ae416462-7fdb-40bc-8fda-91eb26e2d538"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:01:15 crc kubenswrapper[4893]: I0121 07:01:15.461284 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae416462-7fdb-40bc-8fda-91eb26e2d538-client-ca" (OuterVolumeSpecName: "client-ca") pod "ae416462-7fdb-40bc-8fda-91eb26e2d538" (UID: "ae416462-7fdb-40bc-8fda-91eb26e2d538"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:01:15 crc kubenswrapper[4893]: I0121 07:01:15.467814 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae416462-7fdb-40bc-8fda-91eb26e2d538-kube-api-access-hb2pg" (OuterVolumeSpecName: "kube-api-access-hb2pg") pod "ae416462-7fdb-40bc-8fda-91eb26e2d538" (UID: "ae416462-7fdb-40bc-8fda-91eb26e2d538"). InnerVolumeSpecName "kube-api-access-hb2pg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:01:15 crc kubenswrapper[4893]: I0121 07:01:15.467946 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae416462-7fdb-40bc-8fda-91eb26e2d538-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ae416462-7fdb-40bc-8fda-91eb26e2d538" (UID: "ae416462-7fdb-40bc-8fda-91eb26e2d538"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:01:15 crc kubenswrapper[4893]: I0121 07:01:15.562728 4893 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae416462-7fdb-40bc-8fda-91eb26e2d538-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 07:01:15 crc kubenswrapper[4893]: I0121 07:01:15.562866 4893 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ae416462-7fdb-40bc-8fda-91eb26e2d538-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 07:01:15 crc kubenswrapper[4893]: I0121 07:01:15.562880 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hb2pg\" (UniqueName: \"kubernetes.io/projected/ae416462-7fdb-40bc-8fda-91eb26e2d538-kube-api-access-hb2pg\") on node \"crc\" DevicePath \"\"" Jan 21 07:01:15 crc kubenswrapper[4893]: I0121 07:01:15.562895 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae416462-7fdb-40bc-8fda-91eb26e2d538-config\") on node \"crc\" DevicePath \"\"" Jan 21 07:01:15 crc kubenswrapper[4893]: I0121 07:01:15.582994 4893 generic.go:334] "Generic (PLEG): container finished" podID="ae416462-7fdb-40bc-8fda-91eb26e2d538" containerID="10babac5dc8e7755e6a3d15f373d1265a18ed012b601abf480e06dbe672b7e30" exitCode=0 Jan 21 07:01:15 crc kubenswrapper[4893]: I0121 07:01:15.583090 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dbb4dd9bc-btzt5" event={"ID":"ae416462-7fdb-40bc-8fda-91eb26e2d538","Type":"ContainerDied","Data":"10babac5dc8e7755e6a3d15f373d1265a18ed012b601abf480e06dbe672b7e30"} Jan 21 07:01:15 crc kubenswrapper[4893]: I0121 07:01:15.583127 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dbb4dd9bc-btzt5" event={"ID":"ae416462-7fdb-40bc-8fda-91eb26e2d538","Type":"ContainerDied","Data":"f0dd2af9c01e1d0b1d981fd26cd5a9398bf62d10ec516fca78619d24004252d6"} Jan 21 07:01:15 crc kubenswrapper[4893]: I0121 07:01:15.583177 4893 scope.go:117] "RemoveContainer" containerID="10babac5dc8e7755e6a3d15f373d1265a18ed012b601abf480e06dbe672b7e30" Jan 21 07:01:15 crc kubenswrapper[4893]: I0121 07:01:15.583354 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dbb4dd9bc-btzt5" Jan 21 07:01:15 crc kubenswrapper[4893]: I0121 07:01:15.608756 4893 scope.go:117] "RemoveContainer" containerID="10babac5dc8e7755e6a3d15f373d1265a18ed012b601abf480e06dbe672b7e30" Jan 21 07:01:15 crc kubenswrapper[4893]: E0121 07:01:15.609313 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10babac5dc8e7755e6a3d15f373d1265a18ed012b601abf480e06dbe672b7e30\": container with ID starting with 10babac5dc8e7755e6a3d15f373d1265a18ed012b601abf480e06dbe672b7e30 not found: ID does not exist" containerID="10babac5dc8e7755e6a3d15f373d1265a18ed012b601abf480e06dbe672b7e30" Jan 21 07:01:15 crc kubenswrapper[4893]: I0121 07:01:15.609370 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10babac5dc8e7755e6a3d15f373d1265a18ed012b601abf480e06dbe672b7e30"} err="failed to get container status \"10babac5dc8e7755e6a3d15f373d1265a18ed012b601abf480e06dbe672b7e30\": rpc error: code = NotFound desc = could not find container \"10babac5dc8e7755e6a3d15f373d1265a18ed012b601abf480e06dbe672b7e30\": container with ID starting with 10babac5dc8e7755e6a3d15f373d1265a18ed012b601abf480e06dbe672b7e30 not found: ID does not exist" Jan 21 07:01:15 crc kubenswrapper[4893]: I0121 07:01:15.621189 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dbb4dd9bc-btzt5"] Jan 21 07:01:15 crc kubenswrapper[4893]: I0121 07:01:15.628476 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dbb4dd9bc-btzt5"] Jan 21 07:01:16 crc kubenswrapper[4893]: I0121 07:01:16.211048 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b8688684c-rfstq"] Jan 21 07:01:16 crc kubenswrapper[4893]: E0121 07:01:16.211887 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae416462-7fdb-40bc-8fda-91eb26e2d538" containerName="route-controller-manager" Jan 21 07:01:16 crc kubenswrapper[4893]: I0121 07:01:16.211909 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae416462-7fdb-40bc-8fda-91eb26e2d538" containerName="route-controller-manager" Jan 21 07:01:16 crc kubenswrapper[4893]: I0121 07:01:16.212078 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae416462-7fdb-40bc-8fda-91eb26e2d538" containerName="route-controller-manager" Jan 21 07:01:16 crc kubenswrapper[4893]: I0121 07:01:16.212885 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b8688684c-rfstq" Jan 21 07:01:16 crc kubenswrapper[4893]: I0121 07:01:16.217511 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 07:01:16 crc kubenswrapper[4893]: I0121 07:01:16.217612 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 07:01:16 crc kubenswrapper[4893]: I0121 07:01:16.217632 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 07:01:16 crc kubenswrapper[4893]: I0121 07:01:16.217729 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 07:01:16 crc kubenswrapper[4893]: I0121 07:01:16.217624 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 07:01:16 crc kubenswrapper[4893]: I0121 07:01:16.217997 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 07:01:16 crc kubenswrapper[4893]: I0121 07:01:16.227734 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b8688684c-rfstq"] Jan 21 07:01:16 crc kubenswrapper[4893]: I0121 07:01:16.276700 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68eae2bd-3a7a-49aa-ae8a-bdfe6b284f0d-config\") pod \"route-controller-manager-5b8688684c-rfstq\" (UID: \"68eae2bd-3a7a-49aa-ae8a-bdfe6b284f0d\") " pod="openshift-route-controller-manager/route-controller-manager-5b8688684c-rfstq" Jan 21 07:01:16 crc kubenswrapper[4893]: I0121 07:01:16.276805 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2w6f\" (UniqueName: \"kubernetes.io/projected/68eae2bd-3a7a-49aa-ae8a-bdfe6b284f0d-kube-api-access-z2w6f\") pod \"route-controller-manager-5b8688684c-rfstq\" (UID: \"68eae2bd-3a7a-49aa-ae8a-bdfe6b284f0d\") " pod="openshift-route-controller-manager/route-controller-manager-5b8688684c-rfstq" Jan 21 07:01:16 crc kubenswrapper[4893]: I0121 07:01:16.276911 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/68eae2bd-3a7a-49aa-ae8a-bdfe6b284f0d-client-ca\") pod \"route-controller-manager-5b8688684c-rfstq\" (UID: \"68eae2bd-3a7a-49aa-ae8a-bdfe6b284f0d\") " pod="openshift-route-controller-manager/route-controller-manager-5b8688684c-rfstq" Jan 21 07:01:16 crc kubenswrapper[4893]: I0121 07:01:16.276949 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68eae2bd-3a7a-49aa-ae8a-bdfe6b284f0d-serving-cert\") pod \"route-controller-manager-5b8688684c-rfstq\" (UID: \"68eae2bd-3a7a-49aa-ae8a-bdfe6b284f0d\") " pod="openshift-route-controller-manager/route-controller-manager-5b8688684c-rfstq" Jan 21 07:01:16 crc kubenswrapper[4893]: I0121 07:01:16.379314 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/68eae2bd-3a7a-49aa-ae8a-bdfe6b284f0d-client-ca\") pod 
\"route-controller-manager-5b8688684c-rfstq\" (UID: \"68eae2bd-3a7a-49aa-ae8a-bdfe6b284f0d\") " pod="openshift-route-controller-manager/route-controller-manager-5b8688684c-rfstq" Jan 21 07:01:16 crc kubenswrapper[4893]: I0121 07:01:16.379429 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68eae2bd-3a7a-49aa-ae8a-bdfe6b284f0d-serving-cert\") pod \"route-controller-manager-5b8688684c-rfstq\" (UID: \"68eae2bd-3a7a-49aa-ae8a-bdfe6b284f0d\") " pod="openshift-route-controller-manager/route-controller-manager-5b8688684c-rfstq" Jan 21 07:01:16 crc kubenswrapper[4893]: I0121 07:01:16.379464 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68eae2bd-3a7a-49aa-ae8a-bdfe6b284f0d-config\") pod \"route-controller-manager-5b8688684c-rfstq\" (UID: \"68eae2bd-3a7a-49aa-ae8a-bdfe6b284f0d\") " pod="openshift-route-controller-manager/route-controller-manager-5b8688684c-rfstq" Jan 21 07:01:16 crc kubenswrapper[4893]: I0121 07:01:16.379488 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2w6f\" (UniqueName: \"kubernetes.io/projected/68eae2bd-3a7a-49aa-ae8a-bdfe6b284f0d-kube-api-access-z2w6f\") pod \"route-controller-manager-5b8688684c-rfstq\" (UID: \"68eae2bd-3a7a-49aa-ae8a-bdfe6b284f0d\") " pod="openshift-route-controller-manager/route-controller-manager-5b8688684c-rfstq" Jan 21 07:01:16 crc kubenswrapper[4893]: I0121 07:01:16.381707 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68eae2bd-3a7a-49aa-ae8a-bdfe6b284f0d-config\") pod \"route-controller-manager-5b8688684c-rfstq\" (UID: \"68eae2bd-3a7a-49aa-ae8a-bdfe6b284f0d\") " pod="openshift-route-controller-manager/route-controller-manager-5b8688684c-rfstq" Jan 21 07:01:16 crc kubenswrapper[4893]: I0121 07:01:16.382506 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/68eae2bd-3a7a-49aa-ae8a-bdfe6b284f0d-client-ca\") pod \"route-controller-manager-5b8688684c-rfstq\" (UID: \"68eae2bd-3a7a-49aa-ae8a-bdfe6b284f0d\") " pod="openshift-route-controller-manager/route-controller-manager-5b8688684c-rfstq" Jan 21 07:01:16 crc kubenswrapper[4893]: I0121 07:01:16.386192 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68eae2bd-3a7a-49aa-ae8a-bdfe6b284f0d-serving-cert\") pod \"route-controller-manager-5b8688684c-rfstq\" (UID: \"68eae2bd-3a7a-49aa-ae8a-bdfe6b284f0d\") " pod="openshift-route-controller-manager/route-controller-manager-5b8688684c-rfstq" Jan 21 07:01:16 crc kubenswrapper[4893]: I0121 07:01:16.399380 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2w6f\" (UniqueName: \"kubernetes.io/projected/68eae2bd-3a7a-49aa-ae8a-bdfe6b284f0d-kube-api-access-z2w6f\") pod \"route-controller-manager-5b8688684c-rfstq\" (UID: \"68eae2bd-3a7a-49aa-ae8a-bdfe6b284f0d\") " pod="openshift-route-controller-manager/route-controller-manager-5b8688684c-rfstq" Jan 21 07:01:16 crc kubenswrapper[4893]: I0121 07:01:16.530524 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b8688684c-rfstq" Jan 21 07:01:17 crc kubenswrapper[4893]: I0121 07:01:17.006021 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b8688684c-rfstq"] Jan 21 07:01:17 crc kubenswrapper[4893]: I0121 07:01:17.647503 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae416462-7fdb-40bc-8fda-91eb26e2d538" path="/var/lib/kubelet/pods/ae416462-7fdb-40bc-8fda-91eb26e2d538/volumes" Jan 21 07:01:17 crc kubenswrapper[4893]: I0121 07:01:17.656335 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b8688684c-rfstq" event={"ID":"68eae2bd-3a7a-49aa-ae8a-bdfe6b284f0d","Type":"ContainerStarted","Data":"efdf206a6efdb093ab5d8aef7813bf2da06bd3f1e334f86ce946fe2303ebcc98"} Jan 21 07:01:17 crc kubenswrapper[4893]: I0121 07:01:17.656389 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b8688684c-rfstq" event={"ID":"68eae2bd-3a7a-49aa-ae8a-bdfe6b284f0d","Type":"ContainerStarted","Data":"62ba3eb77865210300e38f32ad67828e5d502972c290c61a5289c12719c51152"} Jan 21 07:01:17 crc kubenswrapper[4893]: I0121 07:01:17.656606 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5b8688684c-rfstq" Jan 21 07:01:17 crc kubenswrapper[4893]: I0121 07:01:17.672134 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5b8688684c-rfstq" Jan 21 07:01:17 crc kubenswrapper[4893]: I0121 07:01:17.689510 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5b8688684c-rfstq" podStartSLOduration=3.68947534 podStartE2EDuration="3.68947534s" podCreationTimestamp="2026-01-21 07:01:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:01:17.687050705 +0000 UTC m=+418.917396607" watchObservedRunningTime="2026-01-21 07:01:17.68947534 +0000 UTC m=+418.919821242" Jan 21 07:01:24 crc kubenswrapper[4893]: I0121 07:01:24.899159 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nq4z8"] Jan 21 07:01:24 crc kubenswrapper[4893]: I0121 07:01:24.903175 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nq4z8" podUID="582d8449-096d-4bfa-9dcc-9ef0b8661d50" containerName="registry-server" containerID="cri-o://390ee49a13a57a08e5faa565061ce3fcd3d8165040988d57c7f6067fb85fec4e" gracePeriod=30 Jan 21 07:01:24 crc kubenswrapper[4893]: I0121 07:01:24.912155 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kjxh2"] Jan 21 07:01:24 crc kubenswrapper[4893]: I0121 07:01:24.913195 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kjxh2" podUID="f92d48d9-4ed9-42bb-b811-a8f43bbac2cd" containerName="registry-server" containerID="cri-o://cd461098d345efc65577c6fc1041721da0a632d6c041e08785c685e173f1e82d" gracePeriod=30 Jan 21 07:01:24 crc kubenswrapper[4893]: I0121 07:01:24.917126 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/marketplace-operator-79b997595-zpm9z"] Jan 21 07:01:24 crc kubenswrapper[4893]: I0121 07:01:24.921240 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-zpm9z" podUID="949c0965-b10c-4608-b2d0-effa8e19dff1" containerName="marketplace-operator" containerID="cri-o://ab18e86fd24b9a6c1b24ee0a9ad6ffbfb069977d1a179ecb912ea935038dee56" gracePeriod=30 Jan 21 07:01:24 crc kubenswrapper[4893]: I0121 07:01:24.922993 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ztll7"] Jan 21 07:01:24 crc kubenswrapper[4893]: I0121 07:01:24.923282 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ztll7" podUID="78a7ed86-0417-446d-aeaa-b71f6beb71ec" containerName="registry-server" containerID="cri-o://8ad0c5d8c32aed0d8b378a0d58420a48f56232bb9c8ef2eb53976c5d3f921874" gracePeriod=30 Jan 21 07:01:24 crc kubenswrapper[4893]: I0121 07:01:24.930471 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gv7xc"] Jan 21 07:01:24 crc kubenswrapper[4893]: I0121 07:01:24.930812 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gv7xc" podUID="15ac06c3-345b-4ced-8c19-2edf0c831b70" containerName="registry-server" containerID="cri-o://6e035160d709d1332922dd5a2145a3757c9276770bceb2f956394c75ed9f90bc" gracePeriod=30 Jan 21 07:01:24 crc kubenswrapper[4893]: I0121 07:01:24.953585 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5rhqg"] Jan 21 07:01:24 crc kubenswrapper[4893]: I0121 07:01:24.954428 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5rhqg" Jan 21 07:01:24 crc kubenswrapper[4893]: I0121 07:01:24.974623 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5rhqg"] Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.058868 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l6bd\" (UniqueName: \"kubernetes.io/projected/2138e3c3-e583-4a97-84d9-084c1eb72e2a-kube-api-access-7l6bd\") pod \"marketplace-operator-79b997595-5rhqg\" (UID: \"2138e3c3-e583-4a97-84d9-084c1eb72e2a\") " pod="openshift-marketplace/marketplace-operator-79b997595-5rhqg" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.059008 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2138e3c3-e583-4a97-84d9-084c1eb72e2a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5rhqg\" (UID: \"2138e3c3-e583-4a97-84d9-084c1eb72e2a\") " pod="openshift-marketplace/marketplace-operator-79b997595-5rhqg" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.059038 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2138e3c3-e583-4a97-84d9-084c1eb72e2a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5rhqg\" (UID: \"2138e3c3-e583-4a97-84d9-084c1eb72e2a\") " pod="openshift-marketplace/marketplace-operator-79b997595-5rhqg" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.162064 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2138e3c3-e583-4a97-84d9-084c1eb72e2a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5rhqg\" (UID: \"2138e3c3-e583-4a97-84d9-084c1eb72e2a\") " pod="openshift-marketplace/marketplace-operator-79b997595-5rhqg" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.162173 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2138e3c3-e583-4a97-84d9-084c1eb72e2a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5rhqg\" (UID: \"2138e3c3-e583-4a97-84d9-084c1eb72e2a\") " pod="openshift-marketplace/marketplace-operator-79b997595-5rhqg" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.162226 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7l6bd\" (UniqueName: \"kubernetes.io/projected/2138e3c3-e583-4a97-84d9-084c1eb72e2a-kube-api-access-7l6bd\") pod \"marketplace-operator-79b997595-5rhqg\" (UID: \"2138e3c3-e583-4a97-84d9-084c1eb72e2a\") " pod="openshift-marketplace/marketplace-operator-79b997595-5rhqg" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.164287 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2138e3c3-e583-4a97-84d9-084c1eb72e2a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5rhqg\" (UID: \"2138e3c3-e583-4a97-84d9-084c1eb72e2a\") " pod="openshift-marketplace/marketplace-operator-79b997595-5rhqg" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.171974 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/2138e3c3-e583-4a97-84d9-084c1eb72e2a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5rhqg\" (UID: \"2138e3c3-e583-4a97-84d9-084c1eb72e2a\") " pod="openshift-marketplace/marketplace-operator-79b997595-5rhqg" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.183789 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7l6bd\" (UniqueName: \"kubernetes.io/projected/2138e3c3-e583-4a97-84d9-084c1eb72e2a-kube-api-access-7l6bd\") pod \"marketplace-operator-79b997595-5rhqg\" (UID: \"2138e3c3-e583-4a97-84d9-084c1eb72e2a\") " pod="openshift-marketplace/marketplace-operator-79b997595-5rhqg" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.288804 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5rhqg" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.465952 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kjxh2" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.635118 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ztll7" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.641421 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zpm9z" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.648536 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gv7xc" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.666990 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nq4z8" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.672562 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmqkx\" (UniqueName: \"kubernetes.io/projected/f92d48d9-4ed9-42bb-b811-a8f43bbac2cd-kube-api-access-cmqkx\") pod \"f92d48d9-4ed9-42bb-b811-a8f43bbac2cd\" (UID: \"f92d48d9-4ed9-42bb-b811-a8f43bbac2cd\") " Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.672689 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f92d48d9-4ed9-42bb-b811-a8f43bbac2cd-catalog-content\") pod \"f92d48d9-4ed9-42bb-b811-a8f43bbac2cd\" (UID: \"f92d48d9-4ed9-42bb-b811-a8f43bbac2cd\") " Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.672738 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f92d48d9-4ed9-42bb-b811-a8f43bbac2cd-utilities\") pod \"f92d48d9-4ed9-42bb-b811-a8f43bbac2cd\" (UID: \"f92d48d9-4ed9-42bb-b811-a8f43bbac2cd\") " Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.674217 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f92d48d9-4ed9-42bb-b811-a8f43bbac2cd-utilities" (OuterVolumeSpecName: "utilities") pod "f92d48d9-4ed9-42bb-b811-a8f43bbac2cd" (UID: "f92d48d9-4ed9-42bb-b811-a8f43bbac2cd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.678450 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f92d48d9-4ed9-42bb-b811-a8f43bbac2cd-kube-api-access-cmqkx" (OuterVolumeSpecName: "kube-api-access-cmqkx") pod "f92d48d9-4ed9-42bb-b811-a8f43bbac2cd" (UID: "f92d48d9-4ed9-42bb-b811-a8f43bbac2cd"). InnerVolumeSpecName "kube-api-access-cmqkx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.771908 4893 generic.go:334] "Generic (PLEG): container finished" podID="582d8449-096d-4bfa-9dcc-9ef0b8661d50" containerID="390ee49a13a57a08e5faa565061ce3fcd3d8165040988d57c7f6067fb85fec4e" exitCode=0 Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.771981 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nq4z8" event={"ID":"582d8449-096d-4bfa-9dcc-9ef0b8661d50","Type":"ContainerDied","Data":"390ee49a13a57a08e5faa565061ce3fcd3d8165040988d57c7f6067fb85fec4e"} Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.771999 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nq4z8" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.772023 4893 scope.go:117] "RemoveContainer" containerID="390ee49a13a57a08e5faa565061ce3fcd3d8165040988d57c7f6067fb85fec4e" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.772009 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nq4z8" event={"ID":"582d8449-096d-4bfa-9dcc-9ef0b8661d50","Type":"ContainerDied","Data":"6eb4bb67ca6cefb84b37e59e556fc8149ec426a398efa3d5b3a2b590083169ec"} Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.773349 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/582d8449-096d-4bfa-9dcc-9ef0b8661d50-catalog-content\") pod \"582d8449-096d-4bfa-9dcc-9ef0b8661d50\" (UID: \"582d8449-096d-4bfa-9dcc-9ef0b8661d50\") " Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.773450 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78a7ed86-0417-446d-aeaa-b71f6beb71ec-utilities\") pod \"78a7ed86-0417-446d-aeaa-b71f6beb71ec\" (UID: \"78a7ed86-0417-446d-aeaa-b71f6beb71ec\") " Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.773553 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/949c0965-b10c-4608-b2d0-effa8e19dff1-marketplace-operator-metrics\") pod \"949c0965-b10c-4608-b2d0-effa8e19dff1\" (UID: \"949c0965-b10c-4608-b2d0-effa8e19dff1\") " Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.773749 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/582d8449-096d-4bfa-9dcc-9ef0b8661d50-utilities\") pod \"582d8449-096d-4bfa-9dcc-9ef0b8661d50\" (UID: \"582d8449-096d-4bfa-9dcc-9ef0b8661d50\") " Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.773850 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzz84\" (UniqueName: \"kubernetes.io/projected/582d8449-096d-4bfa-9dcc-9ef0b8661d50-kube-api-access-xzz84\") pod \"582d8449-096d-4bfa-9dcc-9ef0b8661d50\" (UID: 
\"582d8449-096d-4bfa-9dcc-9ef0b8661d50\") " Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.773923 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15ac06c3-345b-4ced-8c19-2edf0c831b70-utilities\") pod \"15ac06c3-345b-4ced-8c19-2edf0c831b70\" (UID: \"15ac06c3-345b-4ced-8c19-2edf0c831b70\") " Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.773998 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxfnk\" (UniqueName: \"kubernetes.io/projected/15ac06c3-345b-4ced-8c19-2edf0c831b70-kube-api-access-wxfnk\") pod \"15ac06c3-345b-4ced-8c19-2edf0c831b70\" (UID: \"15ac06c3-345b-4ced-8c19-2edf0c831b70\") " Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.774079 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wswnc\" (UniqueName: \"kubernetes.io/projected/949c0965-b10c-4608-b2d0-effa8e19dff1-kube-api-access-wswnc\") pod \"949c0965-b10c-4608-b2d0-effa8e19dff1\" (UID: \"949c0965-b10c-4608-b2d0-effa8e19dff1\") " Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.774162 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wc8m2\" (UniqueName: \"kubernetes.io/projected/78a7ed86-0417-446d-aeaa-b71f6beb71ec-kube-api-access-wc8m2\") pod \"78a7ed86-0417-446d-aeaa-b71f6beb71ec\" (UID: \"78a7ed86-0417-446d-aeaa-b71f6beb71ec\") " Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.774278 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15ac06c3-345b-4ced-8c19-2edf0c831b70-catalog-content\") pod \"15ac06c3-345b-4ced-8c19-2edf0c831b70\" (UID: \"15ac06c3-345b-4ced-8c19-2edf0c831b70\") " Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.774437 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/949c0965-b10c-4608-b2d0-effa8e19dff1-marketplace-trusted-ca\") pod \"949c0965-b10c-4608-b2d0-effa8e19dff1\" (UID: \"949c0965-b10c-4608-b2d0-effa8e19dff1\") " Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.774540 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78a7ed86-0417-446d-aeaa-b71f6beb71ec-catalog-content\") pod \"78a7ed86-0417-446d-aeaa-b71f6beb71ec\" (UID: \"78a7ed86-0417-446d-aeaa-b71f6beb71ec\") " Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.774953 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmqkx\" (UniqueName: \"kubernetes.io/projected/f92d48d9-4ed9-42bb-b811-a8f43bbac2cd-kube-api-access-cmqkx\") on node \"crc\" DevicePath \"\"" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.775065 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f92d48d9-4ed9-42bb-b811-a8f43bbac2cd-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.774313 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78a7ed86-0417-446d-aeaa-b71f6beb71ec-utilities" (OuterVolumeSpecName: "utilities") pod "78a7ed86-0417-446d-aeaa-b71f6beb71ec" (UID: "78a7ed86-0417-446d-aeaa-b71f6beb71ec"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.775054 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/582d8449-096d-4bfa-9dcc-9ef0b8661d50-utilities" (OuterVolumeSpecName: "utilities") pod "582d8449-096d-4bfa-9dcc-9ef0b8661d50" (UID: "582d8449-096d-4bfa-9dcc-9ef0b8661d50"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.775264 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15ac06c3-345b-4ced-8c19-2edf0c831b70-utilities" (OuterVolumeSpecName: "utilities") pod "15ac06c3-345b-4ced-8c19-2edf0c831b70" (UID: "15ac06c3-345b-4ced-8c19-2edf0c831b70"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.776130 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/949c0965-b10c-4608-b2d0-effa8e19dff1-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "949c0965-b10c-4608-b2d0-effa8e19dff1" (UID: "949c0965-b10c-4608-b2d0-effa8e19dff1"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.778054 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/949c0965-b10c-4608-b2d0-effa8e19dff1-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "949c0965-b10c-4608-b2d0-effa8e19dff1" (UID: "949c0965-b10c-4608-b2d0-effa8e19dff1"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.781427 4893 generic.go:334] "Generic (PLEG): container finished" podID="78a7ed86-0417-446d-aeaa-b71f6beb71ec" containerID="8ad0c5d8c32aed0d8b378a0d58420a48f56232bb9c8ef2eb53976c5d3f921874" exitCode=0 Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.781529 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ztll7" event={"ID":"78a7ed86-0417-446d-aeaa-b71f6beb71ec","Type":"ContainerDied","Data":"8ad0c5d8c32aed0d8b378a0d58420a48f56232bb9c8ef2eb53976c5d3f921874"} Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.781569 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ztll7" event={"ID":"78a7ed86-0417-446d-aeaa-b71f6beb71ec","Type":"ContainerDied","Data":"fcc37f8f0bb1485fdcae40afb01d0bc4bf2825933d35a803d55537499c3435e4"} Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.781658 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ztll7" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.783021 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/949c0965-b10c-4608-b2d0-effa8e19dff1-kube-api-access-wswnc" (OuterVolumeSpecName: "kube-api-access-wswnc") pod "949c0965-b10c-4608-b2d0-effa8e19dff1" (UID: "949c0965-b10c-4608-b2d0-effa8e19dff1"). InnerVolumeSpecName "kube-api-access-wswnc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.793109 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78a7ed86-0417-446d-aeaa-b71f6beb71ec-kube-api-access-wc8m2" (OuterVolumeSpecName: "kube-api-access-wc8m2") pod "78a7ed86-0417-446d-aeaa-b71f6beb71ec" (UID: "78a7ed86-0417-446d-aeaa-b71f6beb71ec"). InnerVolumeSpecName "kube-api-access-wc8m2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.794478 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15ac06c3-345b-4ced-8c19-2edf0c831b70-kube-api-access-wxfnk" (OuterVolumeSpecName: "kube-api-access-wxfnk") pod "15ac06c3-345b-4ced-8c19-2edf0c831b70" (UID: "15ac06c3-345b-4ced-8c19-2edf0c831b70"). InnerVolumeSpecName "kube-api-access-wxfnk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.805068 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/582d8449-096d-4bfa-9dcc-9ef0b8661d50-kube-api-access-xzz84" (OuterVolumeSpecName: "kube-api-access-xzz84") pod "582d8449-096d-4bfa-9dcc-9ef0b8661d50" (UID: "582d8449-096d-4bfa-9dcc-9ef0b8661d50"). InnerVolumeSpecName "kube-api-access-xzz84". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.807101 4893 scope.go:117] "RemoveContainer" containerID="bea99b73d232477af3de86c62a3fc3d6fadfccb19f32e62d7f9c8070a486ea63" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.811692 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78a7ed86-0417-446d-aeaa-b71f6beb71ec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "78a7ed86-0417-446d-aeaa-b71f6beb71ec" (UID: "78a7ed86-0417-446d-aeaa-b71f6beb71ec"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.819130 4893 generic.go:334] "Generic (PLEG): container finished" podID="15ac06c3-345b-4ced-8c19-2edf0c831b70" containerID="6e035160d709d1332922dd5a2145a3757c9276770bceb2f956394c75ed9f90bc" exitCode=0 Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.819227 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gv7xc" event={"ID":"15ac06c3-345b-4ced-8c19-2edf0c831b70","Type":"ContainerDied","Data":"6e035160d709d1332922dd5a2145a3757c9276770bceb2f956394c75ed9f90bc"} Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.819299 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gv7xc" event={"ID":"15ac06c3-345b-4ced-8c19-2edf0c831b70","Type":"ContainerDied","Data":"c6befa06fdb34667b32ed65b175fb525e606f8df848fd0d37729f2545ed53686"} Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.819378 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gv7xc" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.819499 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f92d48d9-4ed9-42bb-b811-a8f43bbac2cd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f92d48d9-4ed9-42bb-b811-a8f43bbac2cd" (UID: "f92d48d9-4ed9-42bb-b811-a8f43bbac2cd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.828155 4893 generic.go:334] "Generic (PLEG): container finished" podID="f92d48d9-4ed9-42bb-b811-a8f43bbac2cd" containerID="cd461098d345efc65577c6fc1041721da0a632d6c041e08785c685e173f1e82d" exitCode=0 Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.828365 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kjxh2" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.828486 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kjxh2" event={"ID":"f92d48d9-4ed9-42bb-b811-a8f43bbac2cd","Type":"ContainerDied","Data":"cd461098d345efc65577c6fc1041721da0a632d6c041e08785c685e173f1e82d"} Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.828576 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kjxh2" event={"ID":"f92d48d9-4ed9-42bb-b811-a8f43bbac2cd","Type":"ContainerDied","Data":"977d7d579a1ff452b5a84436b7681f553b481ae57c16b7a9628faeb86c883a09"} Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.831332 4893 generic.go:334] "Generic (PLEG): container finished" podID="949c0965-b10c-4608-b2d0-effa8e19dff1" containerID="ab18e86fd24b9a6c1b24ee0a9ad6ffbfb069977d1a179ecb912ea935038dee56" exitCode=0 Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.831473 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zpm9z" event={"ID":"949c0965-b10c-4608-b2d0-effa8e19dff1","Type":"ContainerDied","Data":"ab18e86fd24b9a6c1b24ee0a9ad6ffbfb069977d1a179ecb912ea935038dee56"} Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.831527 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zpm9z" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.831557 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zpm9z" event={"ID":"949c0965-b10c-4608-b2d0-effa8e19dff1","Type":"ContainerDied","Data":"6218074b9ae03f354de4cfcc6749275a4677d2f3ef928bd1e2056d67485f327e"} Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.840205 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/582d8449-096d-4bfa-9dcc-9ef0b8661d50-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "582d8449-096d-4bfa-9dcc-9ef0b8661d50" (UID: "582d8449-096d-4bfa-9dcc-9ef0b8661d50"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.841043 4893 scope.go:117] "RemoveContainer" containerID="cf46f4de868de68a2d8b155a8b106089df858786a3965c8c0448d954a7fb352b" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.860655 4893 scope.go:117] "RemoveContainer" containerID="390ee49a13a57a08e5faa565061ce3fcd3d8165040988d57c7f6067fb85fec4e" Jan 21 07:01:25 crc kubenswrapper[4893]: E0121 07:01:25.865030 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"390ee49a13a57a08e5faa565061ce3fcd3d8165040988d57c7f6067fb85fec4e\": container with ID starting with 390ee49a13a57a08e5faa565061ce3fcd3d8165040988d57c7f6067fb85fec4e not found: ID does not exist" containerID="390ee49a13a57a08e5faa565061ce3fcd3d8165040988d57c7f6067fb85fec4e" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.865131 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"390ee49a13a57a08e5faa565061ce3fcd3d8165040988d57c7f6067fb85fec4e"} err="failed to get container status \"390ee49a13a57a08e5faa565061ce3fcd3d8165040988d57c7f6067fb85fec4e\": rpc error: code = NotFound desc = could not find container \"390ee49a13a57a08e5faa565061ce3fcd3d8165040988d57c7f6067fb85fec4e\": container with ID starting with 390ee49a13a57a08e5faa565061ce3fcd3d8165040988d57c7f6067fb85fec4e not found: ID does not exist" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.865190 4893 scope.go:117] "RemoveContainer" containerID="bea99b73d232477af3de86c62a3fc3d6fadfccb19f32e62d7f9c8070a486ea63" Jan 21 07:01:25 crc kubenswrapper[4893]: E0121 07:01:25.867337 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bea99b73d232477af3de86c62a3fc3d6fadfccb19f32e62d7f9c8070a486ea63\": container with ID starting with bea99b73d232477af3de86c62a3fc3d6fadfccb19f32e62d7f9c8070a486ea63 not found: ID does not exist" containerID="bea99b73d232477af3de86c62a3fc3d6fadfccb19f32e62d7f9c8070a486ea63" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.867972 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bea99b73d232477af3de86c62a3fc3d6fadfccb19f32e62d7f9c8070a486ea63"} err="failed to get container status \"bea99b73d232477af3de86c62a3fc3d6fadfccb19f32e62d7f9c8070a486ea63\": rpc error: code = NotFound desc = could not find container \"bea99b73d232477af3de86c62a3fc3d6fadfccb19f32e62d7f9c8070a486ea63\": container with ID starting with bea99b73d232477af3de86c62a3fc3d6fadfccb19f32e62d7f9c8070a486ea63 not found: ID does not exist" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.868650 4893 scope.go:117] "RemoveContainer" containerID="cf46f4de868de68a2d8b155a8b106089df858786a3965c8c0448d954a7fb352b" Jan 21 07:01:25 crc kubenswrapper[4893]: E0121 07:01:25.869343 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf46f4de868de68a2d8b155a8b106089df858786a3965c8c0448d954a7fb352b\": container with ID starting with cf46f4de868de68a2d8b155a8b106089df858786a3965c8c0448d954a7fb352b not found: ID does not exist" containerID="cf46f4de868de68a2d8b155a8b106089df858786a3965c8c0448d954a7fb352b" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.869375 4893 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"cf46f4de868de68a2d8b155a8b106089df858786a3965c8c0448d954a7fb352b"} err="failed to get container status \"cf46f4de868de68a2d8b155a8b106089df858786a3965c8c0448d954a7fb352b\": rpc error: code = NotFound desc = could not find container \"cf46f4de868de68a2d8b155a8b106089df858786a3965c8c0448d954a7fb352b\": container with ID starting with cf46f4de868de68a2d8b155a8b106089df858786a3965c8c0448d954a7fb352b not found: ID does not exist" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.869400 4893 scope.go:117] "RemoveContainer" containerID="8ad0c5d8c32aed0d8b378a0d58420a48f56232bb9c8ef2eb53976c5d3f921874" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.870208 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kjxh2"] Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.873946 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kjxh2"] Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.876477 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f92d48d9-4ed9-42bb-b811-a8f43bbac2cd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.876511 4893 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/949c0965-b10c-4608-b2d0-effa8e19dff1-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.876528 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78a7ed86-0417-446d-aeaa-b71f6beb71ec-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.876538 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/582d8449-096d-4bfa-9dcc-9ef0b8661d50-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.876549 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78a7ed86-0417-446d-aeaa-b71f6beb71ec-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.876561 4893 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/949c0965-b10c-4608-b2d0-effa8e19dff1-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.876575 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/582d8449-096d-4bfa-9dcc-9ef0b8661d50-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.876586 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzz84\" (UniqueName: \"kubernetes.io/projected/582d8449-096d-4bfa-9dcc-9ef0b8661d50-kube-api-access-xzz84\") on node \"crc\" DevicePath \"\"" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.876596 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15ac06c3-345b-4ced-8c19-2edf0c831b70-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.876607 4893 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-wxfnk\" (UniqueName: \"kubernetes.io/projected/15ac06c3-345b-4ced-8c19-2edf0c831b70-kube-api-access-wxfnk\") on node \"crc\" DevicePath \"\"" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.876617 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wswnc\" (UniqueName: \"kubernetes.io/projected/949c0965-b10c-4608-b2d0-effa8e19dff1-kube-api-access-wswnc\") on node \"crc\" DevicePath \"\"" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.876627 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wc8m2\" (UniqueName: \"kubernetes.io/projected/78a7ed86-0417-446d-aeaa-b71f6beb71ec-kube-api-access-wc8m2\") on node \"crc\" DevicePath \"\"" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.887950 4893 scope.go:117] "RemoveContainer" containerID="ae430f92c4577ae9429808c0e001e8c7b0f6973fdd49089be51f3719a838f4da" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.902909 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zpm9z"] Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.908461 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zpm9z"] Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.910386 4893 scope.go:117] "RemoveContainer" containerID="09678879fa0ed71f86539b331a3331c338eb032f4fe2bd34f45ca75d13a63cec" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.911782 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5rhqg"] Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.929513 4893 scope.go:117] "RemoveContainer" containerID="8ad0c5d8c32aed0d8b378a0d58420a48f56232bb9c8ef2eb53976c5d3f921874" Jan 21 07:01:25 crc kubenswrapper[4893]: E0121 07:01:25.930538 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ad0c5d8c32aed0d8b378a0d58420a48f56232bb9c8ef2eb53976c5d3f921874\": container with ID starting with 8ad0c5d8c32aed0d8b378a0d58420a48f56232bb9c8ef2eb53976c5d3f921874 not found: ID does not exist" containerID="8ad0c5d8c32aed0d8b378a0d58420a48f56232bb9c8ef2eb53976c5d3f921874" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.930565 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ad0c5d8c32aed0d8b378a0d58420a48f56232bb9c8ef2eb53976c5d3f921874"} err="failed to get container status \"8ad0c5d8c32aed0d8b378a0d58420a48f56232bb9c8ef2eb53976c5d3f921874\": rpc error: code = NotFound desc = could not find container \"8ad0c5d8c32aed0d8b378a0d58420a48f56232bb9c8ef2eb53976c5d3f921874\": container with ID starting with 8ad0c5d8c32aed0d8b378a0d58420a48f56232bb9c8ef2eb53976c5d3f921874 not found: ID does not exist" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.930594 4893 scope.go:117] "RemoveContainer" containerID="ae430f92c4577ae9429808c0e001e8c7b0f6973fdd49089be51f3719a838f4da" Jan 21 07:01:25 crc kubenswrapper[4893]: E0121 07:01:25.930985 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae430f92c4577ae9429808c0e001e8c7b0f6973fdd49089be51f3719a838f4da\": container with ID starting with ae430f92c4577ae9429808c0e001e8c7b0f6973fdd49089be51f3719a838f4da not found: ID does not exist" containerID="ae430f92c4577ae9429808c0e001e8c7b0f6973fdd49089be51f3719a838f4da" Jan 21 07:01:25 crc 
kubenswrapper[4893]: I0121 07:01:25.931009 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae430f92c4577ae9429808c0e001e8c7b0f6973fdd49089be51f3719a838f4da"} err="failed to get container status \"ae430f92c4577ae9429808c0e001e8c7b0f6973fdd49089be51f3719a838f4da\": rpc error: code = NotFound desc = could not find container \"ae430f92c4577ae9429808c0e001e8c7b0f6973fdd49089be51f3719a838f4da\": container with ID starting with ae430f92c4577ae9429808c0e001e8c7b0f6973fdd49089be51f3719a838f4da not found: ID does not exist" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.931025 4893 scope.go:117] "RemoveContainer" containerID="09678879fa0ed71f86539b331a3331c338eb032f4fe2bd34f45ca75d13a63cec" Jan 21 07:01:25 crc kubenswrapper[4893]: E0121 07:01:25.931293 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09678879fa0ed71f86539b331a3331c338eb032f4fe2bd34f45ca75d13a63cec\": container with ID starting with 09678879fa0ed71f86539b331a3331c338eb032f4fe2bd34f45ca75d13a63cec not found: ID does not exist" containerID="09678879fa0ed71f86539b331a3331c338eb032f4fe2bd34f45ca75d13a63cec" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.931312 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09678879fa0ed71f86539b331a3331c338eb032f4fe2bd34f45ca75d13a63cec"} err="failed to get container status \"09678879fa0ed71f86539b331a3331c338eb032f4fe2bd34f45ca75d13a63cec\": rpc error: code = NotFound desc = could not find container \"09678879fa0ed71f86539b331a3331c338eb032f4fe2bd34f45ca75d13a63cec\": container with ID starting with 09678879fa0ed71f86539b331a3331c338eb032f4fe2bd34f45ca75d13a63cec not found: ID does not exist" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.931330 4893 scope.go:117] "RemoveContainer" containerID="6e035160d709d1332922dd5a2145a3757c9276770bceb2f956394c75ed9f90bc" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.940158 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15ac06c3-345b-4ced-8c19-2edf0c831b70-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "15ac06c3-345b-4ced-8c19-2edf0c831b70" (UID: "15ac06c3-345b-4ced-8c19-2edf0c831b70"). InnerVolumeSpecName "catalog-content". 
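Note: the "SyncLoop DELETE" / "SyncLoop REMOVE" pairs are the kubelet's main sync loop consuming API-server watch events: DELETE arrives when the pod object is marked for deletion, REMOVE when it finally disappears from the API, and ADD/UPDATE cover new pods and spec changes. A schematic Go sketch of this kind of dispatch loop (the types and channel are illustrative, not the kubelet's actual definitions):

    package syncloop

    import "log"

    type PodUpdateOp int

    const (
        ADD    PodUpdateOp = iota // new pod assigned to this node
        UPDATE                    // pod spec or status changed
        DELETE                    // object marked deleted; graceful termination starts
        REMOVE                    // object fully gone; tear down what is left
    )

    type PodUpdate struct {
        Op     PodUpdateOp
        Source string   // e.g. "api"
        Pods   []string // namespace/name keys
    }

    // run dispatches watch-driven updates, mirroring the SyncLoop entries above.
    func run(updates <-chan PodUpdate) {
        for u := range updates {
            switch u.Op {
            case ADD:
                log.Printf("SyncLoop ADD source=%q pods=%v", u.Source, u.Pods)
            case UPDATE:
                log.Printf("SyncLoop UPDATE source=%q pods=%v", u.Source, u.Pods)
            case DELETE:
                log.Printf("SyncLoop DELETE source=%q pods=%v", u.Source, u.Pods)
            case REMOVE:
                log.Printf("SyncLoop REMOVE source=%q pods=%v", u.Source, u.Pods)
            }
        }
    }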
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.957514 4893 scope.go:117] "RemoveContainer" containerID="8401f4f5b4146b4f84b058da71f42880d0da3c6ddf2376a2bfe205059de6d0ba" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.977663 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15ac06c3-345b-4ced-8c19-2edf0c831b70-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 07:01:25 crc kubenswrapper[4893]: I0121 07:01:25.989767 4893 scope.go:117] "RemoveContainer" containerID="417f7fa8d0c43a5aa86a61e68bd6162f1436e0f57b5dbf653e30712b229418c2" Jan 21 07:01:26 crc kubenswrapper[4893]: I0121 07:01:26.011059 4893 scope.go:117] "RemoveContainer" containerID="6e035160d709d1332922dd5a2145a3757c9276770bceb2f956394c75ed9f90bc" Jan 21 07:01:26 crc kubenswrapper[4893]: E0121 07:01:26.012611 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e035160d709d1332922dd5a2145a3757c9276770bceb2f956394c75ed9f90bc\": container with ID starting with 6e035160d709d1332922dd5a2145a3757c9276770bceb2f956394c75ed9f90bc not found: ID does not exist" containerID="6e035160d709d1332922dd5a2145a3757c9276770bceb2f956394c75ed9f90bc" Jan 21 07:01:26 crc kubenswrapper[4893]: I0121 07:01:26.012741 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e035160d709d1332922dd5a2145a3757c9276770bceb2f956394c75ed9f90bc"} err="failed to get container status \"6e035160d709d1332922dd5a2145a3757c9276770bceb2f956394c75ed9f90bc\": rpc error: code = NotFound desc = could not find container \"6e035160d709d1332922dd5a2145a3757c9276770bceb2f956394c75ed9f90bc\": container with ID starting with 6e035160d709d1332922dd5a2145a3757c9276770bceb2f956394c75ed9f90bc not found: ID does not exist" Jan 21 07:01:26 crc kubenswrapper[4893]: I0121 07:01:26.012779 4893 scope.go:117] "RemoveContainer" containerID="8401f4f5b4146b4f84b058da71f42880d0da3c6ddf2376a2bfe205059de6d0ba" Jan 21 07:01:26 crc kubenswrapper[4893]: E0121 07:01:26.013285 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8401f4f5b4146b4f84b058da71f42880d0da3c6ddf2376a2bfe205059de6d0ba\": container with ID starting with 8401f4f5b4146b4f84b058da71f42880d0da3c6ddf2376a2bfe205059de6d0ba not found: ID does not exist" containerID="8401f4f5b4146b4f84b058da71f42880d0da3c6ddf2376a2bfe205059de6d0ba" Jan 21 07:01:26 crc kubenswrapper[4893]: I0121 07:01:26.013334 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8401f4f5b4146b4f84b058da71f42880d0da3c6ddf2376a2bfe205059de6d0ba"} err="failed to get container status \"8401f4f5b4146b4f84b058da71f42880d0da3c6ddf2376a2bfe205059de6d0ba\": rpc error: code = NotFound desc = could not find container \"8401f4f5b4146b4f84b058da71f42880d0da3c6ddf2376a2bfe205059de6d0ba\": container with ID starting with 8401f4f5b4146b4f84b058da71f42880d0da3c6ddf2376a2bfe205059de6d0ba not found: ID does not exist" Jan 21 07:01:26 crc kubenswrapper[4893]: I0121 07:01:26.013370 4893 scope.go:117] "RemoveContainer" containerID="417f7fa8d0c43a5aa86a61e68bd6162f1436e0f57b5dbf653e30712b229418c2" Jan 21 07:01:26 crc kubenswrapper[4893]: E0121 07:01:26.014426 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"417f7fa8d0c43a5aa86a61e68bd6162f1436e0f57b5dbf653e30712b229418c2\": container with ID starting with 417f7fa8d0c43a5aa86a61e68bd6162f1436e0f57b5dbf653e30712b229418c2 not found: ID does not exist" containerID="417f7fa8d0c43a5aa86a61e68bd6162f1436e0f57b5dbf653e30712b229418c2" Jan 21 07:01:26 crc kubenswrapper[4893]: I0121 07:01:26.014451 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"417f7fa8d0c43a5aa86a61e68bd6162f1436e0f57b5dbf653e30712b229418c2"} err="failed to get container status \"417f7fa8d0c43a5aa86a61e68bd6162f1436e0f57b5dbf653e30712b229418c2\": rpc error: code = NotFound desc = could not find container \"417f7fa8d0c43a5aa86a61e68bd6162f1436e0f57b5dbf653e30712b229418c2\": container with ID starting with 417f7fa8d0c43a5aa86a61e68bd6162f1436e0f57b5dbf653e30712b229418c2 not found: ID does not exist" Jan 21 07:01:26 crc kubenswrapper[4893]: I0121 07:01:26.014465 4893 scope.go:117] "RemoveContainer" containerID="cd461098d345efc65577c6fc1041721da0a632d6c041e08785c685e173f1e82d" Jan 21 07:01:26 crc kubenswrapper[4893]: I0121 07:01:26.037846 4893 scope.go:117] "RemoveContainer" containerID="d52d6e4c0a2d2ae6cd6f3417b774f4804b5e37c67bf8d8db8b5fcc16eecd319c" Jan 21 07:01:26 crc kubenswrapper[4893]: I0121 07:01:26.058464 4893 scope.go:117] "RemoveContainer" containerID="30a46dd98e139e7a99693b109572aba46ee6d867ba054f323d7681ed8520af76" Jan 21 07:01:26 crc kubenswrapper[4893]: I0121 07:01:26.128579 4893 scope.go:117] "RemoveContainer" containerID="cd461098d345efc65577c6fc1041721da0a632d6c041e08785c685e173f1e82d" Jan 21 07:01:26 crc kubenswrapper[4893]: E0121 07:01:26.129583 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd461098d345efc65577c6fc1041721da0a632d6c041e08785c685e173f1e82d\": container with ID starting with cd461098d345efc65577c6fc1041721da0a632d6c041e08785c685e173f1e82d not found: ID does not exist" containerID="cd461098d345efc65577c6fc1041721da0a632d6c041e08785c685e173f1e82d" Jan 21 07:01:26 crc kubenswrapper[4893]: I0121 07:01:26.129656 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd461098d345efc65577c6fc1041721da0a632d6c041e08785c685e173f1e82d"} err="failed to get container status \"cd461098d345efc65577c6fc1041721da0a632d6c041e08785c685e173f1e82d\": rpc error: code = NotFound desc = could not find container \"cd461098d345efc65577c6fc1041721da0a632d6c041e08785c685e173f1e82d\": container with ID starting with cd461098d345efc65577c6fc1041721da0a632d6c041e08785c685e173f1e82d not found: ID does not exist" Jan 21 07:01:26 crc kubenswrapper[4893]: I0121 07:01:26.129712 4893 scope.go:117] "RemoveContainer" containerID="d52d6e4c0a2d2ae6cd6f3417b774f4804b5e37c67bf8d8db8b5fcc16eecd319c" Jan 21 07:01:26 crc kubenswrapper[4893]: E0121 07:01:26.130332 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d52d6e4c0a2d2ae6cd6f3417b774f4804b5e37c67bf8d8db8b5fcc16eecd319c\": container with ID starting with d52d6e4c0a2d2ae6cd6f3417b774f4804b5e37c67bf8d8db8b5fcc16eecd319c not found: ID does not exist" containerID="d52d6e4c0a2d2ae6cd6f3417b774f4804b5e37c67bf8d8db8b5fcc16eecd319c" Jan 21 07:01:26 crc kubenswrapper[4893]: I0121 07:01:26.130376 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d52d6e4c0a2d2ae6cd6f3417b774f4804b5e37c67bf8d8db8b5fcc16eecd319c"} err="failed to get container status 
\"d52d6e4c0a2d2ae6cd6f3417b774f4804b5e37c67bf8d8db8b5fcc16eecd319c\": rpc error: code = NotFound desc = could not find container \"d52d6e4c0a2d2ae6cd6f3417b774f4804b5e37c67bf8d8db8b5fcc16eecd319c\": container with ID starting with d52d6e4c0a2d2ae6cd6f3417b774f4804b5e37c67bf8d8db8b5fcc16eecd319c not found: ID does not exist" Jan 21 07:01:26 crc kubenswrapper[4893]: I0121 07:01:26.130399 4893 scope.go:117] "RemoveContainer" containerID="30a46dd98e139e7a99693b109572aba46ee6d867ba054f323d7681ed8520af76" Jan 21 07:01:26 crc kubenswrapper[4893]: E0121 07:01:26.130660 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30a46dd98e139e7a99693b109572aba46ee6d867ba054f323d7681ed8520af76\": container with ID starting with 30a46dd98e139e7a99693b109572aba46ee6d867ba054f323d7681ed8520af76 not found: ID does not exist" containerID="30a46dd98e139e7a99693b109572aba46ee6d867ba054f323d7681ed8520af76" Jan 21 07:01:26 crc kubenswrapper[4893]: I0121 07:01:26.130697 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30a46dd98e139e7a99693b109572aba46ee6d867ba054f323d7681ed8520af76"} err="failed to get container status \"30a46dd98e139e7a99693b109572aba46ee6d867ba054f323d7681ed8520af76\": rpc error: code = NotFound desc = could not find container \"30a46dd98e139e7a99693b109572aba46ee6d867ba054f323d7681ed8520af76\": container with ID starting with 30a46dd98e139e7a99693b109572aba46ee6d867ba054f323d7681ed8520af76 not found: ID does not exist" Jan 21 07:01:26 crc kubenswrapper[4893]: I0121 07:01:26.130713 4893 scope.go:117] "RemoveContainer" containerID="ab18e86fd24b9a6c1b24ee0a9ad6ffbfb069977d1a179ecb912ea935038dee56" Jan 21 07:01:26 crc kubenswrapper[4893]: I0121 07:01:26.153371 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nq4z8"] Jan 21 07:01:26 crc kubenswrapper[4893]: I0121 07:01:26.154597 4893 scope.go:117] "RemoveContainer" containerID="d74fb0142bc8d85c3b02f0be90e39d72f74253abb9817a92886abb026a719385" Jan 21 07:01:26 crc kubenswrapper[4893]: I0121 07:01:26.157089 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nq4z8"] Jan 21 07:01:26 crc kubenswrapper[4893]: I0121 07:01:26.170448 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ztll7"] Jan 21 07:01:26 crc kubenswrapper[4893]: I0121 07:01:26.175452 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ztll7"] Jan 21 07:01:26 crc kubenswrapper[4893]: I0121 07:01:26.178353 4893 scope.go:117] "RemoveContainer" containerID="ab18e86fd24b9a6c1b24ee0a9ad6ffbfb069977d1a179ecb912ea935038dee56" Jan 21 07:01:26 crc kubenswrapper[4893]: E0121 07:01:26.179013 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab18e86fd24b9a6c1b24ee0a9ad6ffbfb069977d1a179ecb912ea935038dee56\": container with ID starting with ab18e86fd24b9a6c1b24ee0a9ad6ffbfb069977d1a179ecb912ea935038dee56 not found: ID does not exist" containerID="ab18e86fd24b9a6c1b24ee0a9ad6ffbfb069977d1a179ecb912ea935038dee56" Jan 21 07:01:26 crc kubenswrapper[4893]: I0121 07:01:26.179059 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab18e86fd24b9a6c1b24ee0a9ad6ffbfb069977d1a179ecb912ea935038dee56"} err="failed to get container status 
\"ab18e86fd24b9a6c1b24ee0a9ad6ffbfb069977d1a179ecb912ea935038dee56\": rpc error: code = NotFound desc = could not find container \"ab18e86fd24b9a6c1b24ee0a9ad6ffbfb069977d1a179ecb912ea935038dee56\": container with ID starting with ab18e86fd24b9a6c1b24ee0a9ad6ffbfb069977d1a179ecb912ea935038dee56 not found: ID does not exist" Jan 21 07:01:26 crc kubenswrapper[4893]: I0121 07:01:26.179087 4893 scope.go:117] "RemoveContainer" containerID="d74fb0142bc8d85c3b02f0be90e39d72f74253abb9817a92886abb026a719385" Jan 21 07:01:26 crc kubenswrapper[4893]: E0121 07:01:26.180901 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d74fb0142bc8d85c3b02f0be90e39d72f74253abb9817a92886abb026a719385\": container with ID starting with d74fb0142bc8d85c3b02f0be90e39d72f74253abb9817a92886abb026a719385 not found: ID does not exist" containerID="d74fb0142bc8d85c3b02f0be90e39d72f74253abb9817a92886abb026a719385" Jan 21 07:01:26 crc kubenswrapper[4893]: I0121 07:01:26.180976 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d74fb0142bc8d85c3b02f0be90e39d72f74253abb9817a92886abb026a719385"} err="failed to get container status \"d74fb0142bc8d85c3b02f0be90e39d72f74253abb9817a92886abb026a719385\": rpc error: code = NotFound desc = could not find container \"d74fb0142bc8d85c3b02f0be90e39d72f74253abb9817a92886abb026a719385\": container with ID starting with d74fb0142bc8d85c3b02f0be90e39d72f74253abb9817a92886abb026a719385 not found: ID does not exist" Jan 21 07:01:26 crc kubenswrapper[4893]: I0121 07:01:26.184693 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gv7xc"] Jan 21 07:01:26 crc kubenswrapper[4893]: I0121 07:01:26.191366 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gv7xc"] Jan 21 07:01:26 crc kubenswrapper[4893]: I0121 07:01:26.840626 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5rhqg" event={"ID":"2138e3c3-e583-4a97-84d9-084c1eb72e2a","Type":"ContainerStarted","Data":"8c600c4d35ab7178af90120442051b4004942149ebe1c50f47b2e37d07a386fb"} Jan 21 07:01:26 crc kubenswrapper[4893]: I0121 07:01:26.841077 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5rhqg" event={"ID":"2138e3c3-e583-4a97-84d9-084c1eb72e2a","Type":"ContainerStarted","Data":"90fcc7e1ce40267600268e504597e92b624a2a4d06747d2b986d3988fbdeee4e"} Jan 21 07:01:26 crc kubenswrapper[4893]: I0121 07:01:26.841106 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-5rhqg" Jan 21 07:01:26 crc kubenswrapper[4893]: I0121 07:01:26.843077 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-5rhqg" Jan 21 07:01:26 crc kubenswrapper[4893]: I0121 07:01:26.867996 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-5rhqg" podStartSLOduration=2.867966972 podStartE2EDuration="2.867966972s" podCreationTimestamp="2026-01-21 07:01:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:01:26.863997926 +0000 UTC m=+428.094343838" watchObservedRunningTime="2026-01-21 07:01:26.867966972 +0000 UTC 
m=+428.098312874" Jan 21 07:01:27 crc kubenswrapper[4893]: I0121 07:01:27.118721 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-78fbz"] Jan 21 07:01:27 crc kubenswrapper[4893]: E0121 07:01:27.118960 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="582d8449-096d-4bfa-9dcc-9ef0b8661d50" containerName="extract-content" Jan 21 07:01:27 crc kubenswrapper[4893]: I0121 07:01:27.118972 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="582d8449-096d-4bfa-9dcc-9ef0b8661d50" containerName="extract-content" Jan 21 07:01:27 crc kubenswrapper[4893]: E0121 07:01:27.118981 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="949c0965-b10c-4608-b2d0-effa8e19dff1" containerName="marketplace-operator" Jan 21 07:01:27 crc kubenswrapper[4893]: I0121 07:01:27.118987 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="949c0965-b10c-4608-b2d0-effa8e19dff1" containerName="marketplace-operator" Jan 21 07:01:27 crc kubenswrapper[4893]: E0121 07:01:27.118998 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f92d48d9-4ed9-42bb-b811-a8f43bbac2cd" containerName="registry-server" Jan 21 07:01:27 crc kubenswrapper[4893]: I0121 07:01:27.119004 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="f92d48d9-4ed9-42bb-b811-a8f43bbac2cd" containerName="registry-server" Jan 21 07:01:27 crc kubenswrapper[4893]: E0121 07:01:27.119023 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78a7ed86-0417-446d-aeaa-b71f6beb71ec" containerName="extract-utilities" Jan 21 07:01:27 crc kubenswrapper[4893]: I0121 07:01:27.119029 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="78a7ed86-0417-446d-aeaa-b71f6beb71ec" containerName="extract-utilities" Jan 21 07:01:27 crc kubenswrapper[4893]: E0121 07:01:27.119220 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78a7ed86-0417-446d-aeaa-b71f6beb71ec" containerName="extract-content" Jan 21 07:01:27 crc kubenswrapper[4893]: I0121 07:01:27.119234 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="78a7ed86-0417-446d-aeaa-b71f6beb71ec" containerName="extract-content" Jan 21 07:01:27 crc kubenswrapper[4893]: E0121 07:01:27.119256 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15ac06c3-345b-4ced-8c19-2edf0c831b70" containerName="registry-server" Jan 21 07:01:27 crc kubenswrapper[4893]: I0121 07:01:27.119264 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="15ac06c3-345b-4ced-8c19-2edf0c831b70" containerName="registry-server" Jan 21 07:01:27 crc kubenswrapper[4893]: E0121 07:01:27.119274 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="949c0965-b10c-4608-b2d0-effa8e19dff1" containerName="marketplace-operator" Jan 21 07:01:27 crc kubenswrapper[4893]: I0121 07:01:27.119290 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="949c0965-b10c-4608-b2d0-effa8e19dff1" containerName="marketplace-operator" Jan 21 07:01:27 crc kubenswrapper[4893]: E0121 07:01:27.119306 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f92d48d9-4ed9-42bb-b811-a8f43bbac2cd" containerName="extract-content" Jan 21 07:01:27 crc kubenswrapper[4893]: I0121 07:01:27.119313 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="f92d48d9-4ed9-42bb-b811-a8f43bbac2cd" containerName="extract-content" Jan 21 07:01:27 crc kubenswrapper[4893]: E0121 07:01:27.119321 4893 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f92d48d9-4ed9-42bb-b811-a8f43bbac2cd" containerName="extract-utilities" Jan 21 07:01:27 crc kubenswrapper[4893]: I0121 07:01:27.119327 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="f92d48d9-4ed9-42bb-b811-a8f43bbac2cd" containerName="extract-utilities" Jan 21 07:01:27 crc kubenswrapper[4893]: E0121 07:01:27.119339 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="582d8449-096d-4bfa-9dcc-9ef0b8661d50" containerName="extract-utilities" Jan 21 07:01:27 crc kubenswrapper[4893]: I0121 07:01:27.119347 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="582d8449-096d-4bfa-9dcc-9ef0b8661d50" containerName="extract-utilities" Jan 21 07:01:27 crc kubenswrapper[4893]: E0121 07:01:27.119358 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="582d8449-096d-4bfa-9dcc-9ef0b8661d50" containerName="registry-server" Jan 21 07:01:27 crc kubenswrapper[4893]: I0121 07:01:27.119367 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="582d8449-096d-4bfa-9dcc-9ef0b8661d50" containerName="registry-server" Jan 21 07:01:27 crc kubenswrapper[4893]: E0121 07:01:27.119387 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78a7ed86-0417-446d-aeaa-b71f6beb71ec" containerName="registry-server" Jan 21 07:01:27 crc kubenswrapper[4893]: I0121 07:01:27.119393 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="78a7ed86-0417-446d-aeaa-b71f6beb71ec" containerName="registry-server" Jan 21 07:01:27 crc kubenswrapper[4893]: E0121 07:01:27.119402 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15ac06c3-345b-4ced-8c19-2edf0c831b70" containerName="extract-utilities" Jan 21 07:01:27 crc kubenswrapper[4893]: I0121 07:01:27.119408 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="15ac06c3-345b-4ced-8c19-2edf0c831b70" containerName="extract-utilities" Jan 21 07:01:27 crc kubenswrapper[4893]: E0121 07:01:27.119417 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15ac06c3-345b-4ced-8c19-2edf0c831b70" containerName="extract-content" Jan 21 07:01:27 crc kubenswrapper[4893]: I0121 07:01:27.119423 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="15ac06c3-345b-4ced-8c19-2edf0c831b70" containerName="extract-content" Jan 21 07:01:27 crc kubenswrapper[4893]: I0121 07:01:27.119521 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="582d8449-096d-4bfa-9dcc-9ef0b8661d50" containerName="registry-server" Jan 21 07:01:27 crc kubenswrapper[4893]: I0121 07:01:27.119534 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="949c0965-b10c-4608-b2d0-effa8e19dff1" containerName="marketplace-operator" Jan 21 07:01:27 crc kubenswrapper[4893]: I0121 07:01:27.119542 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="15ac06c3-345b-4ced-8c19-2edf0c831b70" containerName="registry-server" Jan 21 07:01:27 crc kubenswrapper[4893]: I0121 07:01:27.119550 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="949c0965-b10c-4608-b2d0-effa8e19dff1" containerName="marketplace-operator" Jan 21 07:01:27 crc kubenswrapper[4893]: I0121 07:01:27.119556 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="78a7ed86-0417-446d-aeaa-b71f6beb71ec" containerName="registry-server" Jan 21 07:01:27 crc kubenswrapper[4893]: I0121 07:01:27.119566 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="f92d48d9-4ed9-42bb-b811-a8f43bbac2cd" containerName="registry-server" Jan 21 07:01:27 crc kubenswrapper[4893]: I0121 
Jan 21 07:01:27 crc kubenswrapper[4893]: I0121 07:01:27.120466 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-78fbz"
Jan 21 07:01:27 crc kubenswrapper[4893]: I0121 07:01:27.126760 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 21 07:01:27 crc kubenswrapper[4893]: I0121 07:01:27.131363 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-78fbz"]
Jan 21 07:01:27 crc kubenswrapper[4893]: I0121 07:01:27.317549 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2x6tf"]
Jan 21 07:01:27 crc kubenswrapper[4893]: I0121 07:01:27.318949 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2x6tf"
Jan 21 07:01:27 crc kubenswrapper[4893]: I0121 07:01:27.324371 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 21 07:01:27 crc kubenswrapper[4893]: I0121 07:01:27.338305 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2x6tf"]
Jan 21 07:01:27 crc kubenswrapper[4893]: I0121 07:01:27.346305 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd1b9df0-d8a3-4418-9d7d-39413613fbfc-utilities\") pod \"redhat-marketplace-78fbz\" (UID: \"bd1b9df0-d8a3-4418-9d7d-39413613fbfc\") " pod="openshift-marketplace/redhat-marketplace-78fbz"
Jan 21 07:01:27 crc kubenswrapper[4893]: I0121 07:01:27.346731 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd1b9df0-d8a3-4418-9d7d-39413613fbfc-catalog-content\") pod \"redhat-marketplace-78fbz\" (UID: \"bd1b9df0-d8a3-4418-9d7d-39413613fbfc\") " pod="openshift-marketplace/redhat-marketplace-78fbz"
Jan 21 07:01:27 crc kubenswrapper[4893]: I0121 07:01:27.346883 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6ms9\" (UniqueName: \"kubernetes.io/projected/bd1b9df0-d8a3-4418-9d7d-39413613fbfc-kube-api-access-l6ms9\") pod \"redhat-marketplace-78fbz\" (UID: \"bd1b9df0-d8a3-4418-9d7d-39413613fbfc\") " pod="openshift-marketplace/redhat-marketplace-78fbz"
Jan 21 07:01:27 crc kubenswrapper[4893]: I0121 07:01:27.447969 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd1b9df0-d8a3-4418-9d7d-39413613fbfc-utilities\") pod \"redhat-marketplace-78fbz\" (UID: \"bd1b9df0-d8a3-4418-9d7d-39413613fbfc\") " pod="openshift-marketplace/redhat-marketplace-78fbz"
Jan 21 07:01:27 crc kubenswrapper[4893]: I0121 07:01:27.448096 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd1b9df0-d8a3-4418-9d7d-39413613fbfc-catalog-content\") pod \"redhat-marketplace-78fbz\" (UID: \"bd1b9df0-d8a3-4418-9d7d-39413613fbfc\") " pod="openshift-marketplace/redhat-marketplace-78fbz"
Jan 21 07:01:27 crc kubenswrapper[4893]: I0121 07:01:27.448137 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6ms9\" (UniqueName: \"kubernetes.io/projected/bd1b9df0-d8a3-4418-9d7d-39413613fbfc-kube-api-access-l6ms9\") pod \"redhat-marketplace-78fbz\" (UID: \"bd1b9df0-d8a3-4418-9d7d-39413613fbfc\") " pod="openshift-marketplace/redhat-marketplace-78fbz"
Jan 21 07:01:27 crc kubenswrapper[4893]: I0121 07:01:27.448190 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfqlt\" (UniqueName: \"kubernetes.io/projected/939a64aa-242b-4e64-8d78-48770fb3063d-kube-api-access-xfqlt\") pod \"redhat-operators-2x6tf\" (UID: \"939a64aa-242b-4e64-8d78-48770fb3063d\") " pod="openshift-marketplace/redhat-operators-2x6tf"
Jan 21 07:01:27 crc kubenswrapper[4893]: I0121 07:01:27.448226 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/939a64aa-242b-4e64-8d78-48770fb3063d-catalog-content\") pod \"redhat-operators-2x6tf\" (UID: \"939a64aa-242b-4e64-8d78-48770fb3063d\") " pod="openshift-marketplace/redhat-operators-2x6tf"
Jan 21 07:01:27 crc kubenswrapper[4893]: I0121 07:01:27.448274 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/939a64aa-242b-4e64-8d78-48770fb3063d-utilities\") pod \"redhat-operators-2x6tf\" (UID: \"939a64aa-242b-4e64-8d78-48770fb3063d\") " pod="openshift-marketplace/redhat-operators-2x6tf"
Jan 21 07:01:27 crc kubenswrapper[4893]: I0121 07:01:27.448582 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd1b9df0-d8a3-4418-9d7d-39413613fbfc-utilities\") pod \"redhat-marketplace-78fbz\" (UID: \"bd1b9df0-d8a3-4418-9d7d-39413613fbfc\") " pod="openshift-marketplace/redhat-marketplace-78fbz"
Jan 21 07:01:27 crc kubenswrapper[4893]: I0121 07:01:27.448857 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd1b9df0-d8a3-4418-9d7d-39413613fbfc-catalog-content\") pod \"redhat-marketplace-78fbz\" (UID: \"bd1b9df0-d8a3-4418-9d7d-39413613fbfc\") " pod="openshift-marketplace/redhat-marketplace-78fbz"
Jan 21 07:01:27 crc kubenswrapper[4893]: I0121 07:01:27.470766 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6ms9\" (UniqueName: \"kubernetes.io/projected/bd1b9df0-d8a3-4418-9d7d-39413613fbfc-kube-api-access-l6ms9\") pod \"redhat-marketplace-78fbz\" (UID: \"bd1b9df0-d8a3-4418-9d7d-39413613fbfc\") " pod="openshift-marketplace/redhat-marketplace-78fbz"
Jan 21 07:01:27 crc kubenswrapper[4893]: I0121 07:01:27.549188 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfqlt\" (UniqueName: \"kubernetes.io/projected/939a64aa-242b-4e64-8d78-48770fb3063d-kube-api-access-xfqlt\") pod \"redhat-operators-2x6tf\" (UID: \"939a64aa-242b-4e64-8d78-48770fb3063d\") " pod="openshift-marketplace/redhat-operators-2x6tf"
Jan 21 07:01:27 crc kubenswrapper[4893]: I0121 07:01:27.549245 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/939a64aa-242b-4e64-8d78-48770fb3063d-catalog-content\") pod \"redhat-operators-2x6tf\" (UID: \"939a64aa-242b-4e64-8d78-48770fb3063d\") " pod="openshift-marketplace/redhat-operators-2x6tf"
\"redhat-operators-2x6tf\" (UID: \"939a64aa-242b-4e64-8d78-48770fb3063d\") " pod="openshift-marketplace/redhat-operators-2x6tf" Jan 21 07:01:27 crc kubenswrapper[4893]: I0121 07:01:27.549779 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/939a64aa-242b-4e64-8d78-48770fb3063d-utilities\") pod \"redhat-operators-2x6tf\" (UID: \"939a64aa-242b-4e64-8d78-48770fb3063d\") " pod="openshift-marketplace/redhat-operators-2x6tf" Jan 21 07:01:27 crc kubenswrapper[4893]: I0121 07:01:27.549896 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/939a64aa-242b-4e64-8d78-48770fb3063d-catalog-content\") pod \"redhat-operators-2x6tf\" (UID: \"939a64aa-242b-4e64-8d78-48770fb3063d\") " pod="openshift-marketplace/redhat-operators-2x6tf" Jan 21 07:01:27 crc kubenswrapper[4893]: I0121 07:01:27.560092 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-78fbz" Jan 21 07:01:27 crc kubenswrapper[4893]: I0121 07:01:27.564719 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfqlt\" (UniqueName: \"kubernetes.io/projected/939a64aa-242b-4e64-8d78-48770fb3063d-kube-api-access-xfqlt\") pod \"redhat-operators-2x6tf\" (UID: \"939a64aa-242b-4e64-8d78-48770fb3063d\") " pod="openshift-marketplace/redhat-operators-2x6tf" Jan 21 07:01:27 crc kubenswrapper[4893]: I0121 07:01:27.589620 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15ac06c3-345b-4ced-8c19-2edf0c831b70" path="/var/lib/kubelet/pods/15ac06c3-345b-4ced-8c19-2edf0c831b70/volumes" Jan 21 07:01:27 crc kubenswrapper[4893]: I0121 07:01:27.590401 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="582d8449-096d-4bfa-9dcc-9ef0b8661d50" path="/var/lib/kubelet/pods/582d8449-096d-4bfa-9dcc-9ef0b8661d50/volumes" Jan 21 07:01:27 crc kubenswrapper[4893]: I0121 07:01:27.591127 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78a7ed86-0417-446d-aeaa-b71f6beb71ec" path="/var/lib/kubelet/pods/78a7ed86-0417-446d-aeaa-b71f6beb71ec/volumes" Jan 21 07:01:27 crc kubenswrapper[4893]: I0121 07:01:27.592514 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="949c0965-b10c-4608-b2d0-effa8e19dff1" path="/var/lib/kubelet/pods/949c0965-b10c-4608-b2d0-effa8e19dff1/volumes" Jan 21 07:01:27 crc kubenswrapper[4893]: I0121 07:01:27.595106 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f92d48d9-4ed9-42bb-b811-a8f43bbac2cd" path="/var/lib/kubelet/pods/f92d48d9-4ed9-42bb-b811-a8f43bbac2cd/volumes" Jan 21 07:01:27 crc kubenswrapper[4893]: I0121 07:01:27.638552 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2x6tf" Jan 21 07:01:27 crc kubenswrapper[4893]: I0121 07:01:27.986232 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-78fbz"] Jan 21 07:01:27 crc kubenswrapper[4893]: W0121 07:01:27.988330 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd1b9df0_d8a3_4418_9d7d_39413613fbfc.slice/crio-0f04ccafb4912db2a88fcd4c5421fd10892747804ab3e6028209c37edee92228 WatchSource:0}: Error finding container 0f04ccafb4912db2a88fcd4c5421fd10892747804ab3e6028209c37edee92228: Status 404 returned error can't find the container with id 0f04ccafb4912db2a88fcd4c5421fd10892747804ab3e6028209c37edee92228 Jan 21 07:01:28 crc kubenswrapper[4893]: I0121 07:01:28.082550 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2x6tf"] Jan 21 07:01:28 crc kubenswrapper[4893]: W0121 07:01:28.086603 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod939a64aa_242b_4e64_8d78_48770fb3063d.slice/crio-5233e70950d26f482ef6b68cad93330da9f205ad02e10b51b72154aa8c0ecc3b WatchSource:0}: Error finding container 5233e70950d26f482ef6b68cad93330da9f205ad02e10b51b72154aa8c0ecc3b: Status 404 returned error can't find the container with id 5233e70950d26f482ef6b68cad93330da9f205ad02e10b51b72154aa8c0ecc3b Jan 21 07:01:28 crc kubenswrapper[4893]: I0121 07:01:28.656440 4893 patch_prober.go:28] interesting pod/machine-config-daemon-hg78p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 07:01:28 crc kubenswrapper[4893]: I0121 07:01:28.657109 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 07:01:28 crc kubenswrapper[4893]: I0121 07:01:28.657185 4893 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" Jan 21 07:01:28 crc kubenswrapper[4893]: I0121 07:01:28.658250 4893 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8f7067b47f82d2bb0d676445d6ea974a24da36b4a8f269831103214d2d596232"} pod="openshift-machine-config-operator/machine-config-daemon-hg78p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 07:01:28 crc kubenswrapper[4893]: I0121 07:01:28.658342 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" containerID="cri-o://8f7067b47f82d2bb0d676445d6ea974a24da36b4a8f269831103214d2d596232" gracePeriod=600 Jan 21 07:01:28 crc kubenswrapper[4893]: I0121 07:01:28.980134 4893 generic.go:334] "Generic (PLEG): container finished" podID="939a64aa-242b-4e64-8d78-48770fb3063d" containerID="5643a2d623b82512106382718242254f3ba9e7312ed42f2bb1e6acc8a9659946" exitCode=0 Jan 21 07:01:28 crc 
Jan 21 07:01:28 crc kubenswrapper[4893]: I0121 07:01:28.980200 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2x6tf" event={"ID":"939a64aa-242b-4e64-8d78-48770fb3063d","Type":"ContainerDied","Data":"5643a2d623b82512106382718242254f3ba9e7312ed42f2bb1e6acc8a9659946"}
Jan 21 07:01:28 crc kubenswrapper[4893]: I0121 07:01:28.980229 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2x6tf" event={"ID":"939a64aa-242b-4e64-8d78-48770fb3063d","Type":"ContainerStarted","Data":"5233e70950d26f482ef6b68cad93330da9f205ad02e10b51b72154aa8c0ecc3b"}
Jan 21 07:01:28 crc kubenswrapper[4893]: I0121 07:01:28.990164 4893 generic.go:334] "Generic (PLEG): container finished" podID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerID="8f7067b47f82d2bb0d676445d6ea974a24da36b4a8f269831103214d2d596232" exitCode=0
Jan 21 07:01:28 crc kubenswrapper[4893]: I0121 07:01:28.990243 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" event={"ID":"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a","Type":"ContainerDied","Data":"8f7067b47f82d2bb0d676445d6ea974a24da36b4a8f269831103214d2d596232"}
Jan 21 07:01:28 crc kubenswrapper[4893]: I0121 07:01:28.990274 4893 scope.go:117] "RemoveContainer" containerID="59520d6be8547ef44262866e4c11b1ae43ae8ef545545a93c291f5e238718a75"
Jan 21 07:01:28 crc kubenswrapper[4893]: I0121 07:01:28.994144 4893 generic.go:334] "Generic (PLEG): container finished" podID="bd1b9df0-d8a3-4418-9d7d-39413613fbfc" containerID="d48cade721b06531c7b5306fdff42d97f371d8ccf5cd52de8b9d9b51d01004fb" exitCode=0
Jan 21 07:01:28 crc kubenswrapper[4893]: I0121 07:01:28.994336 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-78fbz" event={"ID":"bd1b9df0-d8a3-4418-9d7d-39413613fbfc","Type":"ContainerDied","Data":"d48cade721b06531c7b5306fdff42d97f371d8ccf5cd52de8b9d9b51d01004fb"}
Jan 21 07:01:28 crc kubenswrapper[4893]: I0121 07:01:28.994395 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-78fbz" event={"ID":"bd1b9df0-d8a3-4418-9d7d-39413613fbfc","Type":"ContainerStarted","Data":"0f04ccafb4912db2a88fcd4c5421fd10892747804ab3e6028209c37edee92228"}
Jan 21 07:01:29 crc kubenswrapper[4893]: I0121 07:01:29.590076 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kpngd"]
Jan 21 07:01:29 crc kubenswrapper[4893]: I0121 07:01:29.592696 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kpngd"
Jan 21 07:01:29 crc kubenswrapper[4893]: I0121 07:01:29.596644 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kpngd"]
Jan 21 07:01:29 crc kubenswrapper[4893]: I0121 07:01:29.598321 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 21 07:01:29 crc kubenswrapper[4893]: I0121 07:01:29.720657 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kdhdx"]
Jan 21 07:01:29 crc kubenswrapper[4893]: I0121 07:01:29.722855 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kdhdx"
Jan 21 07:01:29 crc kubenswrapper[4893]: I0121 07:01:29.724739 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 21 07:01:29 crc kubenswrapper[4893]: I0121 07:01:29.726739 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kdhdx"]
Jan 21 07:01:29 crc kubenswrapper[4893]: I0121 07:01:29.754010 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t666\" (UniqueName: \"kubernetes.io/projected/5d6e0099-366c-4a80-9911-88b9a1ac3224-kube-api-access-5t666\") pod \"certified-operators-kpngd\" (UID: \"5d6e0099-366c-4a80-9911-88b9a1ac3224\") " pod="openshift-marketplace/certified-operators-kpngd"
Jan 21 07:01:29 crc kubenswrapper[4893]: I0121 07:01:29.754090 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97ad217b-b5b4-49ff-9a11-e6e78e871f69-utilities\") pod \"community-operators-kdhdx\" (UID: \"97ad217b-b5b4-49ff-9a11-e6e78e871f69\") " pod="openshift-marketplace/community-operators-kdhdx"
Jan 21 07:01:29 crc kubenswrapper[4893]: I0121 07:01:29.754129 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d6e0099-366c-4a80-9911-88b9a1ac3224-utilities\") pod \"certified-operators-kpngd\" (UID: \"5d6e0099-366c-4a80-9911-88b9a1ac3224\") " pod="openshift-marketplace/certified-operators-kpngd"
Jan 21 07:01:29 crc kubenswrapper[4893]: I0121 07:01:29.754174 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d6e0099-366c-4a80-9911-88b9a1ac3224-catalog-content\") pod \"certified-operators-kpngd\" (UID: \"5d6e0099-366c-4a80-9911-88b9a1ac3224\") " pod="openshift-marketplace/certified-operators-kpngd"
Jan 21 07:01:29 crc kubenswrapper[4893]: I0121 07:01:29.754196 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzbpk\" (UniqueName: \"kubernetes.io/projected/97ad217b-b5b4-49ff-9a11-e6e78e871f69-kube-api-access-kzbpk\") pod \"community-operators-kdhdx\" (UID: \"97ad217b-b5b4-49ff-9a11-e6e78e871f69\") " pod="openshift-marketplace/community-operators-kdhdx"
Jan 21 07:01:29 crc kubenswrapper[4893]: I0121 07:01:29.754245 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97ad217b-b5b4-49ff-9a11-e6e78e871f69-catalog-content\") pod \"community-operators-kdhdx\" (UID: \"97ad217b-b5b4-49ff-9a11-e6e78e871f69\") " pod="openshift-marketplace/community-operators-kdhdx"
Jan 21 07:01:29 crc kubenswrapper[4893]: I0121 07:01:29.855232 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d6e0099-366c-4a80-9911-88b9a1ac3224-utilities\") pod \"certified-operators-kpngd\" (UID: \"5d6e0099-366c-4a80-9911-88b9a1ac3224\") " pod="openshift-marketplace/certified-operators-kpngd"
\"kubernetes.io/empty-dir/5d6e0099-366c-4a80-9911-88b9a1ac3224-catalog-content\") pod \"certified-operators-kpngd\" (UID: \"5d6e0099-366c-4a80-9911-88b9a1ac3224\") " pod="openshift-marketplace/certified-operators-kpngd" Jan 21 07:01:29 crc kubenswrapper[4893]: I0121 07:01:29.855772 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzbpk\" (UniqueName: \"kubernetes.io/projected/97ad217b-b5b4-49ff-9a11-e6e78e871f69-kube-api-access-kzbpk\") pod \"community-operators-kdhdx\" (UID: \"97ad217b-b5b4-49ff-9a11-e6e78e871f69\") " pod="openshift-marketplace/community-operators-kdhdx" Jan 21 07:01:29 crc kubenswrapper[4893]: I0121 07:01:29.855899 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97ad217b-b5b4-49ff-9a11-e6e78e871f69-catalog-content\") pod \"community-operators-kdhdx\" (UID: \"97ad217b-b5b4-49ff-9a11-e6e78e871f69\") " pod="openshift-marketplace/community-operators-kdhdx" Jan 21 07:01:29 crc kubenswrapper[4893]: I0121 07:01:29.855985 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d6e0099-366c-4a80-9911-88b9a1ac3224-utilities\") pod \"certified-operators-kpngd\" (UID: \"5d6e0099-366c-4a80-9911-88b9a1ac3224\") " pod="openshift-marketplace/certified-operators-kpngd" Jan 21 07:01:29 crc kubenswrapper[4893]: I0121 07:01:29.856205 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d6e0099-366c-4a80-9911-88b9a1ac3224-catalog-content\") pod \"certified-operators-kpngd\" (UID: \"5d6e0099-366c-4a80-9911-88b9a1ac3224\") " pod="openshift-marketplace/certified-operators-kpngd" Jan 21 07:01:29 crc kubenswrapper[4893]: I0121 07:01:29.856366 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5t666\" (UniqueName: \"kubernetes.io/projected/5d6e0099-366c-4a80-9911-88b9a1ac3224-kube-api-access-5t666\") pod \"certified-operators-kpngd\" (UID: \"5d6e0099-366c-4a80-9911-88b9a1ac3224\") " pod="openshift-marketplace/certified-operators-kpngd" Jan 21 07:01:29 crc kubenswrapper[4893]: I0121 07:01:29.856491 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97ad217b-b5b4-49ff-9a11-e6e78e871f69-catalog-content\") pod \"community-operators-kdhdx\" (UID: \"97ad217b-b5b4-49ff-9a11-e6e78e871f69\") " pod="openshift-marketplace/community-operators-kdhdx" Jan 21 07:01:29 crc kubenswrapper[4893]: I0121 07:01:29.856921 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97ad217b-b5b4-49ff-9a11-e6e78e871f69-utilities\") pod \"community-operators-kdhdx\" (UID: \"97ad217b-b5b4-49ff-9a11-e6e78e871f69\") " pod="openshift-marketplace/community-operators-kdhdx" Jan 21 07:01:29 crc kubenswrapper[4893]: I0121 07:01:29.857041 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97ad217b-b5b4-49ff-9a11-e6e78e871f69-utilities\") pod \"community-operators-kdhdx\" (UID: \"97ad217b-b5b4-49ff-9a11-e6e78e871f69\") " pod="openshift-marketplace/community-operators-kdhdx" Jan 21 07:01:29 crc kubenswrapper[4893]: I0121 07:01:29.879069 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzbpk\" (UniqueName: 
\"kubernetes.io/projected/97ad217b-b5b4-49ff-9a11-e6e78e871f69-kube-api-access-kzbpk\") pod \"community-operators-kdhdx\" (UID: \"97ad217b-b5b4-49ff-9a11-e6e78e871f69\") " pod="openshift-marketplace/community-operators-kdhdx" Jan 21 07:01:29 crc kubenswrapper[4893]: I0121 07:01:29.879408 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5t666\" (UniqueName: \"kubernetes.io/projected/5d6e0099-366c-4a80-9911-88b9a1ac3224-kube-api-access-5t666\") pod \"certified-operators-kpngd\" (UID: \"5d6e0099-366c-4a80-9911-88b9a1ac3224\") " pod="openshift-marketplace/certified-operators-kpngd" Jan 21 07:01:29 crc kubenswrapper[4893]: I0121 07:01:29.928915 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kpngd" Jan 21 07:01:30 crc kubenswrapper[4893]: I0121 07:01:30.002601 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" event={"ID":"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a","Type":"ContainerStarted","Data":"c423587255df35151734438b4bde73c48010b8d5f29a57fe10e74184eadb881f"} Jan 21 07:01:30 crc kubenswrapper[4893]: I0121 07:01:30.044528 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kdhdx" Jan 21 07:01:30 crc kubenswrapper[4893]: I0121 07:01:30.365045 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kpngd"] Jan 21 07:01:30 crc kubenswrapper[4893]: W0121 07:01:30.369263 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d6e0099_366c_4a80_9911_88b9a1ac3224.slice/crio-1f62285593ae7cfbfa0d633fde0763913690be30cbd91866982392d76f607fe8 WatchSource:0}: Error finding container 1f62285593ae7cfbfa0d633fde0763913690be30cbd91866982392d76f607fe8: Status 404 returned error can't find the container with id 1f62285593ae7cfbfa0d633fde0763913690be30cbd91866982392d76f607fe8 Jan 21 07:01:30 crc kubenswrapper[4893]: I0121 07:01:30.728155 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kdhdx"] Jan 21 07:01:30 crc kubenswrapper[4893]: W0121 07:01:30.836036 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97ad217b_b5b4_49ff_9a11_e6e78e871f69.slice/crio-7720810d20dee1fcea8cd6e44f26623efe9747b0b0eb2d512fd8973ea81fe8ec WatchSource:0}: Error finding container 7720810d20dee1fcea8cd6e44f26623efe9747b0b0eb2d512fd8973ea81fe8ec: Status 404 returned error can't find the container with id 7720810d20dee1fcea8cd6e44f26623efe9747b0b0eb2d512fd8973ea81fe8ec Jan 21 07:01:30 crc kubenswrapper[4893]: I0121 07:01:30.862215 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-xjz9j" Jan 21 07:01:30 crc kubenswrapper[4893]: I0121 07:01:30.931084 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-tz8g4"] Jan 21 07:01:31 crc kubenswrapper[4893]: I0121 07:01:31.113331 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2x6tf" event={"ID":"939a64aa-242b-4e64-8d78-48770fb3063d","Type":"ContainerStarted","Data":"02d9a25bc26082540b84c17007e068c82af93872eea13d3a38425d25e24b6d9e"} Jan 21 07:01:31 crc kubenswrapper[4893]: I0121 07:01:31.117429 4893 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kdhdx" event={"ID":"97ad217b-b5b4-49ff-9a11-e6e78e871f69","Type":"ContainerStarted","Data":"7720810d20dee1fcea8cd6e44f26623efe9747b0b0eb2d512fd8973ea81fe8ec"} Jan 21 07:01:31 crc kubenswrapper[4893]: I0121 07:01:31.118637 4893 generic.go:334] "Generic (PLEG): container finished" podID="5d6e0099-366c-4a80-9911-88b9a1ac3224" containerID="6eb8762924145a3eb6ad5367f3f9ce5ba37af6e55c78c986713365772fe4d913" exitCode=0 Jan 21 07:01:31 crc kubenswrapper[4893]: I0121 07:01:31.118778 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kpngd" event={"ID":"5d6e0099-366c-4a80-9911-88b9a1ac3224","Type":"ContainerDied","Data":"6eb8762924145a3eb6ad5367f3f9ce5ba37af6e55c78c986713365772fe4d913"} Jan 21 07:01:31 crc kubenswrapper[4893]: I0121 07:01:31.118800 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kpngd" event={"ID":"5d6e0099-366c-4a80-9911-88b9a1ac3224","Type":"ContainerStarted","Data":"1f62285593ae7cfbfa0d633fde0763913690be30cbd91866982392d76f607fe8"} Jan 21 07:01:31 crc kubenswrapper[4893]: I0121 07:01:31.135857 4893 generic.go:334] "Generic (PLEG): container finished" podID="bd1b9df0-d8a3-4418-9d7d-39413613fbfc" containerID="efe057118fa3f0c12da00a5944157e0bfcf2189ffdfb6505ea792af4973c1930" exitCode=0 Jan 21 07:01:31 crc kubenswrapper[4893]: I0121 07:01:31.151768 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-78fbz" event={"ID":"bd1b9df0-d8a3-4418-9d7d-39413613fbfc","Type":"ContainerDied","Data":"efe057118fa3f0c12da00a5944157e0bfcf2189ffdfb6505ea792af4973c1930"} Jan 21 07:01:32 crc kubenswrapper[4893]: I0121 07:01:32.144710 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-78fbz" event={"ID":"bd1b9df0-d8a3-4418-9d7d-39413613fbfc","Type":"ContainerStarted","Data":"80d1c00e3b8510697e6cc04166ac083840c673509a6c40e937d2268eea78c888"} Jan 21 07:01:32 crc kubenswrapper[4893]: I0121 07:01:32.149147 4893 generic.go:334] "Generic (PLEG): container finished" podID="939a64aa-242b-4e64-8d78-48770fb3063d" containerID="02d9a25bc26082540b84c17007e068c82af93872eea13d3a38425d25e24b6d9e" exitCode=0 Jan 21 07:01:32 crc kubenswrapper[4893]: I0121 07:01:32.149251 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2x6tf" event={"ID":"939a64aa-242b-4e64-8d78-48770fb3063d","Type":"ContainerDied","Data":"02d9a25bc26082540b84c17007e068c82af93872eea13d3a38425d25e24b6d9e"} Jan 21 07:01:32 crc kubenswrapper[4893]: I0121 07:01:32.152756 4893 generic.go:334] "Generic (PLEG): container finished" podID="97ad217b-b5b4-49ff-9a11-e6e78e871f69" containerID="af760f2441afb225a45de98ee16fde47923a4e18e9aa6ed5176cf70a04ad9d7b" exitCode=0 Jan 21 07:01:32 crc kubenswrapper[4893]: I0121 07:01:32.152805 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kdhdx" event={"ID":"97ad217b-b5b4-49ff-9a11-e6e78e871f69","Type":"ContainerDied","Data":"af760f2441afb225a45de98ee16fde47923a4e18e9aa6ed5176cf70a04ad9d7b"} Jan 21 07:01:32 crc kubenswrapper[4893]: I0121 07:01:32.189199 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-78fbz" podStartSLOduration=2.355586653 podStartE2EDuration="5.189177281s" podCreationTimestamp="2026-01-21 07:01:27 +0000 UTC" firstStartedPulling="2026-01-21 
Jan 21 07:01:33 crc kubenswrapper[4893]: I0121 07:01:33.161249 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2x6tf" event={"ID":"939a64aa-242b-4e64-8d78-48770fb3063d","Type":"ContainerStarted","Data":"cd1e05dd24a60785df50d60141953d3ee145a84099d5bbcc2a9ebecb352c771e"}
Jan 21 07:01:33 crc kubenswrapper[4893]: I0121 07:01:33.163822 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kpngd" event={"ID":"5d6e0099-366c-4a80-9911-88b9a1ac3224","Type":"ContainerStarted","Data":"0ff0b01a12a1dd1e1900c98730c79d53e8dd0ac844ad23b35e437a865684fd20"}
Jan 21 07:01:33 crc kubenswrapper[4893]: I0121 07:01:33.184555 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2x6tf" podStartSLOduration=2.581051158 podStartE2EDuration="6.184538157s" podCreationTimestamp="2026-01-21 07:01:27 +0000 UTC" firstStartedPulling="2026-01-21 07:01:28.981747221 +0000 UTC m=+430.212093123" lastFinishedPulling="2026-01-21 07:01:32.58523422 +0000 UTC m=+433.815580122" observedRunningTime="2026-01-21 07:01:33.182898003 +0000 UTC m=+434.413243915" watchObservedRunningTime="2026-01-21 07:01:33.184538157 +0000 UTC m=+434.414884059"
Jan 21 07:01:34 crc kubenswrapper[4893]: I0121 07:01:34.172275 4893 generic.go:334] "Generic (PLEG): container finished" podID="97ad217b-b5b4-49ff-9a11-e6e78e871f69" containerID="4b04651dc932d066fc8db3dfb91b8ccec00253138132e5f77ae6a28650f47394" exitCode=0
Jan 21 07:01:34 crc kubenswrapper[4893]: I0121 07:01:34.172364 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kdhdx" event={"ID":"97ad217b-b5b4-49ff-9a11-e6e78e871f69","Type":"ContainerDied","Data":"4b04651dc932d066fc8db3dfb91b8ccec00253138132e5f77ae6a28650f47394"}
Jan 21 07:01:34 crc kubenswrapper[4893]: I0121 07:01:34.177029 4893 generic.go:334] "Generic (PLEG): container finished" podID="5d6e0099-366c-4a80-9911-88b9a1ac3224" containerID="0ff0b01a12a1dd1e1900c98730c79d53e8dd0ac844ad23b35e437a865684fd20" exitCode=0
Jan 21 07:01:34 crc kubenswrapper[4893]: I0121 07:01:34.177095 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kpngd" event={"ID":"5d6e0099-366c-4a80-9911-88b9a1ac3224","Type":"ContainerDied","Data":"0ff0b01a12a1dd1e1900c98730c79d53e8dd0ac844ad23b35e437a865684fd20"}
Jan 21 07:01:35 crc kubenswrapper[4893]: I0121 07:01:35.094827 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6fc6b99b96-dg5hb"]
Jan 21 07:01:35 crc kubenswrapper[4893]: I0121 07:01:35.095404 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6fc6b99b96-dg5hb" podUID="b5557d7d-8cbe-4371-a58f-ebee3d46b285" containerName="controller-manager" containerID="cri-o://eaf447eb5962e778c2f0f757b4fca688c156ddacb2002300a248d59a9b99e3e3" gracePeriod=30
Jan 21 07:01:35 crc kubenswrapper[4893]: I0121 07:01:35.576125 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6fc6b99b96-dg5hb"
Need to start a new one" pod="openshift-controller-manager/controller-manager-6fc6b99b96-dg5hb" Jan 21 07:01:35 crc kubenswrapper[4893]: I0121 07:01:35.736078 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5557d7d-8cbe-4371-a58f-ebee3d46b285-config\") pod \"b5557d7d-8cbe-4371-a58f-ebee3d46b285\" (UID: \"b5557d7d-8cbe-4371-a58f-ebee3d46b285\") " Jan 21 07:01:35 crc kubenswrapper[4893]: I0121 07:01:35.736145 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7zsd\" (UniqueName: \"kubernetes.io/projected/b5557d7d-8cbe-4371-a58f-ebee3d46b285-kube-api-access-v7zsd\") pod \"b5557d7d-8cbe-4371-a58f-ebee3d46b285\" (UID: \"b5557d7d-8cbe-4371-a58f-ebee3d46b285\") " Jan 21 07:01:35 crc kubenswrapper[4893]: I0121 07:01:35.736231 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b5557d7d-8cbe-4371-a58f-ebee3d46b285-client-ca\") pod \"b5557d7d-8cbe-4371-a58f-ebee3d46b285\" (UID: \"b5557d7d-8cbe-4371-a58f-ebee3d46b285\") " Jan 21 07:01:35 crc kubenswrapper[4893]: I0121 07:01:35.736269 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5557d7d-8cbe-4371-a58f-ebee3d46b285-serving-cert\") pod \"b5557d7d-8cbe-4371-a58f-ebee3d46b285\" (UID: \"b5557d7d-8cbe-4371-a58f-ebee3d46b285\") " Jan 21 07:01:35 crc kubenswrapper[4893]: I0121 07:01:35.736310 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b5557d7d-8cbe-4371-a58f-ebee3d46b285-proxy-ca-bundles\") pod \"b5557d7d-8cbe-4371-a58f-ebee3d46b285\" (UID: \"b5557d7d-8cbe-4371-a58f-ebee3d46b285\") " Jan 21 07:01:35 crc kubenswrapper[4893]: I0121 07:01:35.737220 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5557d7d-8cbe-4371-a58f-ebee3d46b285-client-ca" (OuterVolumeSpecName: "client-ca") pod "b5557d7d-8cbe-4371-a58f-ebee3d46b285" (UID: "b5557d7d-8cbe-4371-a58f-ebee3d46b285"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:01:35 crc kubenswrapper[4893]: I0121 07:01:35.737238 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5557d7d-8cbe-4371-a58f-ebee3d46b285-config" (OuterVolumeSpecName: "config") pod "b5557d7d-8cbe-4371-a58f-ebee3d46b285" (UID: "b5557d7d-8cbe-4371-a58f-ebee3d46b285"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:01:35 crc kubenswrapper[4893]: I0121 07:01:35.737837 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5557d7d-8cbe-4371-a58f-ebee3d46b285-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b5557d7d-8cbe-4371-a58f-ebee3d46b285" (UID: "b5557d7d-8cbe-4371-a58f-ebee3d46b285"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:01:35 crc kubenswrapper[4893]: I0121 07:01:35.741959 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5557d7d-8cbe-4371-a58f-ebee3d46b285-kube-api-access-v7zsd" (OuterVolumeSpecName: "kube-api-access-v7zsd") pod "b5557d7d-8cbe-4371-a58f-ebee3d46b285" (UID: "b5557d7d-8cbe-4371-a58f-ebee3d46b285"). InnerVolumeSpecName "kube-api-access-v7zsd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:01:35 crc kubenswrapper[4893]: I0121 07:01:35.742639 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5557d7d-8cbe-4371-a58f-ebee3d46b285-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b5557d7d-8cbe-4371-a58f-ebee3d46b285" (UID: "b5557d7d-8cbe-4371-a58f-ebee3d46b285"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:01:35 crc kubenswrapper[4893]: I0121 07:01:35.837442 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5557d7d-8cbe-4371-a58f-ebee3d46b285-config\") on node \"crc\" DevicePath \"\"" Jan 21 07:01:35 crc kubenswrapper[4893]: I0121 07:01:35.837481 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7zsd\" (UniqueName: \"kubernetes.io/projected/b5557d7d-8cbe-4371-a58f-ebee3d46b285-kube-api-access-v7zsd\") on node \"crc\" DevicePath \"\"" Jan 21 07:01:35 crc kubenswrapper[4893]: I0121 07:01:35.837496 4893 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b5557d7d-8cbe-4371-a58f-ebee3d46b285-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 07:01:35 crc kubenswrapper[4893]: I0121 07:01:35.837508 4893 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5557d7d-8cbe-4371-a58f-ebee3d46b285-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 07:01:35 crc kubenswrapper[4893]: I0121 07:01:35.837519 4893 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b5557d7d-8cbe-4371-a58f-ebee3d46b285-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 07:01:36 crc kubenswrapper[4893]: I0121 07:01:36.203775 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kdhdx" event={"ID":"97ad217b-b5b4-49ff-9a11-e6e78e871f69","Type":"ContainerStarted","Data":"1aef5a095e9d954cb8cd1baa9e93bddfa62bc16038bd1ae43dd71b3fae2984f7"} Jan 21 07:01:36 crc kubenswrapper[4893]: I0121 07:01:36.210032 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kpngd" event={"ID":"5d6e0099-366c-4a80-9911-88b9a1ac3224","Type":"ContainerStarted","Data":"ccfa8a7775cf6a07eae6dc57039d952f58bb5174b4969be53b32aeda42edaf59"} Jan 21 07:01:36 crc kubenswrapper[4893]: I0121 07:01:36.212072 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6fc6b99b96-dg5hb" Jan 21 07:01:36 crc kubenswrapper[4893]: I0121 07:01:36.212176 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6fc6b99b96-dg5hb" event={"ID":"b5557d7d-8cbe-4371-a58f-ebee3d46b285","Type":"ContainerDied","Data":"eaf447eb5962e778c2f0f757b4fca688c156ddacb2002300a248d59a9b99e3e3"} Jan 21 07:01:36 crc kubenswrapper[4893]: I0121 07:01:36.212258 4893 scope.go:117] "RemoveContainer" containerID="eaf447eb5962e778c2f0f757b4fca688c156ddacb2002300a248d59a9b99e3e3" Jan 21 07:01:36 crc kubenswrapper[4893]: I0121 07:01:36.211970 4893 generic.go:334] "Generic (PLEG): container finished" podID="b5557d7d-8cbe-4371-a58f-ebee3d46b285" containerID="eaf447eb5962e778c2f0f757b4fca688c156ddacb2002300a248d59a9b99e3e3" exitCode=0 Jan 21 07:01:36 crc kubenswrapper[4893]: I0121 07:01:36.219289 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6fc6b99b96-dg5hb" event={"ID":"b5557d7d-8cbe-4371-a58f-ebee3d46b285","Type":"ContainerDied","Data":"bf14152d7f224a0b39d5c553ab978b8eaf65cd1841d75d80cf84745ba5fcd7ee"} Jan 21 07:01:36 crc kubenswrapper[4893]: I0121 07:01:36.228602 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kdhdx" podStartSLOduration=4.010671936 podStartE2EDuration="7.228581957s" podCreationTimestamp="2026-01-21 07:01:29 +0000 UTC" firstStartedPulling="2026-01-21 07:01:32.154590254 +0000 UTC m=+433.384936156" lastFinishedPulling="2026-01-21 07:01:35.372500265 +0000 UTC m=+436.602846177" observedRunningTime="2026-01-21 07:01:36.227907909 +0000 UTC m=+437.458253811" watchObservedRunningTime="2026-01-21 07:01:36.228581957 +0000 UTC m=+437.458927859" Jan 21 07:01:36 crc kubenswrapper[4893]: I0121 07:01:36.232891 4893 scope.go:117] "RemoveContainer" containerID="eaf447eb5962e778c2f0f757b4fca688c156ddacb2002300a248d59a9b99e3e3" Jan 21 07:01:36 crc kubenswrapper[4893]: E0121 07:01:36.235368 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eaf447eb5962e778c2f0f757b4fca688c156ddacb2002300a248d59a9b99e3e3\": container with ID starting with eaf447eb5962e778c2f0f757b4fca688c156ddacb2002300a248d59a9b99e3e3 not found: ID does not exist" containerID="eaf447eb5962e778c2f0f757b4fca688c156ddacb2002300a248d59a9b99e3e3" Jan 21 07:01:36 crc kubenswrapper[4893]: I0121 07:01:36.235519 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaf447eb5962e778c2f0f757b4fca688c156ddacb2002300a248d59a9b99e3e3"} err="failed to get container status \"eaf447eb5962e778c2f0f757b4fca688c156ddacb2002300a248d59a9b99e3e3\": rpc error: code = NotFound desc = could not find container \"eaf447eb5962e778c2f0f757b4fca688c156ddacb2002300a248d59a9b99e3e3\": container with ID starting with eaf447eb5962e778c2f0f757b4fca688c156ddacb2002300a248d59a9b99e3e3 not found: ID does not exist" Jan 21 07:01:36 crc kubenswrapper[4893]: I0121 07:01:36.259167 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kpngd" podStartSLOduration=3.224619512 podStartE2EDuration="7.259143946s" podCreationTimestamp="2026-01-21 07:01:29 +0000 UTC" firstStartedPulling="2026-01-21 07:01:31.119922525 +0000 UTC m=+432.350268427" lastFinishedPulling="2026-01-21 07:01:35.154446959 +0000 UTC m=+436.384792861" 
observedRunningTime="2026-01-21 07:01:36.255014975 +0000 UTC m=+437.485360877" watchObservedRunningTime="2026-01-21 07:01:36.259143946 +0000 UTC m=+437.489489858" Jan 21 07:01:36 crc kubenswrapper[4893]: I0121 07:01:36.272397 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-84c79f497d-vnrlz"] Jan 21 07:01:36 crc kubenswrapper[4893]: E0121 07:01:36.272806 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5557d7d-8cbe-4371-a58f-ebee3d46b285" containerName="controller-manager" Jan 21 07:01:36 crc kubenswrapper[4893]: I0121 07:01:36.272836 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5557d7d-8cbe-4371-a58f-ebee3d46b285" containerName="controller-manager" Jan 21 07:01:36 crc kubenswrapper[4893]: I0121 07:01:36.272973 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5557d7d-8cbe-4371-a58f-ebee3d46b285" containerName="controller-manager" Jan 21 07:01:36 crc kubenswrapper[4893]: I0121 07:01:36.273664 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-84c79f497d-vnrlz" Jan 21 07:01:36 crc kubenswrapper[4893]: I0121 07:01:36.276110 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 21 07:01:36 crc kubenswrapper[4893]: I0121 07:01:36.276368 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 07:01:36 crc kubenswrapper[4893]: I0121 07:01:36.276180 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 07:01:36 crc kubenswrapper[4893]: I0121 07:01:36.276236 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 07:01:36 crc kubenswrapper[4893]: I0121 07:01:36.277112 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6fc6b99b96-dg5hb"] Jan 21 07:01:36 crc kubenswrapper[4893]: I0121 07:01:36.277215 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 07:01:36 crc kubenswrapper[4893]: I0121 07:01:36.288956 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 07:01:36 crc kubenswrapper[4893]: I0121 07:01:36.289747 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6fc6b99b96-dg5hb"] Jan 21 07:01:36 crc kubenswrapper[4893]: I0121 07:01:36.300007 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 07:01:36 crc kubenswrapper[4893]: I0121 07:01:36.301841 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-84c79f497d-vnrlz"] Jan 21 07:01:36 crc kubenswrapper[4893]: I0121 07:01:36.444132 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82841d0f-5677-4c37-8567-32c4543c03c2-config\") pod \"controller-manager-84c79f497d-vnrlz\" (UID: \"82841d0f-5677-4c37-8567-32c4543c03c2\") " pod="openshift-controller-manager/controller-manager-84c79f497d-vnrlz" Jan 21 07:01:36 crc kubenswrapper[4893]: I0121 07:01:36.444212 4893 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/82841d0f-5677-4c37-8567-32c4543c03c2-proxy-ca-bundles\") pod \"controller-manager-84c79f497d-vnrlz\" (UID: \"82841d0f-5677-4c37-8567-32c4543c03c2\") " pod="openshift-controller-manager/controller-manager-84c79f497d-vnrlz" Jan 21 07:01:36 crc kubenswrapper[4893]: I0121 07:01:36.444271 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/82841d0f-5677-4c37-8567-32c4543c03c2-client-ca\") pod \"controller-manager-84c79f497d-vnrlz\" (UID: \"82841d0f-5677-4c37-8567-32c4543c03c2\") " pod="openshift-controller-manager/controller-manager-84c79f497d-vnrlz" Jan 21 07:01:36 crc kubenswrapper[4893]: I0121 07:01:36.444363 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4r7j\" (UniqueName: \"kubernetes.io/projected/82841d0f-5677-4c37-8567-32c4543c03c2-kube-api-access-n4r7j\") pod \"controller-manager-84c79f497d-vnrlz\" (UID: \"82841d0f-5677-4c37-8567-32c4543c03c2\") " pod="openshift-controller-manager/controller-manager-84c79f497d-vnrlz" Jan 21 07:01:36 crc kubenswrapper[4893]: I0121 07:01:36.444393 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82841d0f-5677-4c37-8567-32c4543c03c2-serving-cert\") pod \"controller-manager-84c79f497d-vnrlz\" (UID: \"82841d0f-5677-4c37-8567-32c4543c03c2\") " pod="openshift-controller-manager/controller-manager-84c79f497d-vnrlz" Jan 21 07:01:36 crc kubenswrapper[4893]: I0121 07:01:36.545202 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4r7j\" (UniqueName: \"kubernetes.io/projected/82841d0f-5677-4c37-8567-32c4543c03c2-kube-api-access-n4r7j\") pod \"controller-manager-84c79f497d-vnrlz\" (UID: \"82841d0f-5677-4c37-8567-32c4543c03c2\") " pod="openshift-controller-manager/controller-manager-84c79f497d-vnrlz" Jan 21 07:01:36 crc kubenswrapper[4893]: I0121 07:01:36.545268 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82841d0f-5677-4c37-8567-32c4543c03c2-serving-cert\") pod \"controller-manager-84c79f497d-vnrlz\" (UID: \"82841d0f-5677-4c37-8567-32c4543c03c2\") " pod="openshift-controller-manager/controller-manager-84c79f497d-vnrlz" Jan 21 07:01:36 crc kubenswrapper[4893]: I0121 07:01:36.545304 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82841d0f-5677-4c37-8567-32c4543c03c2-config\") pod \"controller-manager-84c79f497d-vnrlz\" (UID: \"82841d0f-5677-4c37-8567-32c4543c03c2\") " pod="openshift-controller-manager/controller-manager-84c79f497d-vnrlz" Jan 21 07:01:36 crc kubenswrapper[4893]: I0121 07:01:36.545332 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/82841d0f-5677-4c37-8567-32c4543c03c2-proxy-ca-bundles\") pod \"controller-manager-84c79f497d-vnrlz\" (UID: \"82841d0f-5677-4c37-8567-32c4543c03c2\") " pod="openshift-controller-manager/controller-manager-84c79f497d-vnrlz" Jan 21 07:01:36 crc kubenswrapper[4893]: I0121 07:01:36.545392 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/82841d0f-5677-4c37-8567-32c4543c03c2-client-ca\") pod \"controller-manager-84c79f497d-vnrlz\" (UID: \"82841d0f-5677-4c37-8567-32c4543c03c2\") " pod="openshift-controller-manager/controller-manager-84c79f497d-vnrlz" Jan 21 07:01:36 crc kubenswrapper[4893]: I0121 07:01:36.546599 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/82841d0f-5677-4c37-8567-32c4543c03c2-client-ca\") pod \"controller-manager-84c79f497d-vnrlz\" (UID: \"82841d0f-5677-4c37-8567-32c4543c03c2\") " pod="openshift-controller-manager/controller-manager-84c79f497d-vnrlz" Jan 21 07:01:36 crc kubenswrapper[4893]: I0121 07:01:36.546786 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/82841d0f-5677-4c37-8567-32c4543c03c2-proxy-ca-bundles\") pod \"controller-manager-84c79f497d-vnrlz\" (UID: \"82841d0f-5677-4c37-8567-32c4543c03c2\") " pod="openshift-controller-manager/controller-manager-84c79f497d-vnrlz" Jan 21 07:01:36 crc kubenswrapper[4893]: I0121 07:01:36.546852 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82841d0f-5677-4c37-8567-32c4543c03c2-config\") pod \"controller-manager-84c79f497d-vnrlz\" (UID: \"82841d0f-5677-4c37-8567-32c4543c03c2\") " pod="openshift-controller-manager/controller-manager-84c79f497d-vnrlz" Jan 21 07:01:36 crc kubenswrapper[4893]: I0121 07:01:36.550426 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82841d0f-5677-4c37-8567-32c4543c03c2-serving-cert\") pod \"controller-manager-84c79f497d-vnrlz\" (UID: \"82841d0f-5677-4c37-8567-32c4543c03c2\") " pod="openshift-controller-manager/controller-manager-84c79f497d-vnrlz" Jan 21 07:01:36 crc kubenswrapper[4893]: I0121 07:01:36.566747 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4r7j\" (UniqueName: \"kubernetes.io/projected/82841d0f-5677-4c37-8567-32c4543c03c2-kube-api-access-n4r7j\") pod \"controller-manager-84c79f497d-vnrlz\" (UID: \"82841d0f-5677-4c37-8567-32c4543c03c2\") " pod="openshift-controller-manager/controller-manager-84c79f497d-vnrlz" Jan 21 07:01:36 crc kubenswrapper[4893]: I0121 07:01:36.604952 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-84c79f497d-vnrlz" Jan 21 07:01:36 crc kubenswrapper[4893]: I0121 07:01:36.838115 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-84c79f497d-vnrlz"] Jan 21 07:01:37 crc kubenswrapper[4893]: I0121 07:01:37.228309 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84c79f497d-vnrlz" event={"ID":"82841d0f-5677-4c37-8567-32c4543c03c2","Type":"ContainerStarted","Data":"341f0fc8f52bcbbd4b968cf1e24e0ab9f26564fe7226c6f400b46346ac9dbd95"} Jan 21 07:01:37 crc kubenswrapper[4893]: I0121 07:01:37.229427 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84c79f497d-vnrlz" event={"ID":"82841d0f-5677-4c37-8567-32c4543c03c2","Type":"ContainerStarted","Data":"24d84953b6c9244ce9e58a1065d9a7cb6035d838f499746a8528f3a9ce212308"} Jan 21 07:01:37 crc kubenswrapper[4893]: I0121 07:01:37.229548 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-84c79f497d-vnrlz" Jan 21 07:01:37 crc kubenswrapper[4893]: I0121 07:01:37.234744 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-84c79f497d-vnrlz" Jan 21 07:01:37 crc kubenswrapper[4893]: I0121 07:01:37.251615 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-84c79f497d-vnrlz" podStartSLOduration=2.251591324 podStartE2EDuration="2.251591324s" podCreationTimestamp="2026-01-21 07:01:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:01:37.251316646 +0000 UTC m=+438.481662568" watchObservedRunningTime="2026-01-21 07:01:37.251591324 +0000 UTC m=+438.481937236" Jan 21 07:01:37 crc kubenswrapper[4893]: I0121 07:01:37.576812 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-78fbz" Jan 21 07:01:37 crc kubenswrapper[4893]: I0121 07:01:37.577170 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-78fbz" Jan 21 07:01:37 crc kubenswrapper[4893]: I0121 07:01:37.592534 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5557d7d-8cbe-4371-a58f-ebee3d46b285" path="/var/lib/kubelet/pods/b5557d7d-8cbe-4371-a58f-ebee3d46b285/volumes" Jan 21 07:01:37 crc kubenswrapper[4893]: I0121 07:01:37.639180 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2x6tf" Jan 21 07:01:37 crc kubenswrapper[4893]: I0121 07:01:37.640529 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2x6tf" Jan 21 07:01:37 crc kubenswrapper[4893]: I0121 07:01:37.645760 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-78fbz" Jan 21 07:01:38 crc kubenswrapper[4893]: I0121 07:01:38.275105 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-78fbz" Jan 21 07:01:38 crc kubenswrapper[4893]: I0121 07:01:38.678742 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2x6tf" 
podUID="939a64aa-242b-4e64-8d78-48770fb3063d" containerName="registry-server" probeResult="failure" output=< Jan 21 07:01:38 crc kubenswrapper[4893]: timeout: failed to connect service ":50051" within 1s Jan 21 07:01:38 crc kubenswrapper[4893]: > Jan 21 07:01:39 crc kubenswrapper[4893]: I0121 07:01:39.929466 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kpngd" Jan 21 07:01:39 crc kubenswrapper[4893]: I0121 07:01:39.930759 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kpngd" Jan 21 07:01:39 crc kubenswrapper[4893]: I0121 07:01:39.975334 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kpngd" Jan 21 07:01:40 crc kubenswrapper[4893]: I0121 07:01:40.046217 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kdhdx" Jan 21 07:01:40 crc kubenswrapper[4893]: I0121 07:01:40.046263 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kdhdx" Jan 21 07:01:40 crc kubenswrapper[4893]: I0121 07:01:40.137488 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kdhdx" Jan 21 07:01:40 crc kubenswrapper[4893]: I0121 07:01:40.292968 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kpngd" Jan 21 07:01:47 crc kubenswrapper[4893]: I0121 07:01:47.679985 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2x6tf" Jan 21 07:01:47 crc kubenswrapper[4893]: I0121 07:01:47.720857 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2x6tf" Jan 21 07:01:50 crc kubenswrapper[4893]: I0121 07:01:50.083345 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kdhdx" Jan 21 07:01:56 crc kubenswrapper[4893]: I0121 07:01:56.085639 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" podUID="9b746e69-b4ab-4cba-8b09-7556ffc5cad9" containerName="registry" containerID="cri-o://31d0b4854664057301dbc351c8c92cb70030fff767801d2e5854c65cb929f25c" gracePeriod=30 Jan 21 07:01:56 crc kubenswrapper[4893]: I0121 07:01:56.339498 4893 generic.go:334] "Generic (PLEG): container finished" podID="9b746e69-b4ab-4cba-8b09-7556ffc5cad9" containerID="31d0b4854664057301dbc351c8c92cb70030fff767801d2e5854c65cb929f25c" exitCode=0 Jan 21 07:01:56 crc kubenswrapper[4893]: I0121 07:01:56.339565 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" event={"ID":"9b746e69-b4ab-4cba-8b09-7556ffc5cad9","Type":"ContainerDied","Data":"31d0b4854664057301dbc351c8c92cb70030fff767801d2e5854c65cb929f25c"} Jan 21 07:01:56 crc kubenswrapper[4893]: I0121 07:01:56.689377 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 07:01:56 crc kubenswrapper[4893]: I0121 07:01:56.854996 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9b746e69-b4ab-4cba-8b09-7556ffc5cad9-registry-tls\") pod \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " Jan 21 07:01:56 crc kubenswrapper[4893]: I0121 07:01:56.855078 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9b746e69-b4ab-4cba-8b09-7556ffc5cad9-ca-trust-extracted\") pod \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " Jan 21 07:01:56 crc kubenswrapper[4893]: I0121 07:01:56.855115 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9b746e69-b4ab-4cba-8b09-7556ffc5cad9-installation-pull-secrets\") pod \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " Jan 21 07:01:56 crc kubenswrapper[4893]: I0121 07:01:56.855355 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " Jan 21 07:01:56 crc kubenswrapper[4893]: I0121 07:01:56.855425 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9b746e69-b4ab-4cba-8b09-7556ffc5cad9-trusted-ca\") pod \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " Jan 21 07:01:56 crc kubenswrapper[4893]: I0121 07:01:56.855497 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9b746e69-b4ab-4cba-8b09-7556ffc5cad9-bound-sa-token\") pod \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " Jan 21 07:01:56 crc kubenswrapper[4893]: I0121 07:01:56.855525 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9b746e69-b4ab-4cba-8b09-7556ffc5cad9-registry-certificates\") pod \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " Jan 21 07:01:56 crc kubenswrapper[4893]: I0121 07:01:56.855553 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcthz\" (UniqueName: \"kubernetes.io/projected/9b746e69-b4ab-4cba-8b09-7556ffc5cad9-kube-api-access-bcthz\") pod \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\" (UID: \"9b746e69-b4ab-4cba-8b09-7556ffc5cad9\") " Jan 21 07:01:56 crc kubenswrapper[4893]: I0121 07:01:56.856548 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b746e69-b4ab-4cba-8b09-7556ffc5cad9-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9b746e69-b4ab-4cba-8b09-7556ffc5cad9" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:01:56 crc kubenswrapper[4893]: I0121 07:01:56.859051 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b746e69-b4ab-4cba-8b09-7556ffc5cad9-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9b746e69-b4ab-4cba-8b09-7556ffc5cad9" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:01:56 crc kubenswrapper[4893]: I0121 07:01:56.867382 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b746e69-b4ab-4cba-8b09-7556ffc5cad9-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9b746e69-b4ab-4cba-8b09-7556ffc5cad9" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:01:56 crc kubenswrapper[4893]: I0121 07:01:56.867870 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b746e69-b4ab-4cba-8b09-7556ffc5cad9-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9b746e69-b4ab-4cba-8b09-7556ffc5cad9" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:01:56 crc kubenswrapper[4893]: I0121 07:01:56.869645 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b746e69-b4ab-4cba-8b09-7556ffc5cad9-kube-api-access-bcthz" (OuterVolumeSpecName: "kube-api-access-bcthz") pod "9b746e69-b4ab-4cba-8b09-7556ffc5cad9" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9"). InnerVolumeSpecName "kube-api-access-bcthz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:01:56 crc kubenswrapper[4893]: I0121 07:01:56.873566 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b746e69-b4ab-4cba-8b09-7556ffc5cad9-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9b746e69-b4ab-4cba-8b09-7556ffc5cad9" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:01:56 crc kubenswrapper[4893]: I0121 07:01:56.873932 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b746e69-b4ab-4cba-8b09-7556ffc5cad9-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9b746e69-b4ab-4cba-8b09-7556ffc5cad9" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:01:56 crc kubenswrapper[4893]: I0121 07:01:56.880698 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "9b746e69-b4ab-4cba-8b09-7556ffc5cad9" (UID: "9b746e69-b4ab-4cba-8b09-7556ffc5cad9"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 21 07:01:56 crc kubenswrapper[4893]: I0121 07:01:56.957581 4893 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9b746e69-b4ab-4cba-8b09-7556ffc5cad9-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 21 07:01:56 crc kubenswrapper[4893]: I0121 07:01:56.957640 4893 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9b746e69-b4ab-4cba-8b09-7556ffc5cad9-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 21 07:01:56 crc kubenswrapper[4893]: I0121 07:01:56.957652 4893 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9b746e69-b4ab-4cba-8b09-7556ffc5cad9-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 21 07:01:56 crc kubenswrapper[4893]: I0121 07:01:56.957690 4893 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9b746e69-b4ab-4cba-8b09-7556ffc5cad9-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 07:01:56 crc kubenswrapper[4893]: I0121 07:01:56.957701 4893 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9b746e69-b4ab-4cba-8b09-7556ffc5cad9-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 07:01:56 crc kubenswrapper[4893]: I0121 07:01:56.957741 4893 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9b746e69-b4ab-4cba-8b09-7556ffc5cad9-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 21 07:01:56 crc kubenswrapper[4893]: I0121 07:01:56.957750 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcthz\" (UniqueName: \"kubernetes.io/projected/9b746e69-b4ab-4cba-8b09-7556ffc5cad9-kube-api-access-bcthz\") on node \"crc\" DevicePath \"\"" Jan 21 07:01:57 crc kubenswrapper[4893]: I0121 07:01:57.346332 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" event={"ID":"9b746e69-b4ab-4cba-8b09-7556ffc5cad9","Type":"ContainerDied","Data":"ded4e50bb566b719ab3da7fe0fc4a081cd6a7921c312b644f9eb3647ed243dfb"} Jan 21 07:01:57 crc kubenswrapper[4893]: I0121 07:01:57.346381 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-tz8g4" Jan 21 07:01:57 crc kubenswrapper[4893]: I0121 07:01:57.346420 4893 scope.go:117] "RemoveContainer" containerID="31d0b4854664057301dbc351c8c92cb70030fff767801d2e5854c65cb929f25c" Jan 21 07:01:57 crc kubenswrapper[4893]: I0121 07:01:57.376033 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-tz8g4"] Jan 21 07:01:57 crc kubenswrapper[4893]: I0121 07:01:57.380142 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-tz8g4"] Jan 21 07:01:57 crc kubenswrapper[4893]: I0121 07:01:57.592687 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b746e69-b4ab-4cba-8b09-7556ffc5cad9" path="/var/lib/kubelet/pods/9b746e69-b4ab-4cba-8b09-7556ffc5cad9/volumes" Jan 21 07:03:58 crc kubenswrapper[4893]: I0121 07:03:58.656858 4893 patch_prober.go:28] interesting pod/machine-config-daemon-hg78p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 07:03:58 crc kubenswrapper[4893]: I0121 07:03:58.657516 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 07:04:28 crc kubenswrapper[4893]: I0121 07:04:28.656578 4893 patch_prober.go:28] interesting pod/machine-config-daemon-hg78p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 07:04:28 crc kubenswrapper[4893]: I0121 07:04:28.657552 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 07:04:58 crc kubenswrapper[4893]: I0121 07:04:58.656957 4893 patch_prober.go:28] interesting pod/machine-config-daemon-hg78p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 07:04:58 crc kubenswrapper[4893]: I0121 07:04:58.657637 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 07:04:58 crc kubenswrapper[4893]: I0121 07:04:58.657760 4893 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" Jan 21 07:04:58 crc kubenswrapper[4893]: I0121 07:04:58.658755 4893 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"c423587255df35151734438b4bde73c48010b8d5f29a57fe10e74184eadb881f"} pod="openshift-machine-config-operator/machine-config-daemon-hg78p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 07:04:58 crc kubenswrapper[4893]: I0121 07:04:58.658833 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" containerID="cri-o://c423587255df35151734438b4bde73c48010b8d5f29a57fe10e74184eadb881f" gracePeriod=600 Jan 21 07:04:59 crc kubenswrapper[4893]: I0121 07:04:59.504979 4893 generic.go:334] "Generic (PLEG): container finished" podID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerID="c423587255df35151734438b4bde73c48010b8d5f29a57fe10e74184eadb881f" exitCode=0 Jan 21 07:04:59 crc kubenswrapper[4893]: I0121 07:04:59.505077 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" event={"ID":"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a","Type":"ContainerDied","Data":"c423587255df35151734438b4bde73c48010b8d5f29a57fe10e74184eadb881f"} Jan 21 07:04:59 crc kubenswrapper[4893]: I0121 07:04:59.505743 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" event={"ID":"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a","Type":"ContainerStarted","Data":"2b8b36cbe0c34c88d5b3d7c8c6f4a8601dcf5f1759572299d0f737820558f3ba"} Jan 21 07:04:59 crc kubenswrapper[4893]: I0121 07:04:59.505834 4893 scope.go:117] "RemoveContainer" containerID="8f7067b47f82d2bb0d676445d6ea974a24da36b4a8f269831103214d2d596232" Jan 21 07:06:58 crc kubenswrapper[4893]: I0121 07:06:58.657432 4893 patch_prober.go:28] interesting pod/machine-config-daemon-hg78p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 07:06:58 crc kubenswrapper[4893]: I0121 07:06:58.658073 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 07:07:24 crc kubenswrapper[4893]: I0121 07:07:24.010964 4893 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 21 07:07:28 crc kubenswrapper[4893]: I0121 07:07:28.656540 4893 patch_prober.go:28] interesting pod/machine-config-daemon-hg78p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 07:07:28 crc kubenswrapper[4893]: I0121 07:07:28.657119 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 07:07:58 crc kubenswrapper[4893]: I0121 07:07:58.657416 4893 patch_prober.go:28] interesting 
Jan 21 07:07:58 crc kubenswrapper[4893]: I0121 07:07:58.658099 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 07:07:58 crc kubenswrapper[4893]: I0121 07:07:58.658172 4893 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hg78p"
Jan 21 07:07:58 crc kubenswrapper[4893]: I0121 07:07:58.659034 4893 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2b8b36cbe0c34c88d5b3d7c8c6f4a8601dcf5f1759572299d0f737820558f3ba"} pod="openshift-machine-config-operator/machine-config-daemon-hg78p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 07:07:58 crc kubenswrapper[4893]: I0121 07:07:58.659143 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" containerID="cri-o://2b8b36cbe0c34c88d5b3d7c8c6f4a8601dcf5f1759572299d0f737820558f3ba" gracePeriod=600
Jan 21 07:07:59 crc kubenswrapper[4893]: I0121 07:07:59.641097 4893 generic.go:334] "Generic (PLEG): container finished" podID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerID="2b8b36cbe0c34c88d5b3d7c8c6f4a8601dcf5f1759572299d0f737820558f3ba" exitCode=0
Jan 21 07:07:59 crc kubenswrapper[4893]: I0121 07:07:59.641190 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" event={"ID":"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a","Type":"ContainerDied","Data":"2b8b36cbe0c34c88d5b3d7c8c6f4a8601dcf5f1759572299d0f737820558f3ba"}
Jan 21 07:07:59 crc kubenswrapper[4893]: I0121 07:07:59.641730 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" event={"ID":"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a","Type":"ContainerStarted","Data":"bea12aa0e3fb7f6eeacad68b0257846807fe6f0e84a4345e0ec5d7edb930ef7f"}
Jan 21 07:07:59 crc kubenswrapper[4893]: I0121 07:07:59.641793 4893 scope.go:117] "RemoveContainer" containerID="c423587255df35151734438b4bde73c48010b8d5f29a57fe10e74184eadb881f"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.301988 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qzsg6"]
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.303543 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" podUID="6719fb30-da06-4964-b730-09e444618d94" containerName="ovn-controller" containerID="cri-o://22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a" gracePeriod=30
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.303722 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" podUID="6719fb30-da06-4964-b730-09e444618d94" containerName="northd" containerID="cri-o://ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba" gracePeriod=30
podUID="6719fb30-da06-4964-b730-09e444618d94" containerName="northd" containerID="cri-o://ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba" gracePeriod=30 Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.304097 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" podUID="6719fb30-da06-4964-b730-09e444618d94" containerName="sbdb" containerID="cri-o://fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318" gracePeriod=30 Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.304146 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" podUID="6719fb30-da06-4964-b730-09e444618d94" containerName="nbdb" containerID="cri-o://e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b" gracePeriod=30 Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.304199 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" podUID="6719fb30-da06-4964-b730-09e444618d94" containerName="kube-rbac-proxy-node" containerID="cri-o://967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598" gracePeriod=30 Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.304235 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" podUID="6719fb30-da06-4964-b730-09e444618d94" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437" gracePeriod=30 Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.304272 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" podUID="6719fb30-da06-4964-b730-09e444618d94" containerName="ovn-acl-logging" containerID="cri-o://bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b" gracePeriod=30 Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.365506 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" podUID="6719fb30-da06-4964-b730-09e444618d94" containerName="ovnkube-controller" containerID="cri-o://7119163b616f0932c423835ba174ca55866abf7abad503517ec73241844c5f85" gracePeriod=30 Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.582371 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qzsg6_6719fb30-da06-4964-b730-09e444618d94/ovnkube-controller/3.log" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.585233 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qzsg6_6719fb30-da06-4964-b730-09e444618d94/ovn-acl-logging/0.log" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.585728 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qzsg6_6719fb30-da06-4964-b730-09e444618d94/ovn-controller/0.log" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.588060 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.598080 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-m8k4g_ecb64775-90e7-43a2-a5a8-4d73e348dcc4/kube-multus/2.log" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.598772 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-m8k4g_ecb64775-90e7-43a2-a5a8-4d73e348dcc4/kube-multus/1.log" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.598845 4893 generic.go:334] "Generic (PLEG): container finished" podID="ecb64775-90e7-43a2-a5a8-4d73e348dcc4" containerID="195c12ac6c297634c8ec3caa12286ce86474bd4ffa41f09ca2b9933123488f7c" exitCode=2 Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.598886 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-m8k4g" event={"ID":"ecb64775-90e7-43a2-a5a8-4d73e348dcc4","Type":"ContainerDied","Data":"195c12ac6c297634c8ec3caa12286ce86474bd4ffa41f09ca2b9933123488f7c"} Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.598986 4893 scope.go:117] "RemoveContainer" containerID="11d8bbd1c92382018299e790a7597f3f588b11c6465db90a876cc98e1d10d4a9" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.599488 4893 scope.go:117] "RemoveContainer" containerID="195c12ac6c297634c8ec3caa12286ce86474bd4ffa41f09ca2b9933123488f7c" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.602172 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qzsg6_6719fb30-da06-4964-b730-09e444618d94/ovnkube-controller/3.log" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.611829 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qzsg6_6719fb30-da06-4964-b730-09e444618d94/ovn-acl-logging/0.log" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.613512 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qzsg6_6719fb30-da06-4964-b730-09e444618d94/ovn-controller/0.log" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.615603 4893 generic.go:334] "Generic (PLEG): container finished" podID="6719fb30-da06-4964-b730-09e444618d94" containerID="7119163b616f0932c423835ba174ca55866abf7abad503517ec73241844c5f85" exitCode=0 Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.615643 4893 generic.go:334] "Generic (PLEG): container finished" podID="6719fb30-da06-4964-b730-09e444618d94" containerID="fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318" exitCode=0 Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.615653 4893 generic.go:334] "Generic (PLEG): container finished" podID="6719fb30-da06-4964-b730-09e444618d94" containerID="e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b" exitCode=0 Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.615662 4893 generic.go:334] "Generic (PLEG): container finished" podID="6719fb30-da06-4964-b730-09e444618d94" containerID="ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba" exitCode=0 Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.615689 4893 generic.go:334] "Generic (PLEG): container finished" podID="6719fb30-da06-4964-b730-09e444618d94" containerID="26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437" exitCode=0 Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.615698 4893 generic.go:334] "Generic (PLEG): container finished" 
podID="6719fb30-da06-4964-b730-09e444618d94" containerID="967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598" exitCode=0 Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.615708 4893 generic.go:334] "Generic (PLEG): container finished" podID="6719fb30-da06-4964-b730-09e444618d94" containerID="bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b" exitCode=143 Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.615716 4893 generic.go:334] "Generic (PLEG): container finished" podID="6719fb30-da06-4964-b730-09e444618d94" containerID="22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a" exitCode=143 Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.615744 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" event={"ID":"6719fb30-da06-4964-b730-09e444618d94","Type":"ContainerDied","Data":"7119163b616f0932c423835ba174ca55866abf7abad503517ec73241844c5f85"} Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.615779 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" event={"ID":"6719fb30-da06-4964-b730-09e444618d94","Type":"ContainerDied","Data":"fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318"} Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.615796 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" event={"ID":"6719fb30-da06-4964-b730-09e444618d94","Type":"ContainerDied","Data":"e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b"} Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.615808 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" event={"ID":"6719fb30-da06-4964-b730-09e444618d94","Type":"ContainerDied","Data":"ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba"} Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.615822 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" event={"ID":"6719fb30-da06-4964-b730-09e444618d94","Type":"ContainerDied","Data":"26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437"} Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.615833 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" event={"ID":"6719fb30-da06-4964-b730-09e444618d94","Type":"ContainerDied","Data":"967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598"} Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.615857 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7119163b616f0932c423835ba174ca55866abf7abad503517ec73241844c5f85"} Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.615879 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"70b2799a6ad8653010bec92688cf587a90a5a8bfa94c71d5151cf9ffe2ac65d7"} Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.615886 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318"} Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.615892 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b"} Jan 
21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.615900 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba"} Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.615917 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437"} Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.615929 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598"} Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.615936 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b"} Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.615947 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a"} Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.615956 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630"} Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.615968 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" event={"ID":"6719fb30-da06-4964-b730-09e444618d94","Type":"ContainerDied","Data":"bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b"} Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.615983 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7119163b616f0932c423835ba174ca55866abf7abad503517ec73241844c5f85"} Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.615993 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"70b2799a6ad8653010bec92688cf587a90a5a8bfa94c71d5151cf9ffe2ac65d7"} Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.616000 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318"} Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.616007 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b"} Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.616015 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba"} Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.616021 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437"} Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.616028 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598"} Jan 
21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.616035 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b"} Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.616041 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a"} Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.616048 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630"} Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.616058 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" event={"ID":"6719fb30-da06-4964-b730-09e444618d94","Type":"ContainerDied","Data":"22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a"} Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.616068 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7119163b616f0932c423835ba174ca55866abf7abad503517ec73241844c5f85"} Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.616075 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"70b2799a6ad8653010bec92688cf587a90a5a8bfa94c71d5151cf9ffe2ac65d7"} Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.616082 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318"} Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.616087 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b"} Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.616094 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba"} Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.616099 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437"} Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.616106 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598"} Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.616123 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b"} Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.616130 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a"} Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.616136 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630"} Jan 
21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.616150 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" event={"ID":"6719fb30-da06-4964-b730-09e444618d94","Type":"ContainerDied","Data":"fdfc30e4324f373ef418e02201a091fc892f0100545a2099c061bd374aba586a"} Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.616168 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7119163b616f0932c423835ba174ca55866abf7abad503517ec73241844c5f85"} Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.616176 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"70b2799a6ad8653010bec92688cf587a90a5a8bfa94c71d5151cf9ffe2ac65d7"} Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.616183 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318"} Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.616194 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b"} Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.616202 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba"} Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.616208 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437"} Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.616216 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598"} Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.616222 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b"} Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.616229 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a"} Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.616236 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630"} Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.616361 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qzsg6" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.660548 4893 scope.go:117] "RemoveContainer" containerID="7119163b616f0932c423835ba174ca55866abf7abad503517ec73241844c5f85" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.664551 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pj2ss"] Jan 21 07:09:51 crc kubenswrapper[4893]: E0121 07:09:51.665981 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6719fb30-da06-4964-b730-09e444618d94" containerName="ovn-acl-logging" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.666016 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="6719fb30-da06-4964-b730-09e444618d94" containerName="ovn-acl-logging" Jan 21 07:09:51 crc kubenswrapper[4893]: E0121 07:09:51.666037 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6719fb30-da06-4964-b730-09e444618d94" containerName="ovnkube-controller" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.666048 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="6719fb30-da06-4964-b730-09e444618d94" containerName="ovnkube-controller" Jan 21 07:09:51 crc kubenswrapper[4893]: E0121 07:09:51.666060 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6719fb30-da06-4964-b730-09e444618d94" containerName="kubecfg-setup" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.666069 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="6719fb30-da06-4964-b730-09e444618d94" containerName="kubecfg-setup" Jan 21 07:09:51 crc kubenswrapper[4893]: E0121 07:09:51.666078 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6719fb30-da06-4964-b730-09e444618d94" containerName="ovnkube-controller" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.666086 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="6719fb30-da06-4964-b730-09e444618d94" containerName="ovnkube-controller" Jan 21 07:09:51 crc kubenswrapper[4893]: E0121 07:09:51.666095 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6719fb30-da06-4964-b730-09e444618d94" containerName="kube-rbac-proxy-node" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.666102 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="6719fb30-da06-4964-b730-09e444618d94" containerName="kube-rbac-proxy-node" Jan 21 07:09:51 crc kubenswrapper[4893]: E0121 07:09:51.666115 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6719fb30-da06-4964-b730-09e444618d94" containerName="northd" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.666123 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="6719fb30-da06-4964-b730-09e444618d94" containerName="northd" Jan 21 07:09:51 crc kubenswrapper[4893]: E0121 07:09:51.666135 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6719fb30-da06-4964-b730-09e444618d94" containerName="kube-rbac-proxy-ovn-metrics" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.666143 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="6719fb30-da06-4964-b730-09e444618d94" containerName="kube-rbac-proxy-ovn-metrics" Jan 21 07:09:51 crc kubenswrapper[4893]: E0121 07:09:51.666155 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6719fb30-da06-4964-b730-09e444618d94" containerName="ovnkube-controller" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.666162 4893 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6719fb30-da06-4964-b730-09e444618d94" containerName="ovnkube-controller" Jan 21 07:09:51 crc kubenswrapper[4893]: E0121 07:09:51.666171 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6719fb30-da06-4964-b730-09e444618d94" containerName="ovn-controller" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.666179 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="6719fb30-da06-4964-b730-09e444618d94" containerName="ovn-controller" Jan 21 07:09:51 crc kubenswrapper[4893]: E0121 07:09:51.666189 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6719fb30-da06-4964-b730-09e444618d94" containerName="sbdb" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.666197 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="6719fb30-da06-4964-b730-09e444618d94" containerName="sbdb" Jan 21 07:09:51 crc kubenswrapper[4893]: E0121 07:09:51.666209 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6719fb30-da06-4964-b730-09e444618d94" containerName="nbdb" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.666217 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="6719fb30-da06-4964-b730-09e444618d94" containerName="nbdb" Jan 21 07:09:51 crc kubenswrapper[4893]: E0121 07:09:51.666230 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b746e69-b4ab-4cba-8b09-7556ffc5cad9" containerName="registry" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.666238 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b746e69-b4ab-4cba-8b09-7556ffc5cad9" containerName="registry" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.666362 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="6719fb30-da06-4964-b730-09e444618d94" containerName="ovnkube-controller" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.666374 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="6719fb30-da06-4964-b730-09e444618d94" containerName="ovn-controller" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.666384 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="6719fb30-da06-4964-b730-09e444618d94" containerName="sbdb" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.666393 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="6719fb30-da06-4964-b730-09e444618d94" containerName="ovnkube-controller" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.666401 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="6719fb30-da06-4964-b730-09e444618d94" containerName="ovn-acl-logging" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.666409 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="6719fb30-da06-4964-b730-09e444618d94" containerName="ovnkube-controller" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.666417 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="6719fb30-da06-4964-b730-09e444618d94" containerName="ovnkube-controller" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.666425 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="6719fb30-da06-4964-b730-09e444618d94" containerName="kube-rbac-proxy-ovn-metrics" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.666433 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="6719fb30-da06-4964-b730-09e444618d94" containerName="nbdb" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.666444 4893 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="6719fb30-da06-4964-b730-09e444618d94" containerName="kube-rbac-proxy-node" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.666455 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b746e69-b4ab-4cba-8b09-7556ffc5cad9" containerName="registry" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.666463 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="6719fb30-da06-4964-b730-09e444618d94" containerName="northd" Jan 21 07:09:51 crc kubenswrapper[4893]: E0121 07:09:51.666566 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6719fb30-da06-4964-b730-09e444618d94" containerName="ovnkube-controller" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.666578 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="6719fb30-da06-4964-b730-09e444618d94" containerName="ovnkube-controller" Jan 21 07:09:51 crc kubenswrapper[4893]: E0121 07:09:51.666594 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6719fb30-da06-4964-b730-09e444618d94" containerName="ovnkube-controller" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.666602 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="6719fb30-da06-4964-b730-09e444618d94" containerName="ovnkube-controller" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.666783 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="6719fb30-da06-4964-b730-09e444618d94" containerName="ovnkube-controller" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.686995 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.728210 4893 scope.go:117] "RemoveContainer" containerID="70b2799a6ad8653010bec92688cf587a90a5a8bfa94c71d5151cf9ffe2ac65d7" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.756038 4893 scope.go:117] "RemoveContainer" containerID="fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.758848 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-run-systemd\") pod \"6719fb30-da06-4964-b730-09e444618d94\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.758971 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-host-var-lib-cni-networks-ovn-kubernetes\") pod \"6719fb30-da06-4964-b730-09e444618d94\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.759619 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-host-slash\") pod \"6719fb30-da06-4964-b730-09e444618d94\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.759791 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-host-cni-netd\") pod \"6719fb30-da06-4964-b730-09e444618d94\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.759773 
4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "6719fb30-da06-4964-b730-09e444618d94" (UID: "6719fb30-da06-4964-b730-09e444618d94"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.759814 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-host-slash" (OuterVolumeSpecName: "host-slash") pod "6719fb30-da06-4964-b730-09e444618d94" (UID: "6719fb30-da06-4964-b730-09e444618d94"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.759828 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-etc-openvswitch\") pod \"6719fb30-da06-4964-b730-09e444618d94\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.759857 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "6719fb30-da06-4964-b730-09e444618d94" (UID: "6719fb30-da06-4964-b730-09e444618d94"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.759894 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-node-log\") pod \"6719fb30-da06-4964-b730-09e444618d94\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.759918 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-systemd-units\") pod \"6719fb30-da06-4964-b730-09e444618d94\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.759937 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-log-socket\") pod \"6719fb30-da06-4964-b730-09e444618d94\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.759916 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "6719fb30-da06-4964-b730-09e444618d94" (UID: "6719fb30-da06-4964-b730-09e444618d94"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.760004 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-log-socket" (OuterVolumeSpecName: "log-socket") pod "6719fb30-da06-4964-b730-09e444618d94" (UID: "6719fb30-da06-4964-b730-09e444618d94"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.759950 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-node-log" (OuterVolumeSpecName: "node-log") pod "6719fb30-da06-4964-b730-09e444618d94" (UID: "6719fb30-da06-4964-b730-09e444618d94"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.759961 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxcrt\" (UniqueName: \"kubernetes.io/projected/6719fb30-da06-4964-b730-09e444618d94-kube-api-access-lxcrt\") pod \"6719fb30-da06-4964-b730-09e444618d94\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.760041 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "6719fb30-da06-4964-b730-09e444618d94" (UID: "6719fb30-da06-4964-b730-09e444618d94"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.760090 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-run-openvswitch\") pod \"6719fb30-da06-4964-b730-09e444618d94\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.760126 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6719fb30-da06-4964-b730-09e444618d94-env-overrides\") pod \"6719fb30-da06-4964-b730-09e444618d94\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.760165 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6719fb30-da06-4964-b730-09e444618d94-ovn-node-metrics-cert\") pod \"6719fb30-da06-4964-b730-09e444618d94\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.760187 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "6719fb30-da06-4964-b730-09e444618d94" (UID: "6719fb30-da06-4964-b730-09e444618d94"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.760210 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-run-ovn\") pod \"6719fb30-da06-4964-b730-09e444618d94\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.760232 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-var-lib-openvswitch\") pod \"6719fb30-da06-4964-b730-09e444618d94\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.760257 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6719fb30-da06-4964-b730-09e444618d94-ovnkube-config\") pod \"6719fb30-da06-4964-b730-09e444618d94\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.760300 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6719fb30-da06-4964-b730-09e444618d94-ovnkube-script-lib\") pod \"6719fb30-da06-4964-b730-09e444618d94\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.760342 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-host-kubelet\") pod \"6719fb30-da06-4964-b730-09e444618d94\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.760371 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-host-run-ovn-kubernetes\") pod \"6719fb30-da06-4964-b730-09e444618d94\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.760421 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-host-run-netns\") pod \"6719fb30-da06-4964-b730-09e444618d94\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.760443 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-host-cni-bin\") pod \"6719fb30-da06-4964-b730-09e444618d94\" (UID: \"6719fb30-da06-4964-b730-09e444618d94\") " Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.760652 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-run-openvswitch\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.760708 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-ovnkube-script-lib\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.760742 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-ovn-node-metrics-cert\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.760780 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-host-cni-bin\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.760786 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6719fb30-da06-4964-b730-09e444618d94-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6719fb30-da06-4964-b730-09e444618d94" (UID: "6719fb30-da06-4964-b730-09e444618d94"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.760818 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-host-kubelet\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.760855 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-var-lib-openvswitch\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.760878 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-host-slash\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.760908 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-run-ovn\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.760931 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-systemd-units\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.760936 4893 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "6719fb30-da06-4964-b730-09e444618d94" (UID: "6719fb30-da06-4964-b730-09e444618d94"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.760984 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "6719fb30-da06-4964-b730-09e444618d94" (UID: "6719fb30-da06-4964-b730-09e444618d94"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.760986 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "6719fb30-da06-4964-b730-09e444618d94" (UID: "6719fb30-da06-4964-b730-09e444618d94"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.761005 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "6719fb30-da06-4964-b730-09e444618d94" (UID: "6719fb30-da06-4964-b730-09e444618d94"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.761016 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "6719fb30-da06-4964-b730-09e444618d94" (UID: "6719fb30-da06-4964-b730-09e444618d94"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.761024 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "6719fb30-da06-4964-b730-09e444618d94" (UID: "6719fb30-da06-4964-b730-09e444618d94"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.761112 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6719fb30-da06-4964-b730-09e444618d94-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6719fb30-da06-4964-b730-09e444618d94" (UID: "6719fb30-da06-4964-b730-09e444618d94"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.761186 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-ovnkube-config\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.761221 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-host-run-netns\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.761251 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-etc-openvswitch\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.761268 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-log-socket\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.761290 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-host-run-ovn-kubernetes\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.761329 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-host-cni-netd\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.761411 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-node-log\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.761486 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.761584 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-run-systemd\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.761608 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-env-overrides\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.761648 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxpz6\" (UniqueName: \"kubernetes.io/projected/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-kube-api-access-sxpz6\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.761743 4893 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6719fb30-da06-4964-b730-09e444618d94-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.761759 4893 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.761772 4893 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.761785 4893 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.761798 4893 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.761813 4893 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.761838 4893 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-host-slash\") on node \"crc\" DevicePath \"\"" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.761855 4893 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.761866 4893 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 
07:09:51.761877 4893 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-node-log\") on node \"crc\" DevicePath \"\"" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.761886 4893 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.761894 4893 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-log-socket\") on node \"crc\" DevicePath \"\"" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.761906 4893 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6719fb30-da06-4964-b730-09e444618d94-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.761914 4893 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.761924 4893 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.761934 4893 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.762393 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6719fb30-da06-4964-b730-09e444618d94-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6719fb30-da06-4964-b730-09e444618d94" (UID: "6719fb30-da06-4964-b730-09e444618d94"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.768891 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6719fb30-da06-4964-b730-09e444618d94-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6719fb30-da06-4964-b730-09e444618d94" (UID: "6719fb30-da06-4964-b730-09e444618d94"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.770134 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6719fb30-da06-4964-b730-09e444618d94-kube-api-access-lxcrt" (OuterVolumeSpecName: "kube-api-access-lxcrt") pod "6719fb30-da06-4964-b730-09e444618d94" (UID: "6719fb30-da06-4964-b730-09e444618d94"). InnerVolumeSpecName "kube-api-access-lxcrt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.773974 4893 scope.go:117] "RemoveContainer" containerID="e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.778544 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "6719fb30-da06-4964-b730-09e444618d94" (UID: "6719fb30-da06-4964-b730-09e444618d94"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.792614 4893 scope.go:117] "RemoveContainer" containerID="ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.809247 4893 scope.go:117] "RemoveContainer" containerID="26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.824396 4893 scope.go:117] "RemoveContainer" containerID="967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.844276 4893 scope.go:117] "RemoveContainer" containerID="bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.861919 4893 scope.go:117] "RemoveContainer" containerID="22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.862737 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-var-lib-openvswitch\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.862789 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-host-slash\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.862823 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-run-ovn\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.862847 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-systemd-units\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.862855 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-var-lib-openvswitch\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss" Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.862890 4893 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-ovnkube-config\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.862925 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-host-slash\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.862951 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-host-run-netns\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.862929 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-host-run-netns\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.862975 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-run-ovn\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.862995 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-etc-openvswitch\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.863005 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-systemd-units\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.863019 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-log-socket\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.863036 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-etc-openvswitch\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.863046 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-host-run-ovn-kubernetes\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.863064 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-log-socket\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.863068 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-host-cni-netd\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.863092 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-node-log\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.863114 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-host-cni-netd\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.863121 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.863143 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-node-log\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.863090 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-host-run-ovn-kubernetes\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.863178 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.863191 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-run-systemd\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.863220 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-env-overrides\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.863249 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxpz6\" (UniqueName: \"kubernetes.io/projected/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-kube-api-access-sxpz6\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.863269 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-run-systemd\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.863277 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-run-openvswitch\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.863302 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-ovnkube-script-lib\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.863331 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-ovn-node-metrics-cert\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.863361 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-host-cni-bin\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.863385 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-host-kubelet\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.863618 4893 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6719fb30-da06-4964-b730-09e444618d94-run-systemd\") on node \"crc\" DevicePath \"\""
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.863631 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxcrt\" (UniqueName: \"kubernetes.io/projected/6719fb30-da06-4964-b730-09e444618d94-kube-api-access-lxcrt\") on node \"crc\" DevicePath \"\""
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.863644 4893 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6719fb30-da06-4964-b730-09e444618d94-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.863655 4893 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6719fb30-da06-4964-b730-09e444618d94-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.863761 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-host-kubelet\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.863981 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-env-overrides\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.864025 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-ovnkube-config\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.864084 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-run-openvswitch\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.864194 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-host-cni-bin\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.864704 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-ovnkube-script-lib\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.868813 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-ovn-node-metrics-cert\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.891976 4893 scope.go:117] "RemoveContainer" containerID="9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.895930 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxpz6\" (UniqueName: \"kubernetes.io/projected/b76c7998-2150-4cdf-9b0e-6c84f2ec599c-kube-api-access-sxpz6\") pod \"ovnkube-node-pj2ss\" (UID: \"b76c7998-2150-4cdf-9b0e-6c84f2ec599c\") " pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.912522 4893 scope.go:117] "RemoveContainer" containerID="7119163b616f0932c423835ba174ca55866abf7abad503517ec73241844c5f85"
Jan 21 07:09:51 crc kubenswrapper[4893]: E0121 07:09:51.913166 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7119163b616f0932c423835ba174ca55866abf7abad503517ec73241844c5f85\": container with ID starting with 7119163b616f0932c423835ba174ca55866abf7abad503517ec73241844c5f85 not found: ID does not exist" containerID="7119163b616f0932c423835ba174ca55866abf7abad503517ec73241844c5f85"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.913215 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7119163b616f0932c423835ba174ca55866abf7abad503517ec73241844c5f85"} err="failed to get container status \"7119163b616f0932c423835ba174ca55866abf7abad503517ec73241844c5f85\": rpc error: code = NotFound desc = could not find container \"7119163b616f0932c423835ba174ca55866abf7abad503517ec73241844c5f85\": container with ID starting with 7119163b616f0932c423835ba174ca55866abf7abad503517ec73241844c5f85 not found: ID does not exist"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.913255 4893 scope.go:117] "RemoveContainer" containerID="70b2799a6ad8653010bec92688cf587a90a5a8bfa94c71d5151cf9ffe2ac65d7"
Jan 21 07:09:51 crc kubenswrapper[4893]: E0121 07:09:51.913662 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70b2799a6ad8653010bec92688cf587a90a5a8bfa94c71d5151cf9ffe2ac65d7\": container with ID starting with 70b2799a6ad8653010bec92688cf587a90a5a8bfa94c71d5151cf9ffe2ac65d7 not found: ID does not exist" containerID="70b2799a6ad8653010bec92688cf587a90a5a8bfa94c71d5151cf9ffe2ac65d7"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.913712 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70b2799a6ad8653010bec92688cf587a90a5a8bfa94c71d5151cf9ffe2ac65d7"} err="failed to get container status \"70b2799a6ad8653010bec92688cf587a90a5a8bfa94c71d5151cf9ffe2ac65d7\": rpc error: code = NotFound desc = could not find container \"70b2799a6ad8653010bec92688cf587a90a5a8bfa94c71d5151cf9ffe2ac65d7\": container with ID starting with 70b2799a6ad8653010bec92688cf587a90a5a8bfa94c71d5151cf9ffe2ac65d7 not found: ID does not exist"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.913735 4893 scope.go:117] "RemoveContainer" containerID="fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318"
Jan 21 07:09:51 crc kubenswrapper[4893]: E0121 07:09:51.914138 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318\": container with ID starting with fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318 not found: ID does not exist" containerID="fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.914178 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318"} err="failed to get container status \"fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318\": rpc error: code = NotFound desc = could not find container \"fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318\": container with ID starting with fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318 not found: ID does not exist"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.914199 4893 scope.go:117] "RemoveContainer" containerID="e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b"
Jan 21 07:09:51 crc kubenswrapper[4893]: E0121 07:09:51.914659 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b\": container with ID starting with e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b not found: ID does not exist" containerID="e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.914746 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b"} err="failed to get container status \"e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b\": rpc error: code = NotFound desc = could not find container \"e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b\": container with ID starting with e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b not found: ID does not exist"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.914787 4893 scope.go:117] "RemoveContainer" containerID="ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba"
Jan 21 07:09:51 crc kubenswrapper[4893]: E0121 07:09:51.915743 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba\": container with ID starting with ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba not found: ID does not exist" containerID="ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.915772 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba"} err="failed to get container status \"ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba\": rpc error: code = NotFound desc = could not find container \"ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba\": container with ID starting with ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba not found: ID does not exist"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.915793 4893 scope.go:117] "RemoveContainer" containerID="26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437"
Jan 21 07:09:51 crc kubenswrapper[4893]: E0121 07:09:51.916046 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437\": container with ID starting with 26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437 not found: ID does not exist" containerID="26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.916080 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437"} err="failed to get container status \"26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437\": rpc error: code = NotFound desc = could not find container \"26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437\": container with ID starting with 26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437 not found: ID does not exist"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.916106 4893 scope.go:117] "RemoveContainer" containerID="967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598"
Jan 21 07:09:51 crc kubenswrapper[4893]: E0121 07:09:51.916388 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598\": container with ID starting with 967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598 not found: ID does not exist" containerID="967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.916426 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598"} err="failed to get container status \"967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598\": rpc error: code = NotFound desc = could not find container \"967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598\": container with ID starting with 967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598 not found: ID does not exist"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.916443 4893 scope.go:117] "RemoveContainer" containerID="bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b"
Jan 21 07:09:51 crc kubenswrapper[4893]: E0121 07:09:51.916656 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b\": container with ID starting with bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b not found: ID does not exist" containerID="bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.916697 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b"} err="failed to get container status \"bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b\": rpc error: code = NotFound desc = could not find container \"bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b\": container with ID starting with bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b not found: ID does not exist"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.916714 4893 scope.go:117] "RemoveContainer" containerID="22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a"
Jan 21 07:09:51 crc kubenswrapper[4893]: E0121 07:09:51.925655 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a\": container with ID starting with 22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a not found: ID does not exist" containerID="22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.925693 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a"} err="failed to get container status \"22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a\": rpc error: code = NotFound desc = could not find container \"22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a\": container with ID starting with 22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a not found: ID does not exist"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.925709 4893 scope.go:117] "RemoveContainer" containerID="9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630"
Jan 21 07:09:51 crc kubenswrapper[4893]: E0121 07:09:51.926074 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\": container with ID starting with 9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630 not found: ID does not exist" containerID="9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.926095 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630"} err="failed to get container status \"9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\": rpc error: code = NotFound desc = could not find container \"9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\": container with ID starting with 9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630 not found: ID does not exist"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.926124 4893 scope.go:117] "RemoveContainer" containerID="7119163b616f0932c423835ba174ca55866abf7abad503517ec73241844c5f85"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.926392 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7119163b616f0932c423835ba174ca55866abf7abad503517ec73241844c5f85"} err="failed to get container status \"7119163b616f0932c423835ba174ca55866abf7abad503517ec73241844c5f85\": rpc error: code = NotFound desc = could not find container \"7119163b616f0932c423835ba174ca55866abf7abad503517ec73241844c5f85\": container with ID starting with 7119163b616f0932c423835ba174ca55866abf7abad503517ec73241844c5f85 not found: ID does not exist"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.926421 4893 scope.go:117] "RemoveContainer" containerID="70b2799a6ad8653010bec92688cf587a90a5a8bfa94c71d5151cf9ffe2ac65d7"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.926660 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70b2799a6ad8653010bec92688cf587a90a5a8bfa94c71d5151cf9ffe2ac65d7"} err="failed to get container status \"70b2799a6ad8653010bec92688cf587a90a5a8bfa94c71d5151cf9ffe2ac65d7\": rpc error: code = NotFound desc = could not find container \"70b2799a6ad8653010bec92688cf587a90a5a8bfa94c71d5151cf9ffe2ac65d7\": container with ID starting with 70b2799a6ad8653010bec92688cf587a90a5a8bfa94c71d5151cf9ffe2ac65d7 not found: ID does not exist"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.926708 4893 scope.go:117] "RemoveContainer" containerID="fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.926953 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318"} err="failed to get container status \"fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318\": rpc error: code = NotFound desc = could not find container \"fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318\": container with ID starting with fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318 not found: ID does not exist"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.926985 4893 scope.go:117] "RemoveContainer" containerID="e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.927322 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b"} err="failed to get container status \"e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b\": rpc error: code = NotFound desc = could not find container \"e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b\": container with ID starting with e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b not found: ID does not exist"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.927382 4893 scope.go:117] "RemoveContainer" containerID="ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.927745 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba"} err="failed to get container status \"ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba\": rpc error: code = NotFound desc = could not find container \"ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba\": container with ID starting with ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba not found: ID does not exist"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.927768 4893 scope.go:117] "RemoveContainer" containerID="26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.928027 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437"} err="failed to get container status \"26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437\": rpc error: code = NotFound desc = could not find container \"26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437\": container with ID starting with 26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437 not found: ID does not exist"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.928060 4893 scope.go:117] "RemoveContainer" containerID="967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.928383 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598"} err="failed to get container status \"967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598\": rpc error: code = NotFound desc = could not find container \"967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598\": container with ID starting with 967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598 not found: ID does not exist"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.928428 4893 scope.go:117] "RemoveContainer" containerID="bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.928786 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b"} err="failed to get container status \"bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b\": rpc error: code = NotFound desc = could not find container \"bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b\": container with ID starting with bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b not found: ID does not exist"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.928811 4893 scope.go:117] "RemoveContainer" containerID="22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.929226 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a"} err="failed to get container status \"22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a\": rpc error: code = NotFound desc = could not find container \"22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a\": container with ID starting with 22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a not found: ID does not exist"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.929253 4893 scope.go:117] "RemoveContainer" containerID="9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.929588 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630"} err="failed to get container status \"9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\": rpc error: code = NotFound desc = could not find container \"9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\": container with ID starting with 9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630 not found: ID does not exist"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.929617 4893 scope.go:117] "RemoveContainer" containerID="7119163b616f0932c423835ba174ca55866abf7abad503517ec73241844c5f85"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.929907 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7119163b616f0932c423835ba174ca55866abf7abad503517ec73241844c5f85"} err="failed to get container status \"7119163b616f0932c423835ba174ca55866abf7abad503517ec73241844c5f85\": rpc error: code = NotFound desc = could not find container \"7119163b616f0932c423835ba174ca55866abf7abad503517ec73241844c5f85\": container with ID starting with 7119163b616f0932c423835ba174ca55866abf7abad503517ec73241844c5f85 not found: ID does not exist"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.929927 4893 scope.go:117] "RemoveContainer" containerID="70b2799a6ad8653010bec92688cf587a90a5a8bfa94c71d5151cf9ffe2ac65d7"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.930179 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70b2799a6ad8653010bec92688cf587a90a5a8bfa94c71d5151cf9ffe2ac65d7"} err="failed to get container status \"70b2799a6ad8653010bec92688cf587a90a5a8bfa94c71d5151cf9ffe2ac65d7\": rpc error: code = NotFound desc = could not find container \"70b2799a6ad8653010bec92688cf587a90a5a8bfa94c71d5151cf9ffe2ac65d7\": container with ID starting with 70b2799a6ad8653010bec92688cf587a90a5a8bfa94c71d5151cf9ffe2ac65d7 not found: ID does not exist"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.930204 4893 scope.go:117] "RemoveContainer" containerID="fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.930423 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318"} err="failed to get container status \"fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318\": rpc error: code = NotFound desc = could not find container \"fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318\": container with ID starting with fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318 not found: ID does not exist"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.930442 4893 scope.go:117] "RemoveContainer" containerID="e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.930687 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b"} err="failed to get container status \"e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b\": rpc error: code = NotFound desc = could not find container \"e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b\": container with ID starting with e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b not found: ID does not exist"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.930706 4893 scope.go:117] "RemoveContainer" containerID="ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.930929 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba"} err="failed to get container status \"ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba\": rpc error: code = NotFound desc = could not find container \"ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba\": container with ID starting with ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba not found: ID does not exist"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.930946 4893 scope.go:117] "RemoveContainer" containerID="26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.931215 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437"} err="failed to get container status \"26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437\": rpc error: code = NotFound desc = could not find container \"26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437\": container with ID starting with 26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437 not found: ID does not exist"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.931234 4893 scope.go:117] "RemoveContainer" containerID="967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.931418 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598"} err="failed to get container status \"967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598\": rpc error: code = NotFound desc = could not find container \"967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598\": container with ID starting with 967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598 not found: ID does not exist"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.931434 4893 scope.go:117] "RemoveContainer" containerID="bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.931736 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b"} err="failed to get container status \"bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b\": rpc error: code = NotFound desc = could not find container \"bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b\": container with ID starting with bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b not found: ID does not exist"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.931764 4893 scope.go:117] "RemoveContainer" containerID="22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.932007 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a"} err="failed to get container status \"22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a\": rpc error: code = NotFound desc = could not find container \"22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a\": container with ID starting with 22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a not found: ID does not exist"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.932027 4893 scope.go:117] "RemoveContainer" containerID="9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.932227 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630"} err="failed to get container status \"9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\": rpc error: code = NotFound desc = could not find container \"9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\": container with ID starting with 9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630 not found: ID does not exist"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.932243 4893 scope.go:117] "RemoveContainer" containerID="7119163b616f0932c423835ba174ca55866abf7abad503517ec73241844c5f85"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.932421 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7119163b616f0932c423835ba174ca55866abf7abad503517ec73241844c5f85"} err="failed to get container status \"7119163b616f0932c423835ba174ca55866abf7abad503517ec73241844c5f85\": rpc error: code = NotFound desc = could not find container \"7119163b616f0932c423835ba174ca55866abf7abad503517ec73241844c5f85\": container with ID starting with 7119163b616f0932c423835ba174ca55866abf7abad503517ec73241844c5f85 not found: ID does not exist"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.932441 4893 scope.go:117] "RemoveContainer" containerID="70b2799a6ad8653010bec92688cf587a90a5a8bfa94c71d5151cf9ffe2ac65d7"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.932869 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70b2799a6ad8653010bec92688cf587a90a5a8bfa94c71d5151cf9ffe2ac65d7"} err="failed to get container status \"70b2799a6ad8653010bec92688cf587a90a5a8bfa94c71d5151cf9ffe2ac65d7\": rpc error: code = NotFound desc = could not find container \"70b2799a6ad8653010bec92688cf587a90a5a8bfa94c71d5151cf9ffe2ac65d7\": container with ID starting with 70b2799a6ad8653010bec92688cf587a90a5a8bfa94c71d5151cf9ffe2ac65d7 not found: ID does not exist"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.932890 4893 scope.go:117] "RemoveContainer" containerID="fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.933118 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318"} err="failed to get container status \"fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318\": rpc error: code = NotFound desc = could not find container \"fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318\": container with ID starting with fb89c84cc0e9e33f9ca53812432dae259a34be3f20896a2ad849afe9cf4eb318 not found: ID does not exist"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.933140 4893 scope.go:117] "RemoveContainer" containerID="e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.933358 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b"} err="failed to get container status \"e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b\": rpc error: code = NotFound desc = could not find container \"e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b\": container with ID starting with e42366e4121087422449e2904fb511c7a7fbb5d7faae3062c309bf334084715b not found: ID does not exist"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.933378 4893 scope.go:117] "RemoveContainer" containerID="ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.933693 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba"} err="failed to get container status \"ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba\": rpc error: code = NotFound desc = could not find container \"ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba\": container with ID starting with ee6a66e139270d624fbac38c491412ee57cedba6493ae1996899ad4a37a4e0ba not found: ID does not exist"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.933710 4893 scope.go:117] "RemoveContainer" containerID="26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.933957 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437"} err="failed to get container status \"26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437\": rpc error: code = NotFound desc = could not find container \"26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437\": container with ID starting with 26ffe5cf932c57df985cfd4e96d45d6c424f8f8c38e2d975993d6d0d4031d437 not found: ID does not exist"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.933984 4893 scope.go:117] "RemoveContainer" containerID="967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.934344 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598"} err="failed to get container status \"967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598\": rpc error: code = NotFound desc = could not find container \"967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598\": container with ID starting with 967c0374c3bb7293da92074bedc14c045d4ec7dad99c2ace59f5070693d5c598 not found: ID does not exist"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.934369 4893 scope.go:117] "RemoveContainer" containerID="bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.934626 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b"} err="failed to get container status \"bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b\": rpc error: code = NotFound desc = could not find container \"bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b\": container with ID starting with bca89354c660a806a3240b4ae2ecda31e7347a83b7fb0ff546118006feda6d8b not found: ID does not exist"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.934652 4893 scope.go:117] "RemoveContainer" containerID="22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.934871 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a"} err="failed to get container status \"22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a\": rpc error: code = NotFound desc = could not find container \"22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a\": container with ID starting with 22fc335a1dcfda0c4f216c5c12b25cc9ce856498ac9c9e8430375e14441e8b2a not found: ID does not exist"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.934888 4893 scope.go:117] "RemoveContainer" containerID="9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.936176 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630"} err="failed to get container status \"9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\": rpc error: code = NotFound desc = could not find container \"9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630\": container with ID starting with 9098ccda4c0e3352903258cb4f49e40d3e65bed54287d676b2023f64f8a2f630 not found: ID does not exist"
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.951652 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qzsg6"]
Jan 21 07:09:51 crc kubenswrapper[4893]: I0121 07:09:51.956369 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qzsg6"]
Jan 21 07:09:52 crc kubenswrapper[4893]: I0121 07:09:52.022906 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss"
Jan 21 07:09:52 crc kubenswrapper[4893]: I0121 07:09:52.627662 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-m8k4g_ecb64775-90e7-43a2-a5a8-4d73e348dcc4/kube-multus/2.log"
Jan 21 07:09:52 crc kubenswrapper[4893]: I0121 07:09:52.628453 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-m8k4g" event={"ID":"ecb64775-90e7-43a2-a5a8-4d73e348dcc4","Type":"ContainerStarted","Data":"7223b645faa66de067775bec77cd3927c4c69a3e63e354c227ab06485422790b"}
Jan 21 07:09:52 crc kubenswrapper[4893]: I0121 07:09:52.630747 4893 generic.go:334] "Generic (PLEG): container finished" podID="b76c7998-2150-4cdf-9b0e-6c84f2ec599c" containerID="cd31e46a35ffd248c086e69963dffd8d293a6c1fba2e102d90bb249b1599105e" exitCode=0
Jan 21 07:09:52 crc kubenswrapper[4893]: I0121 07:09:52.630843 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss" event={"ID":"b76c7998-2150-4cdf-9b0e-6c84f2ec599c","Type":"ContainerDied","Data":"cd31e46a35ffd248c086e69963dffd8d293a6c1fba2e102d90bb249b1599105e"}
Jan 21 07:09:52 crc kubenswrapper[4893]: I0121 07:09:52.630994 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss" event={"ID":"b76c7998-2150-4cdf-9b0e-6c84f2ec599c","Type":"ContainerStarted","Data":"91a9ce06ee7704be0247f711d21b0a8e1d5845dec3aca228bb958993273efb33"}
Jan 21 07:09:53 crc kubenswrapper[4893]: I0121 07:09:53.591854 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6719fb30-da06-4964-b730-09e444618d94" path="/var/lib/kubelet/pods/6719fb30-da06-4964-b730-09e444618d94/volumes"
Jan 21 07:09:53 crc kubenswrapper[4893]: I0121 07:09:53.642018 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss" event={"ID":"b76c7998-2150-4cdf-9b0e-6c84f2ec599c","Type":"ContainerStarted","Data":"f7deb440d6a3265efe5c6398dfeaab5297674f504fa4537b309b59ac74ead967"}
Jan 21 07:09:53 crc kubenswrapper[4893]: I0121 07:09:53.642081 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss" event={"ID":"b76c7998-2150-4cdf-9b0e-6c84f2ec599c","Type":"ContainerStarted","Data":"87d6466be9a82ffdec7ea0fb3827f2fc955e089ffccea32d28d29a2d6b1e0e26"}
Jan 21 07:09:53 crc kubenswrapper[4893]: I0121 07:09:53.642096 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss" event={"ID":"b76c7998-2150-4cdf-9b0e-6c84f2ec599c","Type":"ContainerStarted","Data":"6a75f312b6049a03e6731e47d58566ff5ca50b013c8069ce709c08e02ec9eb83"}
Jan 21 07:09:53 crc kubenswrapper[4893]: I0121 07:09:53.642108 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss" event={"ID":"b76c7998-2150-4cdf-9b0e-6c84f2ec599c","Type":"ContainerStarted","Data":"e7b7f5f8099b2aa0306ed838519a11f8921e6181158c0d266239d9e8e3e81178"}
Jan 21 07:09:53 crc kubenswrapper[4893]: I0121 07:09:53.642141 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss" event={"ID":"b76c7998-2150-4cdf-9b0e-6c84f2ec599c","Type":"ContainerStarted","Data":"1230ee450b216ffc13cd10a87ee2e8fbfe4c7c400c9b917d044a4641f67e82cc"}
Jan 21 07:09:53 crc kubenswrapper[4893]: I0121 07:09:53.642151 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss" event={"ID":"b76c7998-2150-4cdf-9b0e-6c84f2ec599c","Type":"ContainerStarted","Data":"65d2e47b97db6f805d1cf2b11fc58c88c619ac40096f94850176426de8ce7c6d"}
Jan 21 07:09:56 crc kubenswrapper[4893]: I0121 07:09:56.677270 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss" event={"ID":"b76c7998-2150-4cdf-9b0e-6c84f2ec599c","Type":"ContainerStarted","Data":"2f15c5f7f587ff7d984cc7f5bdd5fdea3d58c888d21af88acc473df31cd945ac"}
Jan 21 07:09:59 crc kubenswrapper[4893]: I0121 07:09:59.699945 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss" event={"ID":"b76c7998-2150-4cdf-9b0e-6c84f2ec599c","Type":"ContainerStarted","Data":"5be113d78c93960084ae1a559e1e70e5244956e3ff3fa4ad21f8edcf69c6ae9c"}
Jan 21 07:09:59 crc kubenswrapper[4893]: I0121 07:09:59.700539 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss"
Jan 21 07:09:59 crc kubenswrapper[4893]: I0121 07:09:59.700633 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss"
Jan 21 07:09:59 crc kubenswrapper[4893]: I0121 07:09:59.700717 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss"
Jan 21 07:09:59 crc kubenswrapper[4893]: I0121 07:09:59.731515 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss"
Jan 21 07:09:59 crc kubenswrapper[4893]: I0121 07:09:59.736250 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss" podStartSLOduration=8.736216644 podStartE2EDuration="8.736216644s" podCreationTimestamp="2026-01-21 07:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:09:59.732943441 +0000 UTC m=+940.963289343" watchObservedRunningTime="2026-01-21 07:09:59.736216644 +0000 UTC m=+940.966562546"
Jan 21 07:09:59 crc kubenswrapper[4893]: I0121 07:09:59.742183 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss"
Jan 21 07:10:02 crc kubenswrapper[4893]: I0121 07:10:02.910518 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-rvxk6"]
Jan 21 07:10:02 crc kubenswrapper[4893]: I0121 07:10:02.911920 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-rvxk6"
Jan 21 07:10:02 crc kubenswrapper[4893]: I0121 07:10:02.916633 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage"
Jan 21 07:10:02 crc kubenswrapper[4893]: I0121 07:10:02.916862 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt"
Jan 21 07:10:02 crc kubenswrapper[4893]: I0121 07:10:02.917066 4893 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-pvjjj"
Jan 21 07:10:02 crc kubenswrapper[4893]: I0121 07:10:02.917117 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt"
Jan 21 07:10:02 crc kubenswrapper[4893]: I0121 07:10:02.922875 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-rvxk6"]
Jan 21 07:10:03 crc kubenswrapper[4893]: I0121 07:10:03.030422 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/69fa4d27-753e-4b9c-af5f-aee8b4e3fbc1-node-mnt\") pod \"crc-storage-crc-rvxk6\" (UID: \"69fa4d27-753e-4b9c-af5f-aee8b4e3fbc1\") " pod="crc-storage/crc-storage-crc-rvxk6"
Jan 21 07:10:03 crc kubenswrapper[4893]: I0121 07:10:03.030473 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks9gb\" (UniqueName: \"kubernetes.io/projected/69fa4d27-753e-4b9c-af5f-aee8b4e3fbc1-kube-api-access-ks9gb\") pod \"crc-storage-crc-rvxk6\" (UID: \"69fa4d27-753e-4b9c-af5f-aee8b4e3fbc1\") " pod="crc-storage/crc-storage-crc-rvxk6"
Jan 21 07:10:03 crc kubenswrapper[4893]: I0121 07:10:03.030594 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/69fa4d27-753e-4b9c-af5f-aee8b4e3fbc1-crc-storage\") pod \"crc-storage-crc-rvxk6\" (UID: \"69fa4d27-753e-4b9c-af5f-aee8b4e3fbc1\") " pod="crc-storage/crc-storage-crc-rvxk6"
Jan 21 07:10:03 crc kubenswrapper[4893]: I0121 07:10:03.132772 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ks9gb\" (UniqueName: \"kubernetes.io/projected/69fa4d27-753e-4b9c-af5f-aee8b4e3fbc1-kube-api-access-ks9gb\") pod \"crc-storage-crc-rvxk6\" (UID: \"69fa4d27-753e-4b9c-af5f-aee8b4e3fbc1\") " pod="crc-storage/crc-storage-crc-rvxk6"
Jan 21 07:10:03 crc kubenswrapper[4893]: I0121 07:10:03.132868 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/69fa4d27-753e-4b9c-af5f-aee8b4e3fbc1-crc-storage\") pod \"crc-storage-crc-rvxk6\" (UID: \"69fa4d27-753e-4b9c-af5f-aee8b4e3fbc1\") " pod="crc-storage/crc-storage-crc-rvxk6"
Jan 21 07:10:03 crc kubenswrapper[4893]: I0121 07:10:03.133018 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/69fa4d27-753e-4b9c-af5f-aee8b4e3fbc1-node-mnt\") pod \"crc-storage-crc-rvxk6\" (UID: \"69fa4d27-753e-4b9c-af5f-aee8b4e3fbc1\") " pod="crc-storage/crc-storage-crc-rvxk6"
Jan 21 07:10:03 crc kubenswrapper[4893]: I0121 07:10:03.134089 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/69fa4d27-753e-4b9c-af5f-aee8b4e3fbc1-node-mnt\") pod \"crc-storage-crc-rvxk6\" (UID: \"69fa4d27-753e-4b9c-af5f-aee8b4e3fbc1\") " pod="crc-storage/crc-storage-crc-rvxk6"
Jan 21 07:10:03 crc kubenswrapper[4893]: I0121 07:10:03.134122 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/69fa4d27-753e-4b9c-af5f-aee8b4e3fbc1-crc-storage\") pod \"crc-storage-crc-rvxk6\" (UID: \"69fa4d27-753e-4b9c-af5f-aee8b4e3fbc1\") " pod="crc-storage/crc-storage-crc-rvxk6"
Jan 21 07:10:03 crc kubenswrapper[4893]: I0121 07:10:03.154472 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ks9gb\" (UniqueName: \"kubernetes.io/projected/69fa4d27-753e-4b9c-af5f-aee8b4e3fbc1-kube-api-access-ks9gb\") pod \"crc-storage-crc-rvxk6\" (UID: \"69fa4d27-753e-4b9c-af5f-aee8b4e3fbc1\") " pod="crc-storage/crc-storage-crc-rvxk6"
Jan 21 07:10:03 crc kubenswrapper[4893]: I0121 07:10:03.237452 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-rvxk6"
Jan 21 07:10:03 crc kubenswrapper[4893]: I0121 07:10:03.485627 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-rvxk6"]
Jan 21 07:10:03 crc kubenswrapper[4893]: W0121 07:10:03.490324 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69fa4d27_753e_4b9c_af5f_aee8b4e3fbc1.slice/crio-12f30ff33fc4e7d1ae0f4fd3b0c8d2aa57204631718772878675c82c17af9219 WatchSource:0}: Error finding container 12f30ff33fc4e7d1ae0f4fd3b0c8d2aa57204631718772878675c82c17af9219: Status 404 returned error can't find the container with id 12f30ff33fc4e7d1ae0f4fd3b0c8d2aa57204631718772878675c82c17af9219
Jan 21 07:10:03 crc kubenswrapper[4893]: I0121 07:10:03.492917 4893 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 07:10:03 crc kubenswrapper[4893]: I0121 07:10:03.751924 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-rvxk6" event={"ID":"69fa4d27-753e-4b9c-af5f-aee8b4e3fbc1","Type":"ContainerStarted","Data":"12f30ff33fc4e7d1ae0f4fd3b0c8d2aa57204631718772878675c82c17af9219"}
Jan 21 07:10:05 crc kubenswrapper[4893]: I0121 07:10:05.764312 4893 generic.go:334] "Generic (PLEG): container finished" podID="69fa4d27-753e-4b9c-af5f-aee8b4e3fbc1" containerID="4cf13fbec06a81bdc1347e2f82690ea1f88b66ae678a5d71aefa05acaf27d87d" exitCode=0
Jan 21 07:10:05 crc kubenswrapper[4893]: I0121 07:10:05.764463 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-rvxk6" event={"ID":"69fa4d27-753e-4b9c-af5f-aee8b4e3fbc1","Type":"ContainerDied","Data":"4cf13fbec06a81bdc1347e2f82690ea1f88b66ae678a5d71aefa05acaf27d87d"}
Jan 21 07:10:07 crc kubenswrapper[4893]: I0121 07:10:07.065868 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-rvxk6"
Jan 21 07:10:07 crc kubenswrapper[4893]: I0121 07:10:07.154253 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks9gb\" (UniqueName: \"kubernetes.io/projected/69fa4d27-753e-4b9c-af5f-aee8b4e3fbc1-kube-api-access-ks9gb\") pod \"69fa4d27-753e-4b9c-af5f-aee8b4e3fbc1\" (UID: \"69fa4d27-753e-4b9c-af5f-aee8b4e3fbc1\") "
Jan 21 07:10:07 crc kubenswrapper[4893]: I0121 07:10:07.154329 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/69fa4d27-753e-4b9c-af5f-aee8b4e3fbc1-node-mnt\") pod \"69fa4d27-753e-4b9c-af5f-aee8b4e3fbc1\" (UID: \"69fa4d27-753e-4b9c-af5f-aee8b4e3fbc1\") "
Jan 21 07:10:07 crc kubenswrapper[4893]: I0121 07:10:07.154388 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/69fa4d27-753e-4b9c-af5f-aee8b4e3fbc1-crc-storage\") pod \"69fa4d27-753e-4b9c-af5f-aee8b4e3fbc1\" (UID: \"69fa4d27-753e-4b9c-af5f-aee8b4e3fbc1\") "
Jan 21 07:10:07 crc kubenswrapper[4893]: I0121 07:10:07.154489 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69fa4d27-753e-4b9c-af5f-aee8b4e3fbc1-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "69fa4d27-753e-4b9c-af5f-aee8b4e3fbc1" (UID: "69fa4d27-753e-4b9c-af5f-aee8b4e3fbc1"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 07:10:07 crc kubenswrapper[4893]: I0121 07:10:07.154837 4893 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/69fa4d27-753e-4b9c-af5f-aee8b4e3fbc1-node-mnt\") on node \"crc\" DevicePath \"\""
Jan 21 07:10:07 crc kubenswrapper[4893]: I0121 07:10:07.159556 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69fa4d27-753e-4b9c-af5f-aee8b4e3fbc1-kube-api-access-ks9gb" (OuterVolumeSpecName: "kube-api-access-ks9gb") pod "69fa4d27-753e-4b9c-af5f-aee8b4e3fbc1" (UID: "69fa4d27-753e-4b9c-af5f-aee8b4e3fbc1"). InnerVolumeSpecName "kube-api-access-ks9gb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 07:10:07 crc kubenswrapper[4893]: I0121 07:10:07.173993 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69fa4d27-753e-4b9c-af5f-aee8b4e3fbc1-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "69fa4d27-753e-4b9c-af5f-aee8b4e3fbc1" (UID: "69fa4d27-753e-4b9c-af5f-aee8b4e3fbc1"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 07:10:07 crc kubenswrapper[4893]: I0121 07:10:07.255612 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ks9gb\" (UniqueName: \"kubernetes.io/projected/69fa4d27-753e-4b9c-af5f-aee8b4e3fbc1-kube-api-access-ks9gb\") on node \"crc\" DevicePath \"\""
Jan 21 07:10:07 crc kubenswrapper[4893]: I0121 07:10:07.255736 4893 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/69fa4d27-753e-4b9c-af5f-aee8b4e3fbc1-crc-storage\") on node \"crc\" DevicePath \"\""
Jan 21 07:10:07 crc kubenswrapper[4893]: I0121 07:10:07.777650 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-rvxk6" event={"ID":"69fa4d27-753e-4b9c-af5f-aee8b4e3fbc1","Type":"ContainerDied","Data":"12f30ff33fc4e7d1ae0f4fd3b0c8d2aa57204631718772878675c82c17af9219"}
Jan 21 07:10:07 crc kubenswrapper[4893]: I0121 07:10:07.778021 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12f30ff33fc4e7d1ae0f4fd3b0c8d2aa57204631718772878675c82c17af9219"
Jan 21 07:10:07 crc kubenswrapper[4893]: I0121 07:10:07.777732 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-rvxk6"
Jan 21 07:10:16 crc kubenswrapper[4893]: I0121 07:10:16.137507 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r8vdr"]
Jan 21 07:10:16 crc kubenswrapper[4893]: E0121 07:10:16.138414 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69fa4d27-753e-4b9c-af5f-aee8b4e3fbc1" containerName="storage"
Jan 21 07:10:16 crc kubenswrapper[4893]: I0121 07:10:16.138437 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="69fa4d27-753e-4b9c-af5f-aee8b4e3fbc1" containerName="storage"
Jan 21 07:10:16 crc kubenswrapper[4893]: I0121 07:10:16.138629 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="69fa4d27-753e-4b9c-af5f-aee8b4e3fbc1" containerName="storage"
Jan 21 07:10:16 crc kubenswrapper[4893]: I0121 07:10:16.140054 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r8vdr"
Jan 21 07:10:16 crc kubenswrapper[4893]: I0121 07:10:16.142157 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Jan 21 07:10:16 crc kubenswrapper[4893]: I0121 07:10:16.149774 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r8vdr"]
Jan 21 07:10:16 crc kubenswrapper[4893]: I0121 07:10:16.303972 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b962be1e-48b4-482c-a8a6-c6346dbdc835-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r8vdr\" (UID: \"b962be1e-48b4-482c-a8a6-c6346dbdc835\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r8vdr"
Jan 21 07:10:16 crc kubenswrapper[4893]: I0121 07:10:16.304032 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b962be1e-48b4-482c-a8a6-c6346dbdc835-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r8vdr\" (UID: \"b962be1e-48b4-482c-a8a6-c6346dbdc835\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r8vdr"
Jan 21 07:10:16 crc kubenswrapper[4893]: I0121 07:10:16.304083 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6gm5\" (UniqueName: \"kubernetes.io/projected/b962be1e-48b4-482c-a8a6-c6346dbdc835-kube-api-access-v6gm5\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r8vdr\" (UID: \"b962be1e-48b4-482c-a8a6-c6346dbdc835\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r8vdr"
Jan 21 07:10:16 crc kubenswrapper[4893]: I0121 07:10:16.405163 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b962be1e-48b4-482c-a8a6-c6346dbdc835-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r8vdr\" (UID: \"b962be1e-48b4-482c-a8a6-c6346dbdc835\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r8vdr"
Jan 21 07:10:16 crc kubenswrapper[4893]: I0121 07:10:16.405255 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b962be1e-48b4-482c-a8a6-c6346dbdc835-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r8vdr\" (UID: \"b962be1e-48b4-482c-a8a6-c6346dbdc835\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r8vdr"
Jan 21 07:10:16 crc kubenswrapper[4893]: I0121 07:10:16.405330 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6gm5\" (UniqueName: \"kubernetes.io/projected/b962be1e-48b4-482c-a8a6-c6346dbdc835-kube-api-access-v6gm5\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r8vdr\" (UID: \"b962be1e-48b4-482c-a8a6-c6346dbdc835\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r8vdr"
Jan 21 07:10:16 crc kubenswrapper[4893]: I0121 07:10:16.406614 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/b962be1e-48b4-482c-a8a6-c6346dbdc835-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r8vdr\" (UID: \"b962be1e-48b4-482c-a8a6-c6346dbdc835\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r8vdr" Jan 21 07:10:16 crc kubenswrapper[4893]: I0121 07:10:16.406726 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b962be1e-48b4-482c-a8a6-c6346dbdc835-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r8vdr\" (UID: \"b962be1e-48b4-482c-a8a6-c6346dbdc835\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r8vdr" Jan 21 07:10:16 crc kubenswrapper[4893]: I0121 07:10:16.429252 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6gm5\" (UniqueName: \"kubernetes.io/projected/b962be1e-48b4-482c-a8a6-c6346dbdc835-kube-api-access-v6gm5\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r8vdr\" (UID: \"b962be1e-48b4-482c-a8a6-c6346dbdc835\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r8vdr" Jan 21 07:10:16 crc kubenswrapper[4893]: I0121 07:10:16.459784 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r8vdr" Jan 21 07:10:16 crc kubenswrapper[4893]: I0121 07:10:16.660114 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r8vdr"] Jan 21 07:10:16 crc kubenswrapper[4893]: W0121 07:10:16.669163 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb962be1e_48b4_482c_a8a6_c6346dbdc835.slice/crio-b00b48d2936c822bfc06a99ba36e72fed9659ef7a0530eafac89c820d71428ae WatchSource:0}: Error finding container b00b48d2936c822bfc06a99ba36e72fed9659ef7a0530eafac89c820d71428ae: Status 404 returned error can't find the container with id b00b48d2936c822bfc06a99ba36e72fed9659ef7a0530eafac89c820d71428ae Jan 21 07:10:16 crc kubenswrapper[4893]: I0121 07:10:16.829353 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r8vdr" event={"ID":"b962be1e-48b4-482c-a8a6-c6346dbdc835","Type":"ContainerStarted","Data":"b00b48d2936c822bfc06a99ba36e72fed9659ef7a0530eafac89c820d71428ae"} Jan 21 07:10:17 crc kubenswrapper[4893]: I0121 07:10:17.838754 4893 generic.go:334] "Generic (PLEG): container finished" podID="b962be1e-48b4-482c-a8a6-c6346dbdc835" containerID="04a137845ff4013ab02592da2cde10676a6f1704561885e266109cf411870daa" exitCode=0 Jan 21 07:10:17 crc kubenswrapper[4893]: I0121 07:10:17.838801 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r8vdr" event={"ID":"b962be1e-48b4-482c-a8a6-c6346dbdc835","Type":"ContainerDied","Data":"04a137845ff4013ab02592da2cde10676a6f1704561885e266109cf411870daa"} Jan 21 07:10:18 crc kubenswrapper[4893]: I0121 07:10:18.138740 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-p7xdl"] Jan 21 07:10:18 crc kubenswrapper[4893]: I0121 07:10:18.140944 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-p7xdl" Jan 21 07:10:18 crc kubenswrapper[4893]: I0121 07:10:18.159346 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p7xdl"] Jan 21 07:10:18 crc kubenswrapper[4893]: I0121 07:10:18.328573 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c452b1a4-8e07-4d7b-a683-b310ec46eb94-utilities\") pod \"redhat-operators-p7xdl\" (UID: \"c452b1a4-8e07-4d7b-a683-b310ec46eb94\") " pod="openshift-marketplace/redhat-operators-p7xdl" Jan 21 07:10:18 crc kubenswrapper[4893]: I0121 07:10:18.328647 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c452b1a4-8e07-4d7b-a683-b310ec46eb94-catalog-content\") pod \"redhat-operators-p7xdl\" (UID: \"c452b1a4-8e07-4d7b-a683-b310ec46eb94\") " pod="openshift-marketplace/redhat-operators-p7xdl" Jan 21 07:10:18 crc kubenswrapper[4893]: I0121 07:10:18.328754 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6drjj\" (UniqueName: \"kubernetes.io/projected/c452b1a4-8e07-4d7b-a683-b310ec46eb94-kube-api-access-6drjj\") pod \"redhat-operators-p7xdl\" (UID: \"c452b1a4-8e07-4d7b-a683-b310ec46eb94\") " pod="openshift-marketplace/redhat-operators-p7xdl" Jan 21 07:10:18 crc kubenswrapper[4893]: I0121 07:10:18.429955 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c452b1a4-8e07-4d7b-a683-b310ec46eb94-utilities\") pod \"redhat-operators-p7xdl\" (UID: \"c452b1a4-8e07-4d7b-a683-b310ec46eb94\") " pod="openshift-marketplace/redhat-operators-p7xdl" Jan 21 07:10:18 crc kubenswrapper[4893]: I0121 07:10:18.430027 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c452b1a4-8e07-4d7b-a683-b310ec46eb94-catalog-content\") pod \"redhat-operators-p7xdl\" (UID: \"c452b1a4-8e07-4d7b-a683-b310ec46eb94\") " pod="openshift-marketplace/redhat-operators-p7xdl" Jan 21 07:10:18 crc kubenswrapper[4893]: I0121 07:10:18.430072 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6drjj\" (UniqueName: \"kubernetes.io/projected/c452b1a4-8e07-4d7b-a683-b310ec46eb94-kube-api-access-6drjj\") pod \"redhat-operators-p7xdl\" (UID: \"c452b1a4-8e07-4d7b-a683-b310ec46eb94\") " pod="openshift-marketplace/redhat-operators-p7xdl" Jan 21 07:10:18 crc kubenswrapper[4893]: I0121 07:10:18.430652 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c452b1a4-8e07-4d7b-a683-b310ec46eb94-utilities\") pod \"redhat-operators-p7xdl\" (UID: \"c452b1a4-8e07-4d7b-a683-b310ec46eb94\") " pod="openshift-marketplace/redhat-operators-p7xdl" Jan 21 07:10:18 crc kubenswrapper[4893]: I0121 07:10:18.430740 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c452b1a4-8e07-4d7b-a683-b310ec46eb94-catalog-content\") pod \"redhat-operators-p7xdl\" (UID: \"c452b1a4-8e07-4d7b-a683-b310ec46eb94\") " pod="openshift-marketplace/redhat-operators-p7xdl" Jan 21 07:10:18 crc kubenswrapper[4893]: I0121 07:10:18.455425 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-6drjj\" (UniqueName: \"kubernetes.io/projected/c452b1a4-8e07-4d7b-a683-b310ec46eb94-kube-api-access-6drjj\") pod \"redhat-operators-p7xdl\" (UID: \"c452b1a4-8e07-4d7b-a683-b310ec46eb94\") " pod="openshift-marketplace/redhat-operators-p7xdl" Jan 21 07:10:18 crc kubenswrapper[4893]: I0121 07:10:18.458842 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p7xdl" Jan 21 07:10:18 crc kubenswrapper[4893]: I0121 07:10:18.677606 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p7xdl"] Jan 21 07:10:18 crc kubenswrapper[4893]: I0121 07:10:18.846133 4893 generic.go:334] "Generic (PLEG): container finished" podID="c452b1a4-8e07-4d7b-a683-b310ec46eb94" containerID="8f712919cdae28cbf4cd96a9579fcbbc2b9c9839197572cf5709cea62ad271d4" exitCode=0 Jan 21 07:10:18 crc kubenswrapper[4893]: I0121 07:10:18.846178 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7xdl" event={"ID":"c452b1a4-8e07-4d7b-a683-b310ec46eb94","Type":"ContainerDied","Data":"8f712919cdae28cbf4cd96a9579fcbbc2b9c9839197572cf5709cea62ad271d4"} Jan 21 07:10:18 crc kubenswrapper[4893]: I0121 07:10:18.846609 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7xdl" event={"ID":"c452b1a4-8e07-4d7b-a683-b310ec46eb94","Type":"ContainerStarted","Data":"b72f8f86a08eb4e0ada366f3e01941574dc9728af21f352be083e13a8a147728"} Jan 21 07:10:19 crc kubenswrapper[4893]: I0121 07:10:19.856885 4893 generic.go:334] "Generic (PLEG): container finished" podID="b962be1e-48b4-482c-a8a6-c6346dbdc835" containerID="b1ca0f14d3b9fd2b78fb7a657332dd2ea6960f37893ffb93f59f93bb83e7d2fc" exitCode=0 Jan 21 07:10:19 crc kubenswrapper[4893]: I0121 07:10:19.857119 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r8vdr" event={"ID":"b962be1e-48b4-482c-a8a6-c6346dbdc835","Type":"ContainerDied","Data":"b1ca0f14d3b9fd2b78fb7a657332dd2ea6960f37893ffb93f59f93bb83e7d2fc"} Jan 21 07:10:19 crc kubenswrapper[4893]: I0121 07:10:19.864465 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7xdl" event={"ID":"c452b1a4-8e07-4d7b-a683-b310ec46eb94","Type":"ContainerStarted","Data":"3e45dfceeb80ce0786520740073131bc47af5ccc0e28aa6c14c07cc6e7c4ccca"} Jan 21 07:10:20 crc kubenswrapper[4893]: I0121 07:10:20.872122 4893 generic.go:334] "Generic (PLEG): container finished" podID="b962be1e-48b4-482c-a8a6-c6346dbdc835" containerID="28427f9c8fdbbfb915161e47d2676249a8315eb13bf927cec618808f2ad6b9d5" exitCode=0 Jan 21 07:10:20 crc kubenswrapper[4893]: I0121 07:10:20.873304 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r8vdr" event={"ID":"b962be1e-48b4-482c-a8a6-c6346dbdc835","Type":"ContainerDied","Data":"28427f9c8fdbbfb915161e47d2676249a8315eb13bf927cec618808f2ad6b9d5"} Jan 21 07:10:21 crc kubenswrapper[4893]: I0121 07:10:21.879997 4893 generic.go:334] "Generic (PLEG): container finished" podID="c452b1a4-8e07-4d7b-a683-b310ec46eb94" containerID="3e45dfceeb80ce0786520740073131bc47af5ccc0e28aa6c14c07cc6e7c4ccca" exitCode=0 Jan 21 07:10:21 crc kubenswrapper[4893]: I0121 07:10:21.880073 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7xdl" 
event={"ID":"c452b1a4-8e07-4d7b-a683-b310ec46eb94","Type":"ContainerDied","Data":"3e45dfceeb80ce0786520740073131bc47af5ccc0e28aa6c14c07cc6e7c4ccca"} Jan 21 07:10:22 crc kubenswrapper[4893]: I0121 07:10:22.046992 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pj2ss" Jan 21 07:10:22 crc kubenswrapper[4893]: I0121 07:10:22.114307 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r8vdr" Jan 21 07:10:22 crc kubenswrapper[4893]: I0121 07:10:22.346399 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b962be1e-48b4-482c-a8a6-c6346dbdc835-bundle\") pod \"b962be1e-48b4-482c-a8a6-c6346dbdc835\" (UID: \"b962be1e-48b4-482c-a8a6-c6346dbdc835\") " Jan 21 07:10:22 crc kubenswrapper[4893]: I0121 07:10:22.346490 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6gm5\" (UniqueName: \"kubernetes.io/projected/b962be1e-48b4-482c-a8a6-c6346dbdc835-kube-api-access-v6gm5\") pod \"b962be1e-48b4-482c-a8a6-c6346dbdc835\" (UID: \"b962be1e-48b4-482c-a8a6-c6346dbdc835\") " Jan 21 07:10:22 crc kubenswrapper[4893]: I0121 07:10:22.346516 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b962be1e-48b4-482c-a8a6-c6346dbdc835-util\") pod \"b962be1e-48b4-482c-a8a6-c6346dbdc835\" (UID: \"b962be1e-48b4-482c-a8a6-c6346dbdc835\") " Jan 21 07:10:22 crc kubenswrapper[4893]: I0121 07:10:22.347300 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b962be1e-48b4-482c-a8a6-c6346dbdc835-bundle" (OuterVolumeSpecName: "bundle") pod "b962be1e-48b4-482c-a8a6-c6346dbdc835" (UID: "b962be1e-48b4-482c-a8a6-c6346dbdc835"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:10:22 crc kubenswrapper[4893]: I0121 07:10:22.352217 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b962be1e-48b4-482c-a8a6-c6346dbdc835-kube-api-access-v6gm5" (OuterVolumeSpecName: "kube-api-access-v6gm5") pod "b962be1e-48b4-482c-a8a6-c6346dbdc835" (UID: "b962be1e-48b4-482c-a8a6-c6346dbdc835"). InnerVolumeSpecName "kube-api-access-v6gm5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:10:22 crc kubenswrapper[4893]: I0121 07:10:22.361556 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b962be1e-48b4-482c-a8a6-c6346dbdc835-util" (OuterVolumeSpecName: "util") pod "b962be1e-48b4-482c-a8a6-c6346dbdc835" (UID: "b962be1e-48b4-482c-a8a6-c6346dbdc835"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:10:22 crc kubenswrapper[4893]: I0121 07:10:22.447979 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6gm5\" (UniqueName: \"kubernetes.io/projected/b962be1e-48b4-482c-a8a6-c6346dbdc835-kube-api-access-v6gm5\") on node \"crc\" DevicePath \"\"" Jan 21 07:10:22 crc kubenswrapper[4893]: I0121 07:10:22.448287 4893 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b962be1e-48b4-482c-a8a6-c6346dbdc835-util\") on node \"crc\" DevicePath \"\"" Jan 21 07:10:22 crc kubenswrapper[4893]: I0121 07:10:22.448299 4893 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b962be1e-48b4-482c-a8a6-c6346dbdc835-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:10:22 crc kubenswrapper[4893]: I0121 07:10:22.889064 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7xdl" event={"ID":"c452b1a4-8e07-4d7b-a683-b310ec46eb94","Type":"ContainerStarted","Data":"8b45c91ad8aef0af84c6ea577b5bd3695102b3ea56dcc05e5ea1c63641637666"} Jan 21 07:10:22 crc kubenswrapper[4893]: I0121 07:10:22.904978 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r8vdr" event={"ID":"b962be1e-48b4-482c-a8a6-c6346dbdc835","Type":"ContainerDied","Data":"b00b48d2936c822bfc06a99ba36e72fed9659ef7a0530eafac89c820d71428ae"} Jan 21 07:10:22 crc kubenswrapper[4893]: I0121 07:10:22.905029 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b00b48d2936c822bfc06a99ba36e72fed9659ef7a0530eafac89c820d71428ae" Jan 21 07:10:22 crc kubenswrapper[4893]: I0121 07:10:22.905035 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r8vdr" Jan 21 07:10:22 crc kubenswrapper[4893]: I0121 07:10:22.926524 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-p7xdl" podStartSLOduration=1.31786548 podStartE2EDuration="4.926491712s" podCreationTimestamp="2026-01-21 07:10:18 +0000 UTC" firstStartedPulling="2026-01-21 07:10:18.848034401 +0000 UTC m=+960.078380303" lastFinishedPulling="2026-01-21 07:10:22.456660633 +0000 UTC m=+963.687006535" observedRunningTime="2026-01-21 07:10:22.920305367 +0000 UTC m=+964.150651269" watchObservedRunningTime="2026-01-21 07:10:22.926491712 +0000 UTC m=+964.156837614" Jan 21 07:10:26 crc kubenswrapper[4893]: I0121 07:10:26.630072 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-q822p"] Jan 21 07:10:26 crc kubenswrapper[4893]: E0121 07:10:26.630831 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b962be1e-48b4-482c-a8a6-c6346dbdc835" containerName="pull" Jan 21 07:10:26 crc kubenswrapper[4893]: I0121 07:10:26.630850 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="b962be1e-48b4-482c-a8a6-c6346dbdc835" containerName="pull" Jan 21 07:10:26 crc kubenswrapper[4893]: E0121 07:10:26.630879 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b962be1e-48b4-482c-a8a6-c6346dbdc835" containerName="extract" Jan 21 07:10:26 crc kubenswrapper[4893]: I0121 07:10:26.630886 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="b962be1e-48b4-482c-a8a6-c6346dbdc835" containerName="extract" Jan 21 07:10:26 crc kubenswrapper[4893]: E0121 07:10:26.630897 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b962be1e-48b4-482c-a8a6-c6346dbdc835" containerName="util" Jan 21 07:10:26 crc kubenswrapper[4893]: I0121 07:10:26.630904 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="b962be1e-48b4-482c-a8a6-c6346dbdc835" containerName="util" Jan 21 07:10:26 crc kubenswrapper[4893]: I0121 07:10:26.631020 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="b962be1e-48b4-482c-a8a6-c6346dbdc835" containerName="extract" Jan 21 07:10:26 crc kubenswrapper[4893]: I0121 07:10:26.631864 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-q822p" Jan 21 07:10:26 crc kubenswrapper[4893]: I0121 07:10:26.638948 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 21 07:10:26 crc kubenswrapper[4893]: I0121 07:10:26.639003 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 21 07:10:26 crc kubenswrapper[4893]: I0121 07:10:26.643980 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-f6p9z" Jan 21 07:10:26 crc kubenswrapper[4893]: I0121 07:10:26.649403 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-q822p"] Jan 21 07:10:26 crc kubenswrapper[4893]: I0121 07:10:26.787417 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl7dt\" (UniqueName: \"kubernetes.io/projected/cf682009-b6d7-4665-ab3f-5894a39a3a09-kube-api-access-fl7dt\") pod \"nmstate-operator-646758c888-q822p\" (UID: \"cf682009-b6d7-4665-ab3f-5894a39a3a09\") " pod="openshift-nmstate/nmstate-operator-646758c888-q822p" Jan 21 07:10:26 crc kubenswrapper[4893]: I0121 07:10:26.888469 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fl7dt\" (UniqueName: \"kubernetes.io/projected/cf682009-b6d7-4665-ab3f-5894a39a3a09-kube-api-access-fl7dt\") pod \"nmstate-operator-646758c888-q822p\" (UID: \"cf682009-b6d7-4665-ab3f-5894a39a3a09\") " pod="openshift-nmstate/nmstate-operator-646758c888-q822p" Jan 21 07:10:26 crc kubenswrapper[4893]: I0121 07:10:26.906860 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl7dt\" (UniqueName: \"kubernetes.io/projected/cf682009-b6d7-4665-ab3f-5894a39a3a09-kube-api-access-fl7dt\") pod \"nmstate-operator-646758c888-q822p\" (UID: \"cf682009-b6d7-4665-ab3f-5894a39a3a09\") " pod="openshift-nmstate/nmstate-operator-646758c888-q822p" Jan 21 07:10:26 crc kubenswrapper[4893]: I0121 07:10:26.948929 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-q822p" Jan 21 07:10:27 crc kubenswrapper[4893]: I0121 07:10:27.427074 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-q822p"] Jan 21 07:10:27 crc kubenswrapper[4893]: W0121 07:10:27.436965 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf682009_b6d7_4665_ab3f_5894a39a3a09.slice/crio-9b50eb083ff428908a2f20b5eb3f96417f1a097abe59574a533ed0784f770da8 WatchSource:0}: Error finding container 9b50eb083ff428908a2f20b5eb3f96417f1a097abe59574a533ed0784f770da8: Status 404 returned error can't find the container with id 9b50eb083ff428908a2f20b5eb3f96417f1a097abe59574a533ed0784f770da8 Jan 21 07:10:28 crc kubenswrapper[4893]: I0121 07:10:28.186654 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-q822p" event={"ID":"cf682009-b6d7-4665-ab3f-5894a39a3a09","Type":"ContainerStarted","Data":"9b50eb083ff428908a2f20b5eb3f96417f1a097abe59574a533ed0784f770da8"} Jan 21 07:10:28 crc kubenswrapper[4893]: I0121 07:10:28.459859 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-p7xdl" Jan 21 07:10:28 crc kubenswrapper[4893]: I0121 07:10:28.459941 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-p7xdl" Jan 21 07:10:28 crc kubenswrapper[4893]: I0121 07:10:28.656766 4893 patch_prober.go:28] interesting pod/machine-config-daemon-hg78p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 07:10:28 crc kubenswrapper[4893]: I0121 07:10:28.656840 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 07:10:29 crc kubenswrapper[4893]: I0121 07:10:29.512126 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-p7xdl" podUID="c452b1a4-8e07-4d7b-a683-b310ec46eb94" containerName="registry-server" probeResult="failure" output=< Jan 21 07:10:29 crc kubenswrapper[4893]: timeout: failed to connect service ":50051" within 1s Jan 21 07:10:29 crc kubenswrapper[4893]: > Jan 21 07:10:31 crc kubenswrapper[4893]: I0121 07:10:31.205587 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-q822p" event={"ID":"cf682009-b6d7-4665-ab3f-5894a39a3a09","Type":"ContainerStarted","Data":"df6e0f784b8ffc5bddf40dc905501625f2141c8b875cdc98e56f872deb676705"} Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.502433 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-q822p" podStartSLOduration=7.352413516 podStartE2EDuration="10.502401733s" podCreationTimestamp="2026-01-21 07:10:26 +0000 UTC" firstStartedPulling="2026-01-21 07:10:27.618745008 +0000 UTC m=+968.849090910" lastFinishedPulling="2026-01-21 07:10:30.768733225 +0000 UTC m=+971.999079127" observedRunningTime="2026-01-21 07:10:31.234139449 +0000 UTC m=+972.464485351" 
watchObservedRunningTime="2026-01-21 07:10:36.502401733 +0000 UTC m=+977.732747635" Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.503216 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-5l8kf"] Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.504080 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-5l8kf" Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.506984 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-c4l7z" Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.520318 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-5l8kf"] Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.524543 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-j6t2h"] Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.525487 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-j6t2h" Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.531291 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.551469 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-j6t2h"] Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.565237 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rf6vn"] Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.566577 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-9bmdw"] Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.567024 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rf6vn" Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.567346 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-9bmdw" Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.600103 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rf6vn"] Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.686364 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-mp8lb"] Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.687127 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mp8lb" Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.689536 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-djztb" Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.690261 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.690392 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.701376 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-mp8lb"] Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.702916 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9828\" (UniqueName: \"kubernetes.io/projected/187ac6cf-a917-4345-983b-a806aa8906b9-kube-api-access-k9828\") pod \"nmstate-webhook-8474b5b9d8-j6t2h\" (UID: \"187ac6cf-a917-4345-983b-a806aa8906b9\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-j6t2h" Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.702957 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtc7j\" (UniqueName: \"kubernetes.io/projected/098bf00d-a656-4b84-8f75-c92e8d53c870-kube-api-access-rtc7j\") pod \"redhat-marketplace-rf6vn\" (UID: \"098bf00d-a656-4b84-8f75-c92e8d53c870\") " pod="openshift-marketplace/redhat-marketplace-rf6vn" Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.702989 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/dea30d59-92e1-4ebf-ba1a-5e5f18cbeb61-ovs-socket\") pod \"nmstate-handler-9bmdw\" (UID: \"dea30d59-92e1-4ebf-ba1a-5e5f18cbeb61\") " pod="openshift-nmstate/nmstate-handler-9bmdw" Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.703022 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/098bf00d-a656-4b84-8f75-c92e8d53c870-catalog-content\") pod \"redhat-marketplace-rf6vn\" (UID: \"098bf00d-a656-4b84-8f75-c92e8d53c870\") " pod="openshift-marketplace/redhat-marketplace-rf6vn" Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.703056 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/187ac6cf-a917-4345-983b-a806aa8906b9-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-j6t2h\" (UID: \"187ac6cf-a917-4345-983b-a806aa8906b9\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-j6t2h" Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.703090 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/dea30d59-92e1-4ebf-ba1a-5e5f18cbeb61-nmstate-lock\") pod \"nmstate-handler-9bmdw\" (UID: \"dea30d59-92e1-4ebf-ba1a-5e5f18cbeb61\") " pod="openshift-nmstate/nmstate-handler-9bmdw" Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.703124 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5657\" (UniqueName: 
\"kubernetes.io/projected/df7d5aed-8a1d-4936-a9a2-75d9d2228de5-kube-api-access-j5657\") pod \"nmstate-metrics-54757c584b-5l8kf\" (UID: \"df7d5aed-8a1d-4936-a9a2-75d9d2228de5\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-5l8kf" Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.703209 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/dea30d59-92e1-4ebf-ba1a-5e5f18cbeb61-dbus-socket\") pod \"nmstate-handler-9bmdw\" (UID: \"dea30d59-92e1-4ebf-ba1a-5e5f18cbeb61\") " pod="openshift-nmstate/nmstate-handler-9bmdw" Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.703277 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlzm4\" (UniqueName: \"kubernetes.io/projected/dea30d59-92e1-4ebf-ba1a-5e5f18cbeb61-kube-api-access-wlzm4\") pod \"nmstate-handler-9bmdw\" (UID: \"dea30d59-92e1-4ebf-ba1a-5e5f18cbeb61\") " pod="openshift-nmstate/nmstate-handler-9bmdw" Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.703302 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/098bf00d-a656-4b84-8f75-c92e8d53c870-utilities\") pod \"redhat-marketplace-rf6vn\" (UID: \"098bf00d-a656-4b84-8f75-c92e8d53c870\") " pod="openshift-marketplace/redhat-marketplace-rf6vn" Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.804991 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/594278ba-8824-49d6-9b6d-a5a0e8dd66ae-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-mp8lb\" (UID: \"594278ba-8824-49d6-9b6d-a5a0e8dd66ae\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mp8lb" Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.805071 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/098bf00d-a656-4b84-8f75-c92e8d53c870-utilities\") pod \"redhat-marketplace-rf6vn\" (UID: \"098bf00d-a656-4b84-8f75-c92e8d53c870\") " pod="openshift-marketplace/redhat-marketplace-rf6vn" Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.805172 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlzm4\" (UniqueName: \"kubernetes.io/projected/dea30d59-92e1-4ebf-ba1a-5e5f18cbeb61-kube-api-access-wlzm4\") pod \"nmstate-handler-9bmdw\" (UID: \"dea30d59-92e1-4ebf-ba1a-5e5f18cbeb61\") " pod="openshift-nmstate/nmstate-handler-9bmdw" Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.805243 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9828\" (UniqueName: \"kubernetes.io/projected/187ac6cf-a917-4345-983b-a806aa8906b9-kube-api-access-k9828\") pod \"nmstate-webhook-8474b5b9d8-j6t2h\" (UID: \"187ac6cf-a917-4345-983b-a806aa8906b9\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-j6t2h" Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.805284 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtc7j\" (UniqueName: \"kubernetes.io/projected/098bf00d-a656-4b84-8f75-c92e8d53c870-kube-api-access-rtc7j\") pod \"redhat-marketplace-rf6vn\" (UID: \"098bf00d-a656-4b84-8f75-c92e8d53c870\") " pod="openshift-marketplace/redhat-marketplace-rf6vn" Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 
07:10:36.805364 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/dea30d59-92e1-4ebf-ba1a-5e5f18cbeb61-ovs-socket\") pod \"nmstate-handler-9bmdw\" (UID: \"dea30d59-92e1-4ebf-ba1a-5e5f18cbeb61\") " pod="openshift-nmstate/nmstate-handler-9bmdw" Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.805404 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/098bf00d-a656-4b84-8f75-c92e8d53c870-catalog-content\") pod \"redhat-marketplace-rf6vn\" (UID: \"098bf00d-a656-4b84-8f75-c92e8d53c870\") " pod="openshift-marketplace/redhat-marketplace-rf6vn" Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.805438 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/187ac6cf-a917-4345-983b-a806aa8906b9-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-j6t2h\" (UID: \"187ac6cf-a917-4345-983b-a806aa8906b9\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-j6t2h" Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.805468 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/594278ba-8824-49d6-9b6d-a5a0e8dd66ae-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-mp8lb\" (UID: \"594278ba-8824-49d6-9b6d-a5a0e8dd66ae\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mp8lb" Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.805515 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/dea30d59-92e1-4ebf-ba1a-5e5f18cbeb61-nmstate-lock\") pod \"nmstate-handler-9bmdw\" (UID: \"dea30d59-92e1-4ebf-ba1a-5e5f18cbeb61\") " pod="openshift-nmstate/nmstate-handler-9bmdw" Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.805543 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5657\" (UniqueName: \"kubernetes.io/projected/df7d5aed-8a1d-4936-a9a2-75d9d2228de5-kube-api-access-j5657\") pod \"nmstate-metrics-54757c584b-5l8kf\" (UID: \"df7d5aed-8a1d-4936-a9a2-75d9d2228de5\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-5l8kf" Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.805652 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/098bf00d-a656-4b84-8f75-c92e8d53c870-utilities\") pod \"redhat-marketplace-rf6vn\" (UID: \"098bf00d-a656-4b84-8f75-c92e8d53c870\") " pod="openshift-marketplace/redhat-marketplace-rf6vn" Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.805692 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/dea30d59-92e1-4ebf-ba1a-5e5f18cbeb61-dbus-socket\") pod \"nmstate-handler-9bmdw\" (UID: \"dea30d59-92e1-4ebf-ba1a-5e5f18cbeb61\") " pod="openshift-nmstate/nmstate-handler-9bmdw" Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.805754 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l64gj\" (UniqueName: \"kubernetes.io/projected/594278ba-8824-49d6-9b6d-a5a0e8dd66ae-kube-api-access-l64gj\") pod \"nmstate-console-plugin-7754f76f8b-mp8lb\" (UID: \"594278ba-8824-49d6-9b6d-a5a0e8dd66ae\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mp8lb" 
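Note: the reconciler_common.go and operation_generator.go records in this stretch always appear in the same phase order: VerifyControllerAttachedVolume started, then MountVolume started, then MountVolume.SetUp succeeded on the way up, and UnmountVolume started, then UnmountVolume.TearDown succeeded, then Volume detached on the way down (compare the crc-storage teardown at 07:10:07). The Go sketch below is a toy desired-state loop illustrating only that ordering; every type and function name in it is invented for illustration, and it is not kubelet's volume manager.

// Illustrative sketch only: a toy reconcile loop that logs the same
// phase ordering as reconciler_common.go / operation_generator.go.
// All names here are invented; this is not kubelet source.
package main

import "fmt"

type volume struct {
	name     string
	plugin   string // e.g. "kubernetes.io/projected"
	attached bool
	mounted  bool
}

// reconcile drives each volume one phase toward the desired state per
// pass, printing a line per phase the way the kubelet log does.
func reconcile(pod string, desired bool, vols []*volume) {
	for _, v := range vols {
		switch {
		case desired && !v.attached:
			fmt.Printf("VerifyControllerAttachedVolume started for volume %q pod=%q\n", v.name, pod)
			v.attached = true
		case desired && !v.mounted:
			fmt.Printf("MountVolume started for volume %q pod=%q\n", v.name, pod)
			// ...the volume plugin's SetUp would run here...
			v.mounted = true
			fmt.Printf("MountVolume.SetUp succeeded for volume %q pod=%q\n", v.name, pod)
		case !desired && v.mounted:
			fmt.Printf("UnmountVolume started for volume %q pod=%q\n", v.name, pod)
			v.mounted = false
			fmt.Printf("UnmountVolume.TearDown succeeded for volume %q. PluginName %q\n", v.name, v.plugin)
			fmt.Printf("Volume detached for volume %q DevicePath %q\n", v.name, "")
		}
	}
}

func main() {
	vols := []*volume{
		{name: "nginx-conf", plugin: "kubernetes.io/configmap"},
		{name: "kube-api-access-l64gj", plugin: "kubernetes.io/projected"},
	}
	pod := "openshift-nmstate/nmstate-console-plugin-7754f76f8b-mp8lb"
	reconcile(pod, true, vols)  // pod scheduled: verify attachment first
	reconcile(pod, true, vols)  // next pass: mount
	reconcile(pod, false, vols) // pod deleted: unmount, then report detached
}

Running the sketch prints the phases in the order the kubelet logs them; the real volume manager is asynchronous and retries failed operations on later reconcile passes rather than advancing one phase per call.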
Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.806008 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/098bf00d-a656-4b84-8f75-c92e8d53c870-catalog-content\") pod \"redhat-marketplace-rf6vn\" (UID: \"098bf00d-a656-4b84-8f75-c92e8d53c870\") " pod="openshift-marketplace/redhat-marketplace-rf6vn" Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.806149 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/dea30d59-92e1-4ebf-ba1a-5e5f18cbeb61-ovs-socket\") pod \"nmstate-handler-9bmdw\" (UID: \"dea30d59-92e1-4ebf-ba1a-5e5f18cbeb61\") " pod="openshift-nmstate/nmstate-handler-9bmdw" Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.806293 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/dea30d59-92e1-4ebf-ba1a-5e5f18cbeb61-nmstate-lock\") pod \"nmstate-handler-9bmdw\" (UID: \"dea30d59-92e1-4ebf-ba1a-5e5f18cbeb61\") " pod="openshift-nmstate/nmstate-handler-9bmdw" Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.806426 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/dea30d59-92e1-4ebf-ba1a-5e5f18cbeb61-dbus-socket\") pod \"nmstate-handler-9bmdw\" (UID: \"dea30d59-92e1-4ebf-ba1a-5e5f18cbeb61\") " pod="openshift-nmstate/nmstate-handler-9bmdw" Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.813156 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/187ac6cf-a917-4345-983b-a806aa8906b9-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-j6t2h\" (UID: \"187ac6cf-a917-4345-983b-a806aa8906b9\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-j6t2h" Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.829029 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtc7j\" (UniqueName: \"kubernetes.io/projected/098bf00d-a656-4b84-8f75-c92e8d53c870-kube-api-access-rtc7j\") pod \"redhat-marketplace-rf6vn\" (UID: \"098bf00d-a656-4b84-8f75-c92e8d53c870\") " pod="openshift-marketplace/redhat-marketplace-rf6vn" Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.831466 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlzm4\" (UniqueName: \"kubernetes.io/projected/dea30d59-92e1-4ebf-ba1a-5e5f18cbeb61-kube-api-access-wlzm4\") pod \"nmstate-handler-9bmdw\" (UID: \"dea30d59-92e1-4ebf-ba1a-5e5f18cbeb61\") " pod="openshift-nmstate/nmstate-handler-9bmdw" Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.835478 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5657\" (UniqueName: \"kubernetes.io/projected/df7d5aed-8a1d-4936-a9a2-75d9d2228de5-kube-api-access-j5657\") pod \"nmstate-metrics-54757c584b-5l8kf\" (UID: \"df7d5aed-8a1d-4936-a9a2-75d9d2228de5\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-5l8kf" Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.838464 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9828\" (UniqueName: \"kubernetes.io/projected/187ac6cf-a917-4345-983b-a806aa8906b9-kube-api-access-k9828\") pod \"nmstate-webhook-8474b5b9d8-j6t2h\" (UID: \"187ac6cf-a917-4345-983b-a806aa8906b9\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-j6t2h" Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 
07:10:36.842276 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-5l8kf" Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.854911 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-j6t2h" Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.905030 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rf6vn" Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.907105 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l64gj\" (UniqueName: \"kubernetes.io/projected/594278ba-8824-49d6-9b6d-a5a0e8dd66ae-kube-api-access-l64gj\") pod \"nmstate-console-plugin-7754f76f8b-mp8lb\" (UID: \"594278ba-8824-49d6-9b6d-a5a0e8dd66ae\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mp8lb" Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.907204 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/594278ba-8824-49d6-9b6d-a5a0e8dd66ae-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-mp8lb\" (UID: \"594278ba-8824-49d6-9b6d-a5a0e8dd66ae\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mp8lb" Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.907932 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/594278ba-8824-49d6-9b6d-a5a0e8dd66ae-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-mp8lb\" (UID: \"594278ba-8824-49d6-9b6d-a5a0e8dd66ae\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mp8lb" Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.908746 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/594278ba-8824-49d6-9b6d-a5a0e8dd66ae-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-mp8lb\" (UID: \"594278ba-8824-49d6-9b6d-a5a0e8dd66ae\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mp8lb" Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.911381 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/594278ba-8824-49d6-9b6d-a5a0e8dd66ae-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-mp8lb\" (UID: \"594278ba-8824-49d6-9b6d-a5a0e8dd66ae\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mp8lb" Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.914954 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6fc6bd95b4-cpv9r"] Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.917180 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6fc6bd95b4-cpv9r" Jan 21 07:10:36 crc kubenswrapper[4893]: I0121 07:10:36.925189 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l64gj\" (UniqueName: \"kubernetes.io/projected/594278ba-8824-49d6-9b6d-a5a0e8dd66ae-kube-api-access-l64gj\") pod \"nmstate-console-plugin-7754f76f8b-mp8lb\" (UID: \"594278ba-8824-49d6-9b6d-a5a0e8dd66ae\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mp8lb" Jan 21 07:10:37 crc kubenswrapper[4893]: I0121 07:10:37.028935 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-9bmdw" Jan 21 07:10:37 crc kubenswrapper[4893]: I0121 07:10:37.033903 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d2f7a064-7413-4b20-bbe9-9849e2647a7f-console-serving-cert\") pod \"console-6fc6bd95b4-cpv9r\" (UID: \"d2f7a064-7413-4b20-bbe9-9849e2647a7f\") " pod="openshift-console/console-6fc6bd95b4-cpv9r" Jan 21 07:10:37 crc kubenswrapper[4893]: I0121 07:10:37.034003 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d2f7a064-7413-4b20-bbe9-9849e2647a7f-console-config\") pod \"console-6fc6bd95b4-cpv9r\" (UID: \"d2f7a064-7413-4b20-bbe9-9849e2647a7f\") " pod="openshift-console/console-6fc6bd95b4-cpv9r" Jan 21 07:10:37 crc kubenswrapper[4893]: I0121 07:10:37.034036 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d2f7a064-7413-4b20-bbe9-9849e2647a7f-service-ca\") pod \"console-6fc6bd95b4-cpv9r\" (UID: \"d2f7a064-7413-4b20-bbe9-9849e2647a7f\") " pod="openshift-console/console-6fc6bd95b4-cpv9r" Jan 21 07:10:37 crc kubenswrapper[4893]: I0121 07:10:37.034059 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2f7a064-7413-4b20-bbe9-9849e2647a7f-trusted-ca-bundle\") pod \"console-6fc6bd95b4-cpv9r\" (UID: \"d2f7a064-7413-4b20-bbe9-9849e2647a7f\") " pod="openshift-console/console-6fc6bd95b4-cpv9r" Jan 21 07:10:37 crc kubenswrapper[4893]: I0121 07:10:37.034077 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d2f7a064-7413-4b20-bbe9-9849e2647a7f-oauth-serving-cert\") pod \"console-6fc6bd95b4-cpv9r\" (UID: \"d2f7a064-7413-4b20-bbe9-9849e2647a7f\") " pod="openshift-console/console-6fc6bd95b4-cpv9r" Jan 21 07:10:37 crc kubenswrapper[4893]: I0121 07:10:37.034114 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvmhs\" (UniqueName: \"kubernetes.io/projected/d2f7a064-7413-4b20-bbe9-9849e2647a7f-kube-api-access-kvmhs\") pod \"console-6fc6bd95b4-cpv9r\" (UID: \"d2f7a064-7413-4b20-bbe9-9849e2647a7f\") " pod="openshift-console/console-6fc6bd95b4-cpv9r" Jan 21 07:10:37 crc kubenswrapper[4893]: I0121 07:10:37.034147 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d2f7a064-7413-4b20-bbe9-9849e2647a7f-console-oauth-config\") pod \"console-6fc6bd95b4-cpv9r\" (UID: \"d2f7a064-7413-4b20-bbe9-9849e2647a7f\") " pod="openshift-console/console-6fc6bd95b4-cpv9r" Jan 21 07:10:37 crc kubenswrapper[4893]: I0121 07:10:37.034575 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mp8lb"
Jan 21 07:10:37 crc kubenswrapper[4893]: I0121 07:10:37.048748 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6fc6bd95b4-cpv9r"]
Jan 21 07:10:37 crc kubenswrapper[4893]: I0121 07:10:37.135093 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d2f7a064-7413-4b20-bbe9-9849e2647a7f-console-config\") pod \"console-6fc6bd95b4-cpv9r\" (UID: \"d2f7a064-7413-4b20-bbe9-9849e2647a7f\") " pod="openshift-console/console-6fc6bd95b4-cpv9r"
Jan 21 07:10:37 crc kubenswrapper[4893]: I0121 07:10:37.135396 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d2f7a064-7413-4b20-bbe9-9849e2647a7f-service-ca\") pod \"console-6fc6bd95b4-cpv9r\" (UID: \"d2f7a064-7413-4b20-bbe9-9849e2647a7f\") " pod="openshift-console/console-6fc6bd95b4-cpv9r"
Jan 21 07:10:37 crc kubenswrapper[4893]: I0121 07:10:37.135412 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2f7a064-7413-4b20-bbe9-9849e2647a7f-trusted-ca-bundle\") pod \"console-6fc6bd95b4-cpv9r\" (UID: \"d2f7a064-7413-4b20-bbe9-9849e2647a7f\") " pod="openshift-console/console-6fc6bd95b4-cpv9r"
Jan 21 07:10:37 crc kubenswrapper[4893]: I0121 07:10:37.135426 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d2f7a064-7413-4b20-bbe9-9849e2647a7f-oauth-serving-cert\") pod \"console-6fc6bd95b4-cpv9r\" (UID: \"d2f7a064-7413-4b20-bbe9-9849e2647a7f\") " pod="openshift-console/console-6fc6bd95b4-cpv9r"
Jan 21 07:10:37 crc kubenswrapper[4893]: I0121 07:10:37.135463 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvmhs\" (UniqueName: \"kubernetes.io/projected/d2f7a064-7413-4b20-bbe9-9849e2647a7f-kube-api-access-kvmhs\") pod \"console-6fc6bd95b4-cpv9r\" (UID: \"d2f7a064-7413-4b20-bbe9-9849e2647a7f\") " pod="openshift-console/console-6fc6bd95b4-cpv9r"
Jan 21 07:10:37 crc kubenswrapper[4893]: I0121 07:10:37.135491 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d2f7a064-7413-4b20-bbe9-9849e2647a7f-console-oauth-config\") pod \"console-6fc6bd95b4-cpv9r\" (UID: \"d2f7a064-7413-4b20-bbe9-9849e2647a7f\") " pod="openshift-console/console-6fc6bd95b4-cpv9r"
Jan 21 07:10:37 crc kubenswrapper[4893]: I0121 07:10:37.135528 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d2f7a064-7413-4b20-bbe9-9849e2647a7f-console-serving-cert\") pod \"console-6fc6bd95b4-cpv9r\" (UID: \"d2f7a064-7413-4b20-bbe9-9849e2647a7f\") " pod="openshift-console/console-6fc6bd95b4-cpv9r"
Jan 21 07:10:37 crc kubenswrapper[4893]: I0121 07:10:37.137659 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2f7a064-7413-4b20-bbe9-9849e2647a7f-trusted-ca-bundle\") pod \"console-6fc6bd95b4-cpv9r\" (UID: \"d2f7a064-7413-4b20-bbe9-9849e2647a7f\") " pod="openshift-console/console-6fc6bd95b4-cpv9r"
Jan 21 07:10:37 crc kubenswrapper[4893]: I0121 07:10:37.137717 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d2f7a064-7413-4b20-bbe9-9849e2647a7f-console-config\") pod \"console-6fc6bd95b4-cpv9r\" (UID: \"d2f7a064-7413-4b20-bbe9-9849e2647a7f\") " pod="openshift-console/console-6fc6bd95b4-cpv9r"
Jan 21 07:10:37 crc kubenswrapper[4893]: I0121 07:10:37.137896 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d2f7a064-7413-4b20-bbe9-9849e2647a7f-service-ca\") pod \"console-6fc6bd95b4-cpv9r\" (UID: \"d2f7a064-7413-4b20-bbe9-9849e2647a7f\") " pod="openshift-console/console-6fc6bd95b4-cpv9r"
Jan 21 07:10:37 crc kubenswrapper[4893]: I0121 07:10:37.137949 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d2f7a064-7413-4b20-bbe9-9849e2647a7f-oauth-serving-cert\") pod \"console-6fc6bd95b4-cpv9r\" (UID: \"d2f7a064-7413-4b20-bbe9-9849e2647a7f\") " pod="openshift-console/console-6fc6bd95b4-cpv9r"
Jan 21 07:10:37 crc kubenswrapper[4893]: I0121 07:10:37.141922 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d2f7a064-7413-4b20-bbe9-9849e2647a7f-console-oauth-config\") pod \"console-6fc6bd95b4-cpv9r\" (UID: \"d2f7a064-7413-4b20-bbe9-9849e2647a7f\") " pod="openshift-console/console-6fc6bd95b4-cpv9r"
Jan 21 07:10:37 crc kubenswrapper[4893]: I0121 07:10:37.243123 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d2f7a064-7413-4b20-bbe9-9849e2647a7f-console-serving-cert\") pod \"console-6fc6bd95b4-cpv9r\" (UID: \"d2f7a064-7413-4b20-bbe9-9849e2647a7f\") " pod="openshift-console/console-6fc6bd95b4-cpv9r"
Jan 21 07:10:37 crc kubenswrapper[4893]: I0121 07:10:37.245200 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvmhs\" (UniqueName: \"kubernetes.io/projected/d2f7a064-7413-4b20-bbe9-9849e2647a7f-kube-api-access-kvmhs\") pod \"console-6fc6bd95b4-cpv9r\" (UID: \"d2f7a064-7413-4b20-bbe9-9849e2647a7f\") " pod="openshift-console/console-6fc6bd95b4-cpv9r"
Jan 21 07:10:37 crc kubenswrapper[4893]: I0121 07:10:37.249758 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-9bmdw" event={"ID":"dea30d59-92e1-4ebf-ba1a-5e5f18cbeb61","Type":"ContainerStarted","Data":"eea9fb94d0ad2e94eb06a96f4624382a0d98d41533bf5ccc765f8d92df47a99c"}
Jan 21 07:10:37 crc kubenswrapper[4893]: I0121 07:10:37.333171 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6fc6bd95b4-cpv9r"
Jan 21 07:10:37 crc kubenswrapper[4893]: I0121 07:10:37.369430 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-5l8kf"]
Jan 21 07:10:37 crc kubenswrapper[4893]: W0121 07:10:37.381941 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf7d5aed_8a1d_4936_a9a2_75d9d2228de5.slice/crio-56910d71b84a588a072aa2c1882e9b6fb7197bdf21cb7b2148b484526d7699ed WatchSource:0}: Error finding container 56910d71b84a588a072aa2c1882e9b6fb7197bdf21cb7b2148b484526d7699ed: Status 404 returned error can't find the container with id 56910d71b84a588a072aa2c1882e9b6fb7197bdf21cb7b2148b484526d7699ed
Jan 21 07:10:37 crc kubenswrapper[4893]: I0121 07:10:37.710548 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-j6t2h"]
Jan 21 07:10:37 crc kubenswrapper[4893]: W0121 07:10:37.720145 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod187ac6cf_a917_4345_983b_a806aa8906b9.slice/crio-2eccf2f6ad775091a622c05cbac1d52444ce27ba42210558e74e529210071e26 WatchSource:0}: Error finding container 2eccf2f6ad775091a622c05cbac1d52444ce27ba42210558e74e529210071e26: Status 404 returned error can't find the container with id 2eccf2f6ad775091a622c05cbac1d52444ce27ba42210558e74e529210071e26
Jan 21 07:10:37 crc kubenswrapper[4893]: I0121 07:10:37.738769 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-mp8lb"]
Jan 21 07:10:37 crc kubenswrapper[4893]: W0121 07:10:37.739276 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod098bf00d_a656_4b84_8f75_c92e8d53c870.slice/crio-90d48b4ccd4d705e74567255d27f716451f16fbe388fee2145c969376312fec2 WatchSource:0}: Error finding container 90d48b4ccd4d705e74567255d27f716451f16fbe388fee2145c969376312fec2: Status 404 returned error can't find the container with id 90d48b4ccd4d705e74567255d27f716451f16fbe388fee2145c969376312fec2
Jan 21 07:10:37 crc kubenswrapper[4893]: W0121 07:10:37.740926 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod594278ba_8824_49d6_9b6d_a5a0e8dd66ae.slice/crio-6af24b9948412ffa33dae5e106f5049862ee4467958f8379a5acb00236ec4ed0 WatchSource:0}: Error finding container 6af24b9948412ffa33dae5e106f5049862ee4467958f8379a5acb00236ec4ed0: Status 404 returned error can't find the container with id 6af24b9948412ffa33dae5e106f5049862ee4467958f8379a5acb00236ec4ed0
Jan 21 07:10:37 crc kubenswrapper[4893]: I0121 07:10:37.744619 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rf6vn"]
Jan 21 07:10:37 crc kubenswrapper[4893]: I0121 07:10:37.830231 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6fc6bd95b4-cpv9r"]
Jan 21 07:10:37 crc kubenswrapper[4893]: W0121 07:10:37.840520 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2f7a064_7413_4b20_bbe9_9849e2647a7f.slice/crio-fff79db83901f76929c212d45156da41bb2245b2311351687cab3dab312021e7 WatchSource:0}: Error finding container fff79db83901f76929c212d45156da41bb2245b2311351687cab3dab312021e7: Status 404 returned error can't find the container with id fff79db83901f76929c212d45156da41bb2245b2311351687cab3dab312021e7
Jan 21 07:10:38 crc kubenswrapper[4893]: I0121 07:10:38.256851 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-5l8kf" event={"ID":"df7d5aed-8a1d-4936-a9a2-75d9d2228de5","Type":"ContainerStarted","Data":"56910d71b84a588a072aa2c1882e9b6fb7197bdf21cb7b2148b484526d7699ed"}
Jan 21 07:10:38 crc kubenswrapper[4893]: I0121 07:10:38.258294 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-j6t2h" event={"ID":"187ac6cf-a917-4345-983b-a806aa8906b9","Type":"ContainerStarted","Data":"2eccf2f6ad775091a622c05cbac1d52444ce27ba42210558e74e529210071e26"}
Jan 21 07:10:38 crc kubenswrapper[4893]: I0121 07:10:38.260249 4893 generic.go:334] "Generic (PLEG): container finished" podID="098bf00d-a656-4b84-8f75-c92e8d53c870" containerID="9b8d77db98e932689a977de26a76a1c71b7582a9557425d98b265e798ebcd422" exitCode=0
Jan 21 07:10:38 crc kubenswrapper[4893]: I0121 07:10:38.260346 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rf6vn" event={"ID":"098bf00d-a656-4b84-8f75-c92e8d53c870","Type":"ContainerDied","Data":"9b8d77db98e932689a977de26a76a1c71b7582a9557425d98b265e798ebcd422"}
Jan 21 07:10:38 crc kubenswrapper[4893]: I0121 07:10:38.260408 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rf6vn" event={"ID":"098bf00d-a656-4b84-8f75-c92e8d53c870","Type":"ContainerStarted","Data":"90d48b4ccd4d705e74567255d27f716451f16fbe388fee2145c969376312fec2"}
Jan 21 07:10:38 crc kubenswrapper[4893]: I0121 07:10:38.266013 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6fc6bd95b4-cpv9r" event={"ID":"d2f7a064-7413-4b20-bbe9-9849e2647a7f","Type":"ContainerStarted","Data":"b4484717192b5d73ab16084921625a039bcdfdf4e10dad5b9d092f1bd9a95eda"}
Jan 21 07:10:38 crc kubenswrapper[4893]: I0121 07:10:38.266117 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6fc6bd95b4-cpv9r" event={"ID":"d2f7a064-7413-4b20-bbe9-9849e2647a7f","Type":"ContainerStarted","Data":"fff79db83901f76929c212d45156da41bb2245b2311351687cab3dab312021e7"}
Jan 21 07:10:38 crc kubenswrapper[4893]: I0121 07:10:38.267945 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mp8lb" event={"ID":"594278ba-8824-49d6-9b6d-a5a0e8dd66ae","Type":"ContainerStarted","Data":"6af24b9948412ffa33dae5e106f5049862ee4467958f8379a5acb00236ec4ed0"}
Jan 21 07:10:38 crc kubenswrapper[4893]: I0121 07:10:38.300757 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6fc6bd95b4-cpv9r" podStartSLOduration=2.300713025 podStartE2EDuration="2.300713025s" podCreationTimestamp="2026-01-21 07:10:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:10:38.299645375 +0000 UTC m=+979.529991287" watchObservedRunningTime="2026-01-21 07:10:38.300713025 +0000 UTC m=+979.531058927"
Jan 21 07:10:38 crc kubenswrapper[4893]: I0121 07:10:38.528984 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-p7xdl"
Jan 21 07:10:38 crc kubenswrapper[4893]: I0121 07:10:38.578479 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-p7xdl"
Jan 21 07:10:40 crc kubenswrapper[4893]: I0121 07:10:40.924760 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p7xdl"]
Jan 21 07:10:40 crc kubenswrapper[4893]: I0121 07:10:40.925289 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-p7xdl" podUID="c452b1a4-8e07-4d7b-a683-b310ec46eb94" containerName="registry-server" containerID="cri-o://8b45c91ad8aef0af84c6ea577b5bd3695102b3ea56dcc05e5ea1c63641637666" gracePeriod=2
Jan 21 07:10:41 crc kubenswrapper[4893]: I0121 07:10:41.297323 4893 generic.go:334] "Generic (PLEG): container finished" podID="c452b1a4-8e07-4d7b-a683-b310ec46eb94" containerID="8b45c91ad8aef0af84c6ea577b5bd3695102b3ea56dcc05e5ea1c63641637666" exitCode=0
Jan 21 07:10:41 crc kubenswrapper[4893]: I0121 07:10:41.297389 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7xdl" event={"ID":"c452b1a4-8e07-4d7b-a683-b310ec46eb94","Type":"ContainerDied","Data":"8b45c91ad8aef0af84c6ea577b5bd3695102b3ea56dcc05e5ea1c63641637666"}
Jan 21 07:10:41 crc kubenswrapper[4893]: I0121 07:10:41.510275 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p7xdl"
Jan 21 07:10:41 crc kubenswrapper[4893]: I0121 07:10:41.725529 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6drjj\" (UniqueName: \"kubernetes.io/projected/c452b1a4-8e07-4d7b-a683-b310ec46eb94-kube-api-access-6drjj\") pod \"c452b1a4-8e07-4d7b-a683-b310ec46eb94\" (UID: \"c452b1a4-8e07-4d7b-a683-b310ec46eb94\") "
Jan 21 07:10:41 crc kubenswrapper[4893]: I0121 07:10:41.725618 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c452b1a4-8e07-4d7b-a683-b310ec46eb94-catalog-content\") pod \"c452b1a4-8e07-4d7b-a683-b310ec46eb94\" (UID: \"c452b1a4-8e07-4d7b-a683-b310ec46eb94\") "
Jan 21 07:10:41 crc kubenswrapper[4893]: I0121 07:10:41.725655 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c452b1a4-8e07-4d7b-a683-b310ec46eb94-utilities\") pod \"c452b1a4-8e07-4d7b-a683-b310ec46eb94\" (UID: \"c452b1a4-8e07-4d7b-a683-b310ec46eb94\") "
Jan 21 07:10:41 crc kubenswrapper[4893]: I0121 07:10:41.731822 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c452b1a4-8e07-4d7b-a683-b310ec46eb94-kube-api-access-6drjj" (OuterVolumeSpecName: "kube-api-access-6drjj") pod "c452b1a4-8e07-4d7b-a683-b310ec46eb94" (UID: "c452b1a4-8e07-4d7b-a683-b310ec46eb94"). InnerVolumeSpecName "kube-api-access-6drjj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 07:10:41 crc kubenswrapper[4893]: I0121 07:10:41.754122 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c452b1a4-8e07-4d7b-a683-b310ec46eb94-utilities" (OuterVolumeSpecName: "utilities") pod "c452b1a4-8e07-4d7b-a683-b310ec46eb94" (UID: "c452b1a4-8e07-4d7b-a683-b310ec46eb94"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 07:10:41 crc kubenswrapper[4893]: I0121 07:10:41.827651 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6drjj\" (UniqueName: \"kubernetes.io/projected/c452b1a4-8e07-4d7b-a683-b310ec46eb94-kube-api-access-6drjj\") on node \"crc\" DevicePath \"\""
Jan 21 07:10:41 crc kubenswrapper[4893]: I0121 07:10:41.828425 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c452b1a4-8e07-4d7b-a683-b310ec46eb94-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 07:10:41 crc kubenswrapper[4893]: I0121 07:10:41.889055 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c452b1a4-8e07-4d7b-a683-b310ec46eb94-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c452b1a4-8e07-4d7b-a683-b310ec46eb94" (UID: "c452b1a4-8e07-4d7b-a683-b310ec46eb94"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 07:10:41 crc kubenswrapper[4893]: I0121 07:10:41.929862 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c452b1a4-8e07-4d7b-a683-b310ec46eb94-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 07:10:42 crc kubenswrapper[4893]: I0121 07:10:42.305927 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7xdl" event={"ID":"c452b1a4-8e07-4d7b-a683-b310ec46eb94","Type":"ContainerDied","Data":"b72f8f86a08eb4e0ada366f3e01941574dc9728af21f352be083e13a8a147728"}
Jan 21 07:10:42 crc kubenswrapper[4893]: I0121 07:10:42.306014 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p7xdl"
Jan 21 07:10:42 crc kubenswrapper[4893]: I0121 07:10:42.306046 4893 scope.go:117] "RemoveContainer" containerID="8b45c91ad8aef0af84c6ea577b5bd3695102b3ea56dcc05e5ea1c63641637666"
Jan 21 07:10:42 crc kubenswrapper[4893]: I0121 07:10:42.307445 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-5l8kf" event={"ID":"df7d5aed-8a1d-4936-a9a2-75d9d2228de5","Type":"ContainerStarted","Data":"4d82cee497054c711b110df9a0eb3c00cedb8d4f03ad57b334e4da4109531946"}
Jan 21 07:10:42 crc kubenswrapper[4893]: I0121 07:10:42.308854 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-j6t2h" event={"ID":"187ac6cf-a917-4345-983b-a806aa8906b9","Type":"ContainerStarted","Data":"6ab5ad2737e8c6f4582a1e16e6931a234b04997d4b463a3a563ab9135ed9e044"}
Jan 21 07:10:42 crc kubenswrapper[4893]: I0121 07:10:42.308968 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-j6t2h"
Jan 21 07:10:42 crc kubenswrapper[4893]: I0121 07:10:42.311650 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-9bmdw" event={"ID":"dea30d59-92e1-4ebf-ba1a-5e5f18cbeb61","Type":"ContainerStarted","Data":"76b5b1163daca554fa6ace68b3d987baf0d2615fe91da5493624b7525106ac66"}
Jan 21 07:10:42 crc kubenswrapper[4893]: I0121 07:10:42.311709 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-9bmdw"
Jan 21 07:10:42 crc kubenswrapper[4893]: I0121 07:10:42.314830 4893 generic.go:334] "Generic (PLEG): container finished" podID="098bf00d-a656-4b84-8f75-c92e8d53c870" containerID="a0ae35d00aff8c23315b7d4bd0519f29db98d96c4d5132aea6067157547094ad" exitCode=0
Jan 21 07:10:42 crc kubenswrapper[4893]: I0121 07:10:42.314899 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rf6vn" event={"ID":"098bf00d-a656-4b84-8f75-c92e8d53c870","Type":"ContainerDied","Data":"a0ae35d00aff8c23315b7d4bd0519f29db98d96c4d5132aea6067157547094ad"}
Jan 21 07:10:42 crc kubenswrapper[4893]: I0121 07:10:42.317315 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mp8lb" event={"ID":"594278ba-8824-49d6-9b6d-a5a0e8dd66ae","Type":"ContainerStarted","Data":"de69da68c6f49e41284ca8ce97b997f7a79b5553778f32867a846053e3e94ef7"}
Jan 21 07:10:42 crc kubenswrapper[4893]: I0121 07:10:42.323640 4893 scope.go:117] "RemoveContainer" containerID="3e45dfceeb80ce0786520740073131bc47af5ccc0e28aa6c14c07cc6e7c4ccca"
Jan 21 07:10:42 crc kubenswrapper[4893]: I0121 07:10:42.333240 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-j6t2h" podStartSLOduration=2.8410908409999998 podStartE2EDuration="6.333218928s" podCreationTimestamp="2026-01-21 07:10:36 +0000 UTC" firstStartedPulling="2026-01-21 07:10:37.725452309 +0000 UTC m=+978.955798211" lastFinishedPulling="2026-01-21 07:10:41.217580396 +0000 UTC m=+982.447926298" observedRunningTime="2026-01-21 07:10:42.328240726 +0000 UTC m=+983.558586628" watchObservedRunningTime="2026-01-21 07:10:42.333218928 +0000 UTC m=+983.563564830"
Jan 21 07:10:42 crc kubenswrapper[4893]: I0121 07:10:42.361054 4893 scope.go:117] "RemoveContainer" containerID="8f712919cdae28cbf4cd96a9579fcbbc2b9c9839197572cf5709cea62ad271d4"
Jan 21 07:10:42 crc kubenswrapper[4893]: I0121 07:10:42.380477 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-9bmdw" podStartSLOduration=2.285406616 podStartE2EDuration="6.380457584s" podCreationTimestamp="2026-01-21 07:10:36 +0000 UTC" firstStartedPulling="2026-01-21 07:10:37.124487374 +0000 UTC m=+978.354833276" lastFinishedPulling="2026-01-21 07:10:41.219538342 +0000 UTC m=+982.449884244" observedRunningTime="2026-01-21 07:10:42.377429827 +0000 UTC m=+983.607775729" watchObservedRunningTime="2026-01-21 07:10:42.380457584 +0000 UTC m=+983.610803486"
Jan 21 07:10:42 crc kubenswrapper[4893]: I0121 07:10:42.404590 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p7xdl"]
Jan 21 07:10:42 crc kubenswrapper[4893]: I0121 07:10:42.418132 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-p7xdl"]
Jan 21 07:10:42 crc kubenswrapper[4893]: I0121 07:10:42.420558 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mp8lb" podStartSLOduration=2.940940397 podStartE2EDuration="6.420526384s" podCreationTimestamp="2026-01-21 07:10:36 +0000 UTC" firstStartedPulling="2026-01-21 07:10:37.743661401 +0000 UTC m=+978.974007303" lastFinishedPulling="2026-01-21 07:10:41.223247388 +0000 UTC m=+982.453593290" observedRunningTime="2026-01-21 07:10:42.406711127 +0000 UTC m=+983.637057029" watchObservedRunningTime="2026-01-21 07:10:42.420526384 +0000 UTC m=+983.650872286"
Jan 21 07:10:42 crc kubenswrapper[4893]: I0121 07:10:42.739692 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s4w8n"]
Jan 21 07:10:42 crc kubenswrapper[4893]: E0121 07:10:42.740329 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c452b1a4-8e07-4d7b-a683-b310ec46eb94" containerName="registry-server"
Jan 21 07:10:42 crc kubenswrapper[4893]: I0121 07:10:42.740346 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="c452b1a4-8e07-4d7b-a683-b310ec46eb94" containerName="registry-server"
Jan 21 07:10:42 crc kubenswrapper[4893]: E0121 07:10:42.740359 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c452b1a4-8e07-4d7b-a683-b310ec46eb94" containerName="extract-content"
Jan 21 07:10:42 crc kubenswrapper[4893]: I0121 07:10:42.740367 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="c452b1a4-8e07-4d7b-a683-b310ec46eb94" containerName="extract-content"
Jan 21 07:10:42 crc kubenswrapper[4893]: E0121 07:10:42.740385 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c452b1a4-8e07-4d7b-a683-b310ec46eb94" containerName="extract-utilities"
Jan 21 07:10:42 crc kubenswrapper[4893]: I0121 07:10:42.740396 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="c452b1a4-8e07-4d7b-a683-b310ec46eb94" containerName="extract-utilities"
Jan 21 07:10:42 crc kubenswrapper[4893]: I0121 07:10:42.740529 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="c452b1a4-8e07-4d7b-a683-b310ec46eb94" containerName="registry-server"
Jan 21 07:10:42 crc kubenswrapper[4893]: I0121 07:10:42.741656 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s4w8n"
Jan 21 07:10:42 crc kubenswrapper[4893]: I0121 07:10:42.774464 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s4w8n"]
Jan 21 07:10:42 crc kubenswrapper[4893]: I0121 07:10:42.842654 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwrfx\" (UniqueName: \"kubernetes.io/projected/55d8a84a-c3bb-4ad7-b0b1-2353a801c139-kube-api-access-gwrfx\") pod \"community-operators-s4w8n\" (UID: \"55d8a84a-c3bb-4ad7-b0b1-2353a801c139\") " pod="openshift-marketplace/community-operators-s4w8n"
Jan 21 07:10:42 crc kubenswrapper[4893]: I0121 07:10:42.842753 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55d8a84a-c3bb-4ad7-b0b1-2353a801c139-utilities\") pod \"community-operators-s4w8n\" (UID: \"55d8a84a-c3bb-4ad7-b0b1-2353a801c139\") " pod="openshift-marketplace/community-operators-s4w8n"
Jan 21 07:10:42 crc kubenswrapper[4893]: I0121 07:10:42.842779 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55d8a84a-c3bb-4ad7-b0b1-2353a801c139-catalog-content\") pod \"community-operators-s4w8n\" (UID: \"55d8a84a-c3bb-4ad7-b0b1-2353a801c139\") " pod="openshift-marketplace/community-operators-s4w8n"
Jan 21 07:10:42 crc kubenswrapper[4893]: I0121 07:10:42.945277 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwrfx\" (UniqueName: \"kubernetes.io/projected/55d8a84a-c3bb-4ad7-b0b1-2353a801c139-kube-api-access-gwrfx\") pod \"community-operators-s4w8n\" (UID: \"55d8a84a-c3bb-4ad7-b0b1-2353a801c139\") " pod="openshift-marketplace/community-operators-s4w8n"
Jan 21 07:10:42 crc kubenswrapper[4893]: I0121 07:10:42.945351 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55d8a84a-c3bb-4ad7-b0b1-2353a801c139-utilities\") pod \"community-operators-s4w8n\" (UID: \"55d8a84a-c3bb-4ad7-b0b1-2353a801c139\") " pod="openshift-marketplace/community-operators-s4w8n"
Jan 21 07:10:42 crc kubenswrapper[4893]: I0121 07:10:42.945375 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55d8a84a-c3bb-4ad7-b0b1-2353a801c139-catalog-content\") pod \"community-operators-s4w8n\" (UID: \"55d8a84a-c3bb-4ad7-b0b1-2353a801c139\") " pod="openshift-marketplace/community-operators-s4w8n"
Jan 21 07:10:42 crc kubenswrapper[4893]: I0121 07:10:42.946044 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55d8a84a-c3bb-4ad7-b0b1-2353a801c139-utilities\") pod \"community-operators-s4w8n\" (UID: \"55d8a84a-c3bb-4ad7-b0b1-2353a801c139\") " pod="openshift-marketplace/community-operators-s4w8n"
Jan 21 07:10:42 crc kubenswrapper[4893]: I0121 07:10:42.946252 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55d8a84a-c3bb-4ad7-b0b1-2353a801c139-catalog-content\") pod \"community-operators-s4w8n\" (UID: \"55d8a84a-c3bb-4ad7-b0b1-2353a801c139\") " pod="openshift-marketplace/community-operators-s4w8n"
Jan 21 07:10:42 crc kubenswrapper[4893]: I0121 07:10:42.979562 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwrfx\" (UniqueName: \"kubernetes.io/projected/55d8a84a-c3bb-4ad7-b0b1-2353a801c139-kube-api-access-gwrfx\") pod \"community-operators-s4w8n\" (UID: \"55d8a84a-c3bb-4ad7-b0b1-2353a801c139\") " pod="openshift-marketplace/community-operators-s4w8n"
Jan 21 07:10:43 crc kubenswrapper[4893]: I0121 07:10:43.109327 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s4w8n"
Jan 21 07:10:43 crc kubenswrapper[4893]: I0121 07:10:43.357979 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rf6vn" event={"ID":"098bf00d-a656-4b84-8f75-c92e8d53c870","Type":"ContainerStarted","Data":"b46fbbd11293f63f4c784e2161b4d8c570c39ec75bf9260785ac32d5658e773b"}
Jan 21 07:10:43 crc kubenswrapper[4893]: I0121 07:10:43.384101 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rf6vn" podStartSLOduration=2.963169606 podStartE2EDuration="7.384080023s" podCreationTimestamp="2026-01-21 07:10:36 +0000 UTC" firstStartedPulling="2026-01-21 07:10:38.265427544 +0000 UTC m=+979.495773436" lastFinishedPulling="2026-01-21 07:10:42.686337961 +0000 UTC m=+983.916683853" observedRunningTime="2026-01-21 07:10:43.38257661 +0000 UTC m=+984.612922512" watchObservedRunningTime="2026-01-21 07:10:43.384080023 +0000 UTC m=+984.614425925"
Jan 21 07:10:43 crc kubenswrapper[4893]: I0121 07:10:43.490718 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s4w8n"]
Jan 21 07:10:43 crc kubenswrapper[4893]: I0121 07:10:43.596743 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c452b1a4-8e07-4d7b-a683-b310ec46eb94" path="/var/lib/kubelet/pods/c452b1a4-8e07-4d7b-a683-b310ec46eb94/volumes"
Jan 21 07:10:44 crc kubenswrapper[4893]: I0121 07:10:44.369618 4893 generic.go:334] "Generic (PLEG): container finished" podID="55d8a84a-c3bb-4ad7-b0b1-2353a801c139" containerID="7ea0e595aec5f886758cfcbc5f23461e909bae17cfadda4297836ac50a14fb02" exitCode=0
Jan 21 07:10:44 crc kubenswrapper[4893]: I0121 07:10:44.369818 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s4w8n" event={"ID":"55d8a84a-c3bb-4ad7-b0b1-2353a801c139","Type":"ContainerDied","Data":"7ea0e595aec5f886758cfcbc5f23461e909bae17cfadda4297836ac50a14fb02"}
Jan 21 07:10:44 crc kubenswrapper[4893]: I0121 07:10:44.370054 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s4w8n" event={"ID":"55d8a84a-c3bb-4ad7-b0b1-2353a801c139","Type":"ContainerStarted","Data":"ed2f2ce1e276df029bc2d59d7c46464ec66f3fc841dd821b07339711c9d59484"}
Jan 21 07:10:45 crc kubenswrapper[4893]: I0121 07:10:45.378067 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-5l8kf" event={"ID":"df7d5aed-8a1d-4936-a9a2-75d9d2228de5","Type":"ContainerStarted","Data":"5339faa23e4727148b7d44417bf2b37d8480b032ffda3439c2dec73bc4efd727"}
Jan 21 07:10:45 crc kubenswrapper[4893]: I0121 07:10:45.404246 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-5l8kf" podStartSLOduration=2.143842305 podStartE2EDuration="9.404173931s" podCreationTimestamp="2026-01-21 07:10:36 +0000 UTC" firstStartedPulling="2026-01-21 07:10:37.384812614 +0000 UTC m=+978.615158526" lastFinishedPulling="2026-01-21 07:10:44.64514425 +0000 UTC m=+985.875490152" observedRunningTime="2026-01-21 07:10:45.396237733 +0000 UTC m=+986.626583645" watchObservedRunningTime="2026-01-21 07:10:45.404173931 +0000 UTC m=+986.634519833"
Jan 21 07:10:46 crc kubenswrapper[4893]: I0121 07:10:46.386748 4893 generic.go:334] "Generic (PLEG): container finished" podID="55d8a84a-c3bb-4ad7-b0b1-2353a801c139" containerID="fc99edaa6f177382865b1ff789e795c12f6a186834ae5253d99f9127b17a0688" exitCode=0
Jan 21 07:10:46 crc kubenswrapper[4893]: I0121 07:10:46.388303 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s4w8n" event={"ID":"55d8a84a-c3bb-4ad7-b0b1-2353a801c139","Type":"ContainerDied","Data":"fc99edaa6f177382865b1ff789e795c12f6a186834ae5253d99f9127b17a0688"}
Jan 21 07:10:46 crc kubenswrapper[4893]: I0121 07:10:46.906227 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rf6vn"
Jan 21 07:10:46 crc kubenswrapper[4893]: I0121 07:10:46.906500 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rf6vn"
Jan 21 07:10:46 crc kubenswrapper[4893]: I0121 07:10:46.948528 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rf6vn"
Jan 21 07:10:47 crc kubenswrapper[4893]: I0121 07:10:47.050546 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-9bmdw"
Jan 21 07:10:47 crc kubenswrapper[4893]: I0121 07:10:47.355600 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6fc6bd95b4-cpv9r"
Jan 21 07:10:47 crc kubenswrapper[4893]: I0121 07:10:47.355712 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6fc6bd95b4-cpv9r"
Jan 21 07:10:47 crc kubenswrapper[4893]: I0121 07:10:47.361441 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6fc6bd95b4-cpv9r"
Jan 21 07:10:47 crc kubenswrapper[4893]: I0121 07:10:47.397173 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s4w8n" event={"ID":"55d8a84a-c3bb-4ad7-b0b1-2353a801c139","Type":"ContainerStarted","Data":"34808449e49f36ceb4544386cf82ad03f3a66a00a213bb5413ea79c55115ce8a"}
Jan 21 07:10:47 crc kubenswrapper[4893]: I0121 07:10:47.401493 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6fc6bd95b4-cpv9r"
Jan 21 07:10:47 crc kubenswrapper[4893]: I0121 07:10:47.420418 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s4w8n" podStartSLOduration=3.213111778 podStartE2EDuration="5.420391245s" podCreationTimestamp="2026-01-21 07:10:42 +0000 UTC" firstStartedPulling="2026-01-21 07:10:44.573284078 +0000 UTC m=+985.803629970" lastFinishedPulling="2026-01-21 07:10:46.780563535 +0000 UTC m=+988.010909437" observedRunningTime="2026-01-21 07:10:47.418399168 +0000 UTC m=+988.648745070" watchObservedRunningTime="2026-01-21 07:10:47.420391245 +0000 UTC m=+988.650737147"
Jan 21 07:10:47 crc kubenswrapper[4893]: I0121 07:10:47.444652 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rf6vn"
Jan 21 07:10:47 crc kubenswrapper[4893]: I0121 07:10:47.470384 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-2k4nh"]
Jan 21 07:10:49 crc kubenswrapper[4893]: I0121 07:10:49.326024 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rf6vn"]
Jan 21 07:10:49 crc kubenswrapper[4893]: I0121 07:10:49.409037 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rf6vn" podUID="098bf00d-a656-4b84-8f75-c92e8d53c870" containerName="registry-server" containerID="cri-o://b46fbbd11293f63f4c784e2161b4d8c570c39ec75bf9260785ac32d5658e773b" gracePeriod=2
Jan 21 07:10:49 crc kubenswrapper[4893]: I0121 07:10:49.933609 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rf6vn"
Jan 21 07:10:50 crc kubenswrapper[4893]: I0121 07:10:50.070387 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/098bf00d-a656-4b84-8f75-c92e8d53c870-utilities\") pod \"098bf00d-a656-4b84-8f75-c92e8d53c870\" (UID: \"098bf00d-a656-4b84-8f75-c92e8d53c870\") "
Jan 21 07:10:50 crc kubenswrapper[4893]: I0121 07:10:50.070999 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtc7j\" (UniqueName: \"kubernetes.io/projected/098bf00d-a656-4b84-8f75-c92e8d53c870-kube-api-access-rtc7j\") pod \"098bf00d-a656-4b84-8f75-c92e8d53c870\" (UID: \"098bf00d-a656-4b84-8f75-c92e8d53c870\") "
Jan 21 07:10:50 crc kubenswrapper[4893]: I0121 07:10:50.071101 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/098bf00d-a656-4b84-8f75-c92e8d53c870-catalog-content\") pod \"098bf00d-a656-4b84-8f75-c92e8d53c870\" (UID: \"098bf00d-a656-4b84-8f75-c92e8d53c870\") "
Jan 21 07:10:50 crc kubenswrapper[4893]: I0121 07:10:50.071618 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/098bf00d-a656-4b84-8f75-c92e8d53c870-utilities" (OuterVolumeSpecName: "utilities") pod "098bf00d-a656-4b84-8f75-c92e8d53c870" (UID: "098bf00d-a656-4b84-8f75-c92e8d53c870"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 07:10:50 crc kubenswrapper[4893]: I0121 07:10:50.078735 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/098bf00d-a656-4b84-8f75-c92e8d53c870-kube-api-access-rtc7j" (OuterVolumeSpecName: "kube-api-access-rtc7j") pod "098bf00d-a656-4b84-8f75-c92e8d53c870" (UID: "098bf00d-a656-4b84-8f75-c92e8d53c870"). InnerVolumeSpecName "kube-api-access-rtc7j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 07:10:50 crc kubenswrapper[4893]: I0121 07:10:50.093573 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/098bf00d-a656-4b84-8f75-c92e8d53c870-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "098bf00d-a656-4b84-8f75-c92e8d53c870" (UID: "098bf00d-a656-4b84-8f75-c92e8d53c870"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 07:10:50 crc kubenswrapper[4893]: I0121 07:10:50.172666 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtc7j\" (UniqueName: \"kubernetes.io/projected/098bf00d-a656-4b84-8f75-c92e8d53c870-kube-api-access-rtc7j\") on node \"crc\" DevicePath \"\""
Jan 21 07:10:50 crc kubenswrapper[4893]: I0121 07:10:50.172804 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/098bf00d-a656-4b84-8f75-c92e8d53c870-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 07:10:50 crc kubenswrapper[4893]: I0121 07:10:50.172817 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/098bf00d-a656-4b84-8f75-c92e8d53c870-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 07:10:50 crc kubenswrapper[4893]: I0121 07:10:50.417514 4893 generic.go:334] "Generic (PLEG): container finished" podID="098bf00d-a656-4b84-8f75-c92e8d53c870" containerID="b46fbbd11293f63f4c784e2161b4d8c570c39ec75bf9260785ac32d5658e773b" exitCode=0
Jan 21 07:10:50 crc kubenswrapper[4893]: I0121 07:10:50.417563 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rf6vn" event={"ID":"098bf00d-a656-4b84-8f75-c92e8d53c870","Type":"ContainerDied","Data":"b46fbbd11293f63f4c784e2161b4d8c570c39ec75bf9260785ac32d5658e773b"}
Jan 21 07:10:50 crc kubenswrapper[4893]: I0121 07:10:50.417594 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rf6vn" event={"ID":"098bf00d-a656-4b84-8f75-c92e8d53c870","Type":"ContainerDied","Data":"90d48b4ccd4d705e74567255d27f716451f16fbe388fee2145c969376312fec2"}
Jan 21 07:10:50 crc kubenswrapper[4893]: I0121 07:10:50.417640 4893 scope.go:117] "RemoveContainer" containerID="b46fbbd11293f63f4c784e2161b4d8c570c39ec75bf9260785ac32d5658e773b"
Jan 21 07:10:50 crc kubenswrapper[4893]: I0121 07:10:50.417778 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rf6vn"
Jan 21 07:10:50 crc kubenswrapper[4893]: I0121 07:10:50.450269 4893 scope.go:117] "RemoveContainer" containerID="a0ae35d00aff8c23315b7d4bd0519f29db98d96c4d5132aea6067157547094ad"
Jan 21 07:10:50 crc kubenswrapper[4893]: I0121 07:10:50.452971 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rf6vn"]
Jan 21 07:10:50 crc kubenswrapper[4893]: I0121 07:10:50.456740 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rf6vn"]
Jan 21 07:10:50 crc kubenswrapper[4893]: I0121 07:10:50.467589 4893 scope.go:117] "RemoveContainer" containerID="9b8d77db98e932689a977de26a76a1c71b7582a9557425d98b265e798ebcd422"
Jan 21 07:10:50 crc kubenswrapper[4893]: I0121 07:10:50.487246 4893 scope.go:117] "RemoveContainer" containerID="b46fbbd11293f63f4c784e2161b4d8c570c39ec75bf9260785ac32d5658e773b"
Jan 21 07:10:50 crc kubenswrapper[4893]: E0121 07:10:50.487814 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b46fbbd11293f63f4c784e2161b4d8c570c39ec75bf9260785ac32d5658e773b\": container with ID starting with b46fbbd11293f63f4c784e2161b4d8c570c39ec75bf9260785ac32d5658e773b not found: ID does not exist" containerID="b46fbbd11293f63f4c784e2161b4d8c570c39ec75bf9260785ac32d5658e773b"
Jan 21 07:10:50 crc kubenswrapper[4893]: I0121 07:10:50.487871 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b46fbbd11293f63f4c784e2161b4d8c570c39ec75bf9260785ac32d5658e773b"} err="failed to get container status \"b46fbbd11293f63f4c784e2161b4d8c570c39ec75bf9260785ac32d5658e773b\": rpc error: code = NotFound desc = could not find container \"b46fbbd11293f63f4c784e2161b4d8c570c39ec75bf9260785ac32d5658e773b\": container with ID starting with b46fbbd11293f63f4c784e2161b4d8c570c39ec75bf9260785ac32d5658e773b not found: ID does not exist"
Jan 21 07:10:50 crc kubenswrapper[4893]: I0121 07:10:50.487903 4893 scope.go:117] "RemoveContainer" containerID="a0ae35d00aff8c23315b7d4bd0519f29db98d96c4d5132aea6067157547094ad"
Jan 21 07:10:50 crc kubenswrapper[4893]: E0121 07:10:50.488259 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0ae35d00aff8c23315b7d4bd0519f29db98d96c4d5132aea6067157547094ad\": container with ID starting with a0ae35d00aff8c23315b7d4bd0519f29db98d96c4d5132aea6067157547094ad not found: ID does not exist" containerID="a0ae35d00aff8c23315b7d4bd0519f29db98d96c4d5132aea6067157547094ad"
Jan 21 07:10:50 crc kubenswrapper[4893]: I0121 07:10:50.488299 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0ae35d00aff8c23315b7d4bd0519f29db98d96c4d5132aea6067157547094ad"} err="failed to get container status \"a0ae35d00aff8c23315b7d4bd0519f29db98d96c4d5132aea6067157547094ad\": rpc error: code = NotFound desc = could not find container \"a0ae35d00aff8c23315b7d4bd0519f29db98d96c4d5132aea6067157547094ad\": container with ID starting with a0ae35d00aff8c23315b7d4bd0519f29db98d96c4d5132aea6067157547094ad not found: ID does not exist"
Jan 21 07:10:50 crc kubenswrapper[4893]: I0121 07:10:50.488331 4893 scope.go:117] "RemoveContainer" containerID="9b8d77db98e932689a977de26a76a1c71b7582a9557425d98b265e798ebcd422"
Jan 21 07:10:50 crc kubenswrapper[4893]: E0121 07:10:50.488585 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b8d77db98e932689a977de26a76a1c71b7582a9557425d98b265e798ebcd422\": container with ID starting with 9b8d77db98e932689a977de26a76a1c71b7582a9557425d98b265e798ebcd422 not found: ID does not exist" containerID="9b8d77db98e932689a977de26a76a1c71b7582a9557425d98b265e798ebcd422"
Jan 21 07:10:50 crc kubenswrapper[4893]: I0121 07:10:50.488616 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b8d77db98e932689a977de26a76a1c71b7582a9557425d98b265e798ebcd422"} err="failed to get container status \"9b8d77db98e932689a977de26a76a1c71b7582a9557425d98b265e798ebcd422\": rpc error: code = NotFound desc = could not find container \"9b8d77db98e932689a977de26a76a1c71b7582a9557425d98b265e798ebcd422\": container with ID starting with 9b8d77db98e932689a977de26a76a1c71b7582a9557425d98b265e798ebcd422 not found: ID does not exist"
Jan 21 07:10:51 crc kubenswrapper[4893]: I0121 07:10:51.588346 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="098bf00d-a656-4b84-8f75-c92e8d53c870" path="/var/lib/kubelet/pods/098bf00d-a656-4b84-8f75-c92e8d53c870/volumes"
Jan 21 07:10:53 crc kubenswrapper[4893]: I0121 07:10:53.109593 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s4w8n"
Jan 21 07:10:53 crc kubenswrapper[4893]: I0121 07:10:53.110105 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s4w8n"
Jan 21 07:10:53 crc kubenswrapper[4893]: I0121 07:10:53.176245 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s4w8n"
Jan 21 07:10:53 crc kubenswrapper[4893]: I0121 07:10:53.483732 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s4w8n"
Jan 21 07:10:54 crc kubenswrapper[4893]: I0121 07:10:54.727243 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s4w8n"]
Jan 21 07:10:55 crc kubenswrapper[4893]: I0121 07:10:55.452917 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-s4w8n" podUID="55d8a84a-c3bb-4ad7-b0b1-2353a801c139" containerName="registry-server" containerID="cri-o://34808449e49f36ceb4544386cf82ad03f3a66a00a213bb5413ea79c55115ce8a" gracePeriod=2
Jan 21 07:10:56 crc kubenswrapper[4893]: I0121 07:10:56.186113 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dd6fx"]
Jan 21 07:10:56 crc kubenswrapper[4893]: E0121 07:10:56.186378 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="098bf00d-a656-4b84-8f75-c92e8d53c870" containerName="extract-content"
Jan 21 07:10:56 crc kubenswrapper[4893]: I0121 07:10:56.186390 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="098bf00d-a656-4b84-8f75-c92e8d53c870" containerName="extract-content"
Jan 21 07:10:56 crc kubenswrapper[4893]: E0121 07:10:56.186398 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="098bf00d-a656-4b84-8f75-c92e8d53c870" containerName="registry-server"
Jan 21 07:10:56 crc kubenswrapper[4893]: I0121 07:10:56.186404 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="098bf00d-a656-4b84-8f75-c92e8d53c870" containerName="registry-server"
Jan 21 07:10:56 crc kubenswrapper[4893]: E0121 07:10:56.186423 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="098bf00d-a656-4b84-8f75-c92e8d53c870" containerName="extract-utilities"
Jan 21 07:10:56 crc kubenswrapper[4893]: I0121 07:10:56.186431 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="098bf00d-a656-4b84-8f75-c92e8d53c870" containerName="extract-utilities"
Jan 21 07:10:56 crc kubenswrapper[4893]: I0121 07:10:56.186529 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="098bf00d-a656-4b84-8f75-c92e8d53c870" containerName="registry-server"
Jan 21 07:10:56 crc kubenswrapper[4893]: I0121 07:10:56.187350 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dd6fx"
Jan 21 07:10:56 crc kubenswrapper[4893]: I0121 07:10:56.195296 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dd6fx"]
Jan 21 07:10:56 crc kubenswrapper[4893]: I0121 07:10:56.273222 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c758827d-dec6-40c9-b332-2af6d7ef206e-catalog-content\") pod \"certified-operators-dd6fx\" (UID: \"c758827d-dec6-40c9-b332-2af6d7ef206e\") " pod="openshift-marketplace/certified-operators-dd6fx"
Jan 21 07:10:56 crc kubenswrapper[4893]: I0121 07:10:56.273313 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c758827d-dec6-40c9-b332-2af6d7ef206e-utilities\") pod \"certified-operators-dd6fx\" (UID: \"c758827d-dec6-40c9-b332-2af6d7ef206e\") " pod="openshift-marketplace/certified-operators-dd6fx"
Jan 21 07:10:56 crc kubenswrapper[4893]: I0121 07:10:56.273340 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sgfc\" (UniqueName: \"kubernetes.io/projected/c758827d-dec6-40c9-b332-2af6d7ef206e-kube-api-access-9sgfc\") pod \"certified-operators-dd6fx\" (UID: \"c758827d-dec6-40c9-b332-2af6d7ef206e\") " pod="openshift-marketplace/certified-operators-dd6fx"
Jan 21 07:10:56 crc kubenswrapper[4893]: I0121 07:10:56.375039 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c758827d-dec6-40c9-b332-2af6d7ef206e-utilities\") pod \"certified-operators-dd6fx\" (UID: \"c758827d-dec6-40c9-b332-2af6d7ef206e\") " pod="openshift-marketplace/certified-operators-dd6fx"
Jan 21 07:10:56 crc kubenswrapper[4893]: I0121 07:10:56.375090 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sgfc\" (UniqueName: \"kubernetes.io/projected/c758827d-dec6-40c9-b332-2af6d7ef206e-kube-api-access-9sgfc\") pod \"certified-operators-dd6fx\" (UID: \"c758827d-dec6-40c9-b332-2af6d7ef206e\") " pod="openshift-marketplace/certified-operators-dd6fx"
Jan 21 07:10:56 crc kubenswrapper[4893]: I0121 07:10:56.375143 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c758827d-dec6-40c9-b332-2af6d7ef206e-catalog-content\") pod \"certified-operators-dd6fx\" (UID: \"c758827d-dec6-40c9-b332-2af6d7ef206e\") " pod="openshift-marketplace/certified-operators-dd6fx"
Jan 21 07:10:56 crc kubenswrapper[4893]: I0121 07:10:56.375642 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c758827d-dec6-40c9-b332-2af6d7ef206e-catalog-content\") pod \"certified-operators-dd6fx\" (UID: \"c758827d-dec6-40c9-b332-2af6d7ef206e\") " pod="openshift-marketplace/certified-operators-dd6fx"
Jan 21 07:10:56 crc kubenswrapper[4893]: I0121 07:10:56.375815 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c758827d-dec6-40c9-b332-2af6d7ef206e-utilities\") pod \"certified-operators-dd6fx\" (UID: \"c758827d-dec6-40c9-b332-2af6d7ef206e\") " pod="openshift-marketplace/certified-operators-dd6fx"
Jan 21 07:10:56 crc kubenswrapper[4893]: I0121 07:10:56.399540 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sgfc\" (UniqueName: \"kubernetes.io/projected/c758827d-dec6-40c9-b332-2af6d7ef206e-kube-api-access-9sgfc\") pod \"certified-operators-dd6fx\" (UID: \"c758827d-dec6-40c9-b332-2af6d7ef206e\") " pod="openshift-marketplace/certified-operators-dd6fx"
Jan 21 07:10:56 crc kubenswrapper[4893]: I0121 07:10:56.505003 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dd6fx"
Jan 21 07:10:56 crc kubenswrapper[4893]: I0121 07:10:56.854436 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dd6fx"]
Jan 21 07:10:56 crc kubenswrapper[4893]: I0121 07:10:56.863989 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-j6t2h"
Jan 21 07:10:57 crc kubenswrapper[4893]: I0121 07:10:57.465590 4893 generic.go:334] "Generic (PLEG): container finished" podID="c758827d-dec6-40c9-b332-2af6d7ef206e" containerID="23763044de338a428690aab1a2d344c8b8b78cdb6ce373b6dbcd71919f9df092" exitCode=0
Jan 21 07:10:57 crc kubenswrapper[4893]: I0121 07:10:57.465652 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dd6fx" event={"ID":"c758827d-dec6-40c9-b332-2af6d7ef206e","Type":"ContainerDied","Data":"23763044de338a428690aab1a2d344c8b8b78cdb6ce373b6dbcd71919f9df092"}
Jan 21 07:10:57 crc kubenswrapper[4893]: I0121 07:10:57.465692 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dd6fx" event={"ID":"c758827d-dec6-40c9-b332-2af6d7ef206e","Type":"ContainerStarted","Data":"89291103d993ca3e76e53a6f71c3d6a75a2d42cf76b150870aba3c60e01ad474"}
Jan 21 07:10:57 crc kubenswrapper[4893]: I0121 07:10:57.472149 4893 generic.go:334] "Generic (PLEG): container finished" podID="55d8a84a-c3bb-4ad7-b0b1-2353a801c139" containerID="34808449e49f36ceb4544386cf82ad03f3a66a00a213bb5413ea79c55115ce8a" exitCode=0
Jan 21 07:10:57 crc kubenswrapper[4893]: I0121 07:10:57.472203 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s4w8n" event={"ID":"55d8a84a-c3bb-4ad7-b0b1-2353a801c139","Type":"ContainerDied","Data":"34808449e49f36ceb4544386cf82ad03f3a66a00a213bb5413ea79c55115ce8a"}
Jan 21 07:10:57 crc kubenswrapper[4893]: I0121 07:10:57.720284 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s4w8n"
Jan 21 07:10:57 crc kubenswrapper[4893]: I0121 07:10:57.874781 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55d8a84a-c3bb-4ad7-b0b1-2353a801c139-catalog-content\") pod \"55d8a84a-c3bb-4ad7-b0b1-2353a801c139\" (UID: \"55d8a84a-c3bb-4ad7-b0b1-2353a801c139\") "
Jan 21 07:10:57 crc kubenswrapper[4893]: I0121 07:10:57.874841 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55d8a84a-c3bb-4ad7-b0b1-2353a801c139-utilities\") pod \"55d8a84a-c3bb-4ad7-b0b1-2353a801c139\" (UID: \"55d8a84a-c3bb-4ad7-b0b1-2353a801c139\") "
Jan 21 07:10:57 crc kubenswrapper[4893]: I0121 07:10:57.875657 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55d8a84a-c3bb-4ad7-b0b1-2353a801c139-utilities" (OuterVolumeSpecName: "utilities") pod "55d8a84a-c3bb-4ad7-b0b1-2353a801c139" (UID: "55d8a84a-c3bb-4ad7-b0b1-2353a801c139"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 07:10:57 crc kubenswrapper[4893]: I0121 07:10:57.875835 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwrfx\" (UniqueName: \"kubernetes.io/projected/55d8a84a-c3bb-4ad7-b0b1-2353a801c139-kube-api-access-gwrfx\") pod \"55d8a84a-c3bb-4ad7-b0b1-2353a801c139\" (UID: \"55d8a84a-c3bb-4ad7-b0b1-2353a801c139\") "
Jan 21 07:10:57 crc kubenswrapper[4893]: I0121 07:10:57.876874 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55d8a84a-c3bb-4ad7-b0b1-2353a801c139-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 07:10:57 crc kubenswrapper[4893]: I0121 07:10:57.886895 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55d8a84a-c3bb-4ad7-b0b1-2353a801c139-kube-api-access-gwrfx" (OuterVolumeSpecName: "kube-api-access-gwrfx") pod "55d8a84a-c3bb-4ad7-b0b1-2353a801c139" (UID: "55d8a84a-c3bb-4ad7-b0b1-2353a801c139"). InnerVolumeSpecName "kube-api-access-gwrfx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 07:10:57 crc kubenswrapper[4893]: I0121 07:10:57.923648 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55d8a84a-c3bb-4ad7-b0b1-2353a801c139-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "55d8a84a-c3bb-4ad7-b0b1-2353a801c139" (UID: "55d8a84a-c3bb-4ad7-b0b1-2353a801c139"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 07:10:57 crc kubenswrapper[4893]: I0121 07:10:57.977477 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwrfx\" (UniqueName: \"kubernetes.io/projected/55d8a84a-c3bb-4ad7-b0b1-2353a801c139-kube-api-access-gwrfx\") on node \"crc\" DevicePath \"\""
Jan 21 07:10:57 crc kubenswrapper[4893]: I0121 07:10:57.977527 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55d8a84a-c3bb-4ad7-b0b1-2353a801c139-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 07:10:58 crc kubenswrapper[4893]: I0121 07:10:58.483728 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s4w8n" event={"ID":"55d8a84a-c3bb-4ad7-b0b1-2353a801c139","Type":"ContainerDied","Data":"ed2f2ce1e276df029bc2d59d7c46464ec66f3fc841dd821b07339711c9d59484"}
Jan 21 07:10:58 crc kubenswrapper[4893]: I0121 07:10:58.484056 4893 scope.go:117] "RemoveContainer" containerID="34808449e49f36ceb4544386cf82ad03f3a66a00a213bb5413ea79c55115ce8a"
Jan 21 07:10:58 crc kubenswrapper[4893]: I0121 07:10:58.484187 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s4w8n"
Jan 21 07:10:58 crc kubenswrapper[4893]: I0121 07:10:58.503204 4893 scope.go:117] "RemoveContainer" containerID="fc99edaa6f177382865b1ff789e795c12f6a186834ae5253d99f9127b17a0688"
Jan 21 07:10:58 crc kubenswrapper[4893]: I0121 07:10:58.515630 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s4w8n"]
Jan 21 07:10:58 crc kubenswrapper[4893]: I0121 07:10:58.521068 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-s4w8n"]
Jan 21 07:10:58 crc kubenswrapper[4893]: I0121 07:10:58.540035 4893 scope.go:117] "RemoveContainer" containerID="7ea0e595aec5f886758cfcbc5f23461e909bae17cfadda4297836ac50a14fb02"
Jan 21 07:10:58 crc kubenswrapper[4893]: I0121 07:10:58.657363 4893 patch_prober.go:28] interesting pod/machine-config-daemon-hg78p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 07:10:58 crc kubenswrapper[4893]: I0121 07:10:58.657439 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 07:10:59 crc kubenswrapper[4893]: I0121 07:10:59.492602 4893 generic.go:334] "Generic (PLEG): container finished" podID="c758827d-dec6-40c9-b332-2af6d7ef206e" containerID="1080d0691535f8b53c5fde3fd86526e7e1e2e22c0e611c16efe38447f5152255" exitCode=0
Jan 21 07:10:59 crc kubenswrapper[4893]: I0121 07:10:59.492691 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dd6fx" event={"ID":"c758827d-dec6-40c9-b332-2af6d7ef206e","Type":"ContainerDied","Data":"1080d0691535f8b53c5fde3fd86526e7e1e2e22c0e611c16efe38447f5152255"}
Jan 21 07:10:59 crc kubenswrapper[4893]: I0121 07:10:59.590456 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55d8a84a-c3bb-4ad7-b0b1-2353a801c139" path="/var/lib/kubelet/pods/55d8a84a-c3bb-4ad7-b0b1-2353a801c139/volumes"
Jan 21 07:11:00 crc kubenswrapper[4893]: I0121 07:11:00.503822 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dd6fx" event={"ID":"c758827d-dec6-40c9-b332-2af6d7ef206e","Type":"ContainerStarted","Data":"053f48a5f6b1ec20de3344ebfe8aeb757523d8fa0b2b638a8bafa1d7f1348355"}
Jan 21 07:11:00 crc kubenswrapper[4893]: I0121 07:11:00.525972 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dd6fx" podStartSLOduration=2.047531118 podStartE2EDuration="4.525948646s" podCreationTimestamp="2026-01-21 07:10:56 +0000 UTC" firstStartedPulling="2026-01-21 07:10:57.468323127 +0000 UTC m=+998.698669029" lastFinishedPulling="2026-01-21 07:10:59.946740655 +0000 UTC m=+1001.177086557" observedRunningTime="2026-01-21 07:11:00.52504884 +0000 UTC m=+1001.755394742" watchObservedRunningTime="2026-01-21 07:11:00.525948646 +0000 UTC m=+1001.756294548"
Jan 21 07:11:06 crc kubenswrapper[4893]: I0121 07:11:06.505658 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dd6fx"
Jan 21 07:11:06 crc kubenswrapper[4893]: I0121 07:11:06.506269 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dd6fx"
Jan 21 07:11:06 crc kubenswrapper[4893]: I0121 07:11:06.561070 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dd6fx"
Jan 21 07:11:06 crc kubenswrapper[4893]: I0121 07:11:06.603169 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dd6fx"
Jan 21 07:11:06 crc kubenswrapper[4893]: I0121 07:11:06.846165 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dd6fx"]
Jan 21 07:11:08 crc kubenswrapper[4893]: I0121 07:11:08.582368 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dd6fx" podUID="c758827d-dec6-40c9-b332-2af6d7ef206e" containerName="registry-server" containerID="cri-o://053f48a5f6b1ec20de3344ebfe8aeb757523d8fa0b2b638a8bafa1d7f1348355" gracePeriod=2
Jan 21 07:11:08 crc kubenswrapper[4893]: I0121 07:11:08.990740 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dd6fx"
Jan 21 07:11:09 crc kubenswrapper[4893]: I0121 07:11:09.013078 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9sgfc\" (UniqueName: \"kubernetes.io/projected/c758827d-dec6-40c9-b332-2af6d7ef206e-kube-api-access-9sgfc\") pod \"c758827d-dec6-40c9-b332-2af6d7ef206e\" (UID: \"c758827d-dec6-40c9-b332-2af6d7ef206e\") "
Jan 21 07:11:09 crc kubenswrapper[4893]: I0121 07:11:09.013250 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c758827d-dec6-40c9-b332-2af6d7ef206e-catalog-content\") pod \"c758827d-dec6-40c9-b332-2af6d7ef206e\" (UID: \"c758827d-dec6-40c9-b332-2af6d7ef206e\") "
Jan 21 07:11:09 crc kubenswrapper[4893]: I0121 07:11:09.013351 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c758827d-dec6-40c9-b332-2af6d7ef206e-utilities\") pod \"c758827d-dec6-40c9-b332-2af6d7ef206e\" (UID: \"c758827d-dec6-40c9-b332-2af6d7ef206e\") "
Jan 21 07:11:09 crc kubenswrapper[4893]: I0121 07:11:09.014470 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c758827d-dec6-40c9-b332-2af6d7ef206e-utilities" (OuterVolumeSpecName: "utilities") pod "c758827d-dec6-40c9-b332-2af6d7ef206e" (UID: "c758827d-dec6-40c9-b332-2af6d7ef206e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 07:11:09 crc kubenswrapper[4893]: I0121 07:11:09.020169 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c758827d-dec6-40c9-b332-2af6d7ef206e-kube-api-access-9sgfc" (OuterVolumeSpecName: "kube-api-access-9sgfc") pod "c758827d-dec6-40c9-b332-2af6d7ef206e" (UID: "c758827d-dec6-40c9-b332-2af6d7ef206e"). InnerVolumeSpecName "kube-api-access-9sgfc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 07:11:09 crc kubenswrapper[4893]: I0121 07:11:09.115136 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9sgfc\" (UniqueName: \"kubernetes.io/projected/c758827d-dec6-40c9-b332-2af6d7ef206e-kube-api-access-9sgfc\") on node \"crc\" DevicePath \"\""
Jan 21 07:11:09 crc kubenswrapper[4893]: I0121 07:11:09.115170 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c758827d-dec6-40c9-b332-2af6d7ef206e-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 07:11:09 crc kubenswrapper[4893]: I0121 07:11:09.259329 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c758827d-dec6-40c9-b332-2af6d7ef206e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c758827d-dec6-40c9-b332-2af6d7ef206e" (UID: "c758827d-dec6-40c9-b332-2af6d7ef206e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 07:11:09 crc kubenswrapper[4893]: I0121 07:11:09.317341 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c758827d-dec6-40c9-b332-2af6d7ef206e-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 07:11:09 crc kubenswrapper[4893]: I0121 07:11:09.590900 4893 generic.go:334] "Generic (PLEG): container finished" podID="c758827d-dec6-40c9-b332-2af6d7ef206e" containerID="053f48a5f6b1ec20de3344ebfe8aeb757523d8fa0b2b638a8bafa1d7f1348355" exitCode=0
Jan 21 07:11:09 crc kubenswrapper[4893]: I0121 07:11:09.590949 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dd6fx" event={"ID":"c758827d-dec6-40c9-b332-2af6d7ef206e","Type":"ContainerDied","Data":"053f48a5f6b1ec20de3344ebfe8aeb757523d8fa0b2b638a8bafa1d7f1348355"}
Jan 21 07:11:09 crc kubenswrapper[4893]: I0121 07:11:09.590980 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dd6fx" event={"ID":"c758827d-dec6-40c9-b332-2af6d7ef206e","Type":"ContainerDied","Data":"89291103d993ca3e76e53a6f71c3d6a75a2d42cf76b150870aba3c60e01ad474"}
Jan 21 07:11:09 crc kubenswrapper[4893]: I0121 07:11:09.590975 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dd6fx"
Jan 21 07:11:09 crc kubenswrapper[4893]: I0121 07:11:09.591031 4893 scope.go:117] "RemoveContainer" containerID="053f48a5f6b1ec20de3344ebfe8aeb757523d8fa0b2b638a8bafa1d7f1348355"
Jan 21 07:11:09 crc kubenswrapper[4893]: I0121 07:11:09.621004 4893 scope.go:117] "RemoveContainer" containerID="1080d0691535f8b53c5fde3fd86526e7e1e2e22c0e611c16efe38447f5152255"
Jan 21 07:11:09 crc kubenswrapper[4893]: I0121 07:11:09.653501 4893 scope.go:117] "RemoveContainer" containerID="23763044de338a428690aab1a2d344c8b8b78cdb6ce373b6dbcd71919f9df092"
Jan 21 07:11:09 crc kubenswrapper[4893]: I0121 07:11:09.670191 4893 scope.go:117] "RemoveContainer" containerID="053f48a5f6b1ec20de3344ebfe8aeb757523d8fa0b2b638a8bafa1d7f1348355"
Jan 21 07:11:09 crc kubenswrapper[4893]: E0121 07:11:09.670933 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"053f48a5f6b1ec20de3344ebfe8aeb757523d8fa0b2b638a8bafa1d7f1348355\": container with ID starting with 053f48a5f6b1ec20de3344ebfe8aeb757523d8fa0b2b638a8bafa1d7f1348355 not found: ID does not exist" containerID="053f48a5f6b1ec20de3344ebfe8aeb757523d8fa0b2b638a8bafa1d7f1348355"
Jan 21 07:11:09 crc kubenswrapper[4893]: I0121 07:11:09.670988 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"053f48a5f6b1ec20de3344ebfe8aeb757523d8fa0b2b638a8bafa1d7f1348355"} err="failed to get container status \"053f48a5f6b1ec20de3344ebfe8aeb757523d8fa0b2b638a8bafa1d7f1348355\": rpc error: code = NotFound desc = could not find container \"053f48a5f6b1ec20de3344ebfe8aeb757523d8fa0b2b638a8bafa1d7f1348355\": container with ID starting with 053f48a5f6b1ec20de3344ebfe8aeb757523d8fa0b2b638a8bafa1d7f1348355 not found: ID does not exist"
Jan 21 07:11:09 crc kubenswrapper[4893]: I0121 07:11:09.671020 4893 scope.go:117] "RemoveContainer" containerID="1080d0691535f8b53c5fde3fd86526e7e1e2e22c0e611c16efe38447f5152255"
Jan 21 07:11:09 crc kubenswrapper[4893]: E0121 07:11:09.671490 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find
container \"1080d0691535f8b53c5fde3fd86526e7e1e2e22c0e611c16efe38447f5152255\": container with ID starting with 1080d0691535f8b53c5fde3fd86526e7e1e2e22c0e611c16efe38447f5152255 not found: ID does not exist" containerID="1080d0691535f8b53c5fde3fd86526e7e1e2e22c0e611c16efe38447f5152255" Jan 21 07:11:09 crc kubenswrapper[4893]: I0121 07:11:09.671540 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1080d0691535f8b53c5fde3fd86526e7e1e2e22c0e611c16efe38447f5152255"} err="failed to get container status \"1080d0691535f8b53c5fde3fd86526e7e1e2e22c0e611c16efe38447f5152255\": rpc error: code = NotFound desc = could not find container \"1080d0691535f8b53c5fde3fd86526e7e1e2e22c0e611c16efe38447f5152255\": container with ID starting with 1080d0691535f8b53c5fde3fd86526e7e1e2e22c0e611c16efe38447f5152255 not found: ID does not exist" Jan 21 07:11:09 crc kubenswrapper[4893]: I0121 07:11:09.671570 4893 scope.go:117] "RemoveContainer" containerID="23763044de338a428690aab1a2d344c8b8b78cdb6ce373b6dbcd71919f9df092" Jan 21 07:11:09 crc kubenswrapper[4893]: E0121 07:11:09.672214 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23763044de338a428690aab1a2d344c8b8b78cdb6ce373b6dbcd71919f9df092\": container with ID starting with 23763044de338a428690aab1a2d344c8b8b78cdb6ce373b6dbcd71919f9df092 not found: ID does not exist" containerID="23763044de338a428690aab1a2d344c8b8b78cdb6ce373b6dbcd71919f9df092" Jan 21 07:11:09 crc kubenswrapper[4893]: I0121 07:11:09.672243 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23763044de338a428690aab1a2d344c8b8b78cdb6ce373b6dbcd71919f9df092"} err="failed to get container status \"23763044de338a428690aab1a2d344c8b8b78cdb6ce373b6dbcd71919f9df092\": rpc error: code = NotFound desc = could not find container \"23763044de338a428690aab1a2d344c8b8b78cdb6ce373b6dbcd71919f9df092\": container with ID starting with 23763044de338a428690aab1a2d344c8b8b78cdb6ce373b6dbcd71919f9df092 not found: ID does not exist" Jan 21 07:11:11 crc kubenswrapper[4893]: I0121 07:11:11.084109 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmw4pw"] Jan 21 07:11:11 crc kubenswrapper[4893]: E0121 07:11:11.084652 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55d8a84a-c3bb-4ad7-b0b1-2353a801c139" containerName="extract-utilities" Jan 21 07:11:11 crc kubenswrapper[4893]: I0121 07:11:11.084666 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="55d8a84a-c3bb-4ad7-b0b1-2353a801c139" containerName="extract-utilities" Jan 21 07:11:11 crc kubenswrapper[4893]: E0121 07:11:11.084697 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c758827d-dec6-40c9-b332-2af6d7ef206e" containerName="extract-utilities" Jan 21 07:11:11 crc kubenswrapper[4893]: I0121 07:11:11.084706 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="c758827d-dec6-40c9-b332-2af6d7ef206e" containerName="extract-utilities" Jan 21 07:11:11 crc kubenswrapper[4893]: E0121 07:11:11.084717 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55d8a84a-c3bb-4ad7-b0b1-2353a801c139" containerName="extract-content" Jan 21 07:11:11 crc kubenswrapper[4893]: I0121 07:11:11.084724 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="55d8a84a-c3bb-4ad7-b0b1-2353a801c139" containerName="extract-content" Jan 21 07:11:11 crc 
kubenswrapper[4893]: E0121 07:11:11.084741 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c758827d-dec6-40c9-b332-2af6d7ef206e" containerName="registry-server" Jan 21 07:11:11 crc kubenswrapper[4893]: I0121 07:11:11.084747 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="c758827d-dec6-40c9-b332-2af6d7ef206e" containerName="registry-server" Jan 21 07:11:11 crc kubenswrapper[4893]: E0121 07:11:11.084758 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55d8a84a-c3bb-4ad7-b0b1-2353a801c139" containerName="registry-server" Jan 21 07:11:11 crc kubenswrapper[4893]: I0121 07:11:11.084763 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="55d8a84a-c3bb-4ad7-b0b1-2353a801c139" containerName="registry-server" Jan 21 07:11:11 crc kubenswrapper[4893]: E0121 07:11:11.084772 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c758827d-dec6-40c9-b332-2af6d7ef206e" containerName="extract-content" Jan 21 07:11:11 crc kubenswrapper[4893]: I0121 07:11:11.084777 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="c758827d-dec6-40c9-b332-2af6d7ef206e" containerName="extract-content" Jan 21 07:11:11 crc kubenswrapper[4893]: I0121 07:11:11.084915 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="55d8a84a-c3bb-4ad7-b0b1-2353a801c139" containerName="registry-server" Jan 21 07:11:11 crc kubenswrapper[4893]: I0121 07:11:11.084930 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="c758827d-dec6-40c9-b332-2af6d7ef206e" containerName="registry-server" Jan 21 07:11:11 crc kubenswrapper[4893]: I0121 07:11:11.085916 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmw4pw" Jan 21 07:11:11 crc kubenswrapper[4893]: I0121 07:11:11.087693 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 21 07:11:11 crc kubenswrapper[4893]: I0121 07:11:11.093874 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmw4pw"] Jan 21 07:11:11 crc kubenswrapper[4893]: I0121 07:11:11.137733 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffnxx\" (UniqueName: \"kubernetes.io/projected/37a85c97-b472-420e-bf43-80cd104a53b7-kube-api-access-ffnxx\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmw4pw\" (UID: \"37a85c97-b472-420e-bf43-80cd104a53b7\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmw4pw" Jan 21 07:11:11 crc kubenswrapper[4893]: I0121 07:11:11.137885 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/37a85c97-b472-420e-bf43-80cd104a53b7-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmw4pw\" (UID: \"37a85c97-b472-420e-bf43-80cd104a53b7\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmw4pw" Jan 21 07:11:11 crc kubenswrapper[4893]: I0121 07:11:11.137946 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/37a85c97-b472-420e-bf43-80cd104a53b7-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmw4pw\" (UID: \"37a85c97-b472-420e-bf43-80cd104a53b7\") " 
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmw4pw" Jan 21 07:11:11 crc kubenswrapper[4893]: I0121 07:11:11.239456 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/37a85c97-b472-420e-bf43-80cd104a53b7-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmw4pw\" (UID: \"37a85c97-b472-420e-bf43-80cd104a53b7\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmw4pw" Jan 21 07:11:11 crc kubenswrapper[4893]: I0121 07:11:11.239521 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/37a85c97-b472-420e-bf43-80cd104a53b7-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmw4pw\" (UID: \"37a85c97-b472-420e-bf43-80cd104a53b7\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmw4pw" Jan 21 07:11:11 crc kubenswrapper[4893]: I0121 07:11:11.239585 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffnxx\" (UniqueName: \"kubernetes.io/projected/37a85c97-b472-420e-bf43-80cd104a53b7-kube-api-access-ffnxx\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmw4pw\" (UID: \"37a85c97-b472-420e-bf43-80cd104a53b7\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmw4pw" Jan 21 07:11:11 crc kubenswrapper[4893]: I0121 07:11:11.240194 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/37a85c97-b472-420e-bf43-80cd104a53b7-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmw4pw\" (UID: \"37a85c97-b472-420e-bf43-80cd104a53b7\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmw4pw" Jan 21 07:11:11 crc kubenswrapper[4893]: I0121 07:11:11.240279 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/37a85c97-b472-420e-bf43-80cd104a53b7-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmw4pw\" (UID: \"37a85c97-b472-420e-bf43-80cd104a53b7\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmw4pw" Jan 21 07:11:11 crc kubenswrapper[4893]: I0121 07:11:11.271450 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffnxx\" (UniqueName: \"kubernetes.io/projected/37a85c97-b472-420e-bf43-80cd104a53b7-kube-api-access-ffnxx\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmw4pw\" (UID: \"37a85c97-b472-420e-bf43-80cd104a53b7\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmw4pw" Jan 21 07:11:11 crc kubenswrapper[4893]: I0121 07:11:11.438789 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmw4pw" Jan 21 07:11:12 crc kubenswrapper[4893]: I0121 07:11:11.998138 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmw4pw"] Jan 21 07:11:12 crc kubenswrapper[4893]: I0121 07:11:12.522911 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-2k4nh" podUID="198d5d30-97a4-4cc4-85be-4d930e84c2c6" containerName="console" containerID="cri-o://3573e629d3070fed409db3e04906ad3d91fa8878b8a360b7d6da62dfdbda3eae" gracePeriod=15 Jan 21 07:11:12 crc kubenswrapper[4893]: I0121 07:11:12.611146 4893 generic.go:334] "Generic (PLEG): container finished" podID="37a85c97-b472-420e-bf43-80cd104a53b7" containerID="073eab69a6e41106679f304c1650bda260628ad568c5e180f2da62340e00749f" exitCode=0 Jan 21 07:11:12 crc kubenswrapper[4893]: I0121 07:11:12.611220 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmw4pw" event={"ID":"37a85c97-b472-420e-bf43-80cd104a53b7","Type":"ContainerDied","Data":"073eab69a6e41106679f304c1650bda260628ad568c5e180f2da62340e00749f"} Jan 21 07:11:12 crc kubenswrapper[4893]: I0121 07:11:12.611291 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmw4pw" event={"ID":"37a85c97-b472-420e-bf43-80cd104a53b7","Type":"ContainerStarted","Data":"1a5b3c5b31f65c307e40e9c0111382846797d0bf358c6643deef53de418185e2"} Jan 21 07:11:12 crc kubenswrapper[4893]: I0121 07:11:12.874816 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-2k4nh_198d5d30-97a4-4cc4-85be-4d930e84c2c6/console/0.log" Jan 21 07:11:12 crc kubenswrapper[4893]: I0121 07:11:12.875106 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-2k4nh" Jan 21 07:11:13 crc kubenswrapper[4893]: I0121 07:11:13.061074 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bznvt\" (UniqueName: \"kubernetes.io/projected/198d5d30-97a4-4cc4-85be-4d930e84c2c6-kube-api-access-bznvt\") pod \"198d5d30-97a4-4cc4-85be-4d930e84c2c6\" (UID: \"198d5d30-97a4-4cc4-85be-4d930e84c2c6\") " Jan 21 07:11:13 crc kubenswrapper[4893]: I0121 07:11:13.061156 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/198d5d30-97a4-4cc4-85be-4d930e84c2c6-console-config\") pod \"198d5d30-97a4-4cc4-85be-4d930e84c2c6\" (UID: \"198d5d30-97a4-4cc4-85be-4d930e84c2c6\") " Jan 21 07:11:13 crc kubenswrapper[4893]: I0121 07:11:13.061185 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/198d5d30-97a4-4cc4-85be-4d930e84c2c6-trusted-ca-bundle\") pod \"198d5d30-97a4-4cc4-85be-4d930e84c2c6\" (UID: \"198d5d30-97a4-4cc4-85be-4d930e84c2c6\") " Jan 21 07:11:13 crc kubenswrapper[4893]: I0121 07:11:13.061204 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/198d5d30-97a4-4cc4-85be-4d930e84c2c6-console-serving-cert\") pod \"198d5d30-97a4-4cc4-85be-4d930e84c2c6\" (UID: \"198d5d30-97a4-4cc4-85be-4d930e84c2c6\") " Jan 21 07:11:13 crc kubenswrapper[4893]: I0121 07:11:13.061229 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/198d5d30-97a4-4cc4-85be-4d930e84c2c6-oauth-serving-cert\") pod \"198d5d30-97a4-4cc4-85be-4d930e84c2c6\" (UID: \"198d5d30-97a4-4cc4-85be-4d930e84c2c6\") " Jan 21 07:11:13 crc kubenswrapper[4893]: I0121 07:11:13.061265 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/198d5d30-97a4-4cc4-85be-4d930e84c2c6-console-oauth-config\") pod \"198d5d30-97a4-4cc4-85be-4d930e84c2c6\" (UID: \"198d5d30-97a4-4cc4-85be-4d930e84c2c6\") " Jan 21 07:11:13 crc kubenswrapper[4893]: I0121 07:11:13.061289 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/198d5d30-97a4-4cc4-85be-4d930e84c2c6-service-ca\") pod \"198d5d30-97a4-4cc4-85be-4d930e84c2c6\" (UID: \"198d5d30-97a4-4cc4-85be-4d930e84c2c6\") " Jan 21 07:11:13 crc kubenswrapper[4893]: I0121 07:11:13.062275 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/198d5d30-97a4-4cc4-85be-4d930e84c2c6-service-ca" (OuterVolumeSpecName: "service-ca") pod "198d5d30-97a4-4cc4-85be-4d930e84c2c6" (UID: "198d5d30-97a4-4cc4-85be-4d930e84c2c6"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:11:13 crc kubenswrapper[4893]: I0121 07:11:13.062302 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/198d5d30-97a4-4cc4-85be-4d930e84c2c6-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "198d5d30-97a4-4cc4-85be-4d930e84c2c6" (UID: "198d5d30-97a4-4cc4-85be-4d930e84c2c6"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:11:13 crc kubenswrapper[4893]: I0121 07:11:13.062374 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/198d5d30-97a4-4cc4-85be-4d930e84c2c6-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "198d5d30-97a4-4cc4-85be-4d930e84c2c6" (UID: "198d5d30-97a4-4cc4-85be-4d930e84c2c6"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:11:13 crc kubenswrapper[4893]: I0121 07:11:13.062901 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/198d5d30-97a4-4cc4-85be-4d930e84c2c6-console-config" (OuterVolumeSpecName: "console-config") pod "198d5d30-97a4-4cc4-85be-4d930e84c2c6" (UID: "198d5d30-97a4-4cc4-85be-4d930e84c2c6"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:11:13 crc kubenswrapper[4893]: I0121 07:11:13.069792 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/198d5d30-97a4-4cc4-85be-4d930e84c2c6-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "198d5d30-97a4-4cc4-85be-4d930e84c2c6" (UID: "198d5d30-97a4-4cc4-85be-4d930e84c2c6"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:11:13 crc kubenswrapper[4893]: I0121 07:11:13.069828 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/198d5d30-97a4-4cc4-85be-4d930e84c2c6-kube-api-access-bznvt" (OuterVolumeSpecName: "kube-api-access-bznvt") pod "198d5d30-97a4-4cc4-85be-4d930e84c2c6" (UID: "198d5d30-97a4-4cc4-85be-4d930e84c2c6"). InnerVolumeSpecName "kube-api-access-bznvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:11:13 crc kubenswrapper[4893]: I0121 07:11:13.069947 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/198d5d30-97a4-4cc4-85be-4d930e84c2c6-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "198d5d30-97a4-4cc4-85be-4d930e84c2c6" (UID: "198d5d30-97a4-4cc4-85be-4d930e84c2c6"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:11:13 crc kubenswrapper[4893]: I0121 07:11:13.162566 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bznvt\" (UniqueName: \"kubernetes.io/projected/198d5d30-97a4-4cc4-85be-4d930e84c2c6-kube-api-access-bznvt\") on node \"crc\" DevicePath \"\"" Jan 21 07:11:13 crc kubenswrapper[4893]: I0121 07:11:13.162611 4893 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/198d5d30-97a4-4cc4-85be-4d930e84c2c6-console-config\") on node \"crc\" DevicePath \"\"" Jan 21 07:11:13 crc kubenswrapper[4893]: I0121 07:11:13.162624 4893 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/198d5d30-97a4-4cc4-85be-4d930e84c2c6-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:11:13 crc kubenswrapper[4893]: I0121 07:11:13.162636 4893 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/198d5d30-97a4-4cc4-85be-4d930e84c2c6-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 07:11:13 crc kubenswrapper[4893]: I0121 07:11:13.162648 4893 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/198d5d30-97a4-4cc4-85be-4d930e84c2c6-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 07:11:13 crc kubenswrapper[4893]: I0121 07:11:13.162659 4893 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/198d5d30-97a4-4cc4-85be-4d930e84c2c6-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 21 07:11:13 crc kubenswrapper[4893]: I0121 07:11:13.162696 4893 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/198d5d30-97a4-4cc4-85be-4d930e84c2c6-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 07:11:13 crc kubenswrapper[4893]: I0121 07:11:13.624217 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-2k4nh_198d5d30-97a4-4cc4-85be-4d930e84c2c6/console/0.log" Jan 21 07:11:13 crc kubenswrapper[4893]: I0121 07:11:13.624269 4893 generic.go:334] "Generic (PLEG): container finished" podID="198d5d30-97a4-4cc4-85be-4d930e84c2c6" containerID="3573e629d3070fed409db3e04906ad3d91fa8878b8a360b7d6da62dfdbda3eae" exitCode=2 Jan 21 07:11:13 crc kubenswrapper[4893]: I0121 07:11:13.624302 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-2k4nh" event={"ID":"198d5d30-97a4-4cc4-85be-4d930e84c2c6","Type":"ContainerDied","Data":"3573e629d3070fed409db3e04906ad3d91fa8878b8a360b7d6da62dfdbda3eae"} Jan 21 07:11:13 crc kubenswrapper[4893]: I0121 07:11:13.624340 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-2k4nh" event={"ID":"198d5d30-97a4-4cc4-85be-4d930e84c2c6","Type":"ContainerDied","Data":"13f199120854bc21f43a6d99a8a84913826983929502e5682c73c710429cd826"} Jan 21 07:11:13 crc kubenswrapper[4893]: I0121 07:11:13.624358 4893 scope.go:117] "RemoveContainer" containerID="3573e629d3070fed409db3e04906ad3d91fa8878b8a360b7d6da62dfdbda3eae" Jan 21 07:11:13 crc kubenswrapper[4893]: I0121 07:11:13.624486 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-2k4nh" Jan 21 07:11:13 crc kubenswrapper[4893]: I0121 07:11:13.648483 4893 scope.go:117] "RemoveContainer" containerID="3573e629d3070fed409db3e04906ad3d91fa8878b8a360b7d6da62dfdbda3eae" Jan 21 07:11:13 crc kubenswrapper[4893]: E0121 07:11:13.649036 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3573e629d3070fed409db3e04906ad3d91fa8878b8a360b7d6da62dfdbda3eae\": container with ID starting with 3573e629d3070fed409db3e04906ad3d91fa8878b8a360b7d6da62dfdbda3eae not found: ID does not exist" containerID="3573e629d3070fed409db3e04906ad3d91fa8878b8a360b7d6da62dfdbda3eae" Jan 21 07:11:13 crc kubenswrapper[4893]: I0121 07:11:13.649111 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3573e629d3070fed409db3e04906ad3d91fa8878b8a360b7d6da62dfdbda3eae"} err="failed to get container status \"3573e629d3070fed409db3e04906ad3d91fa8878b8a360b7d6da62dfdbda3eae\": rpc error: code = NotFound desc = could not find container \"3573e629d3070fed409db3e04906ad3d91fa8878b8a360b7d6da62dfdbda3eae\": container with ID starting with 3573e629d3070fed409db3e04906ad3d91fa8878b8a360b7d6da62dfdbda3eae not found: ID does not exist" Jan 21 07:11:13 crc kubenswrapper[4893]: I0121 07:11:13.653191 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-2k4nh"] Jan 21 07:11:13 crc kubenswrapper[4893]: I0121 07:11:13.657318 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-2k4nh"] Jan 21 07:11:14 crc kubenswrapper[4893]: I0121 07:11:14.633962 4893 generic.go:334] "Generic (PLEG): container finished" podID="37a85c97-b472-420e-bf43-80cd104a53b7" containerID="3b5a4f0fea766cf4476c2875fba3f8aa18aa0036d762e15dee6d110e0c77e74a" exitCode=0 Jan 21 07:11:14 crc kubenswrapper[4893]: I0121 07:11:14.634052 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmw4pw" event={"ID":"37a85c97-b472-420e-bf43-80cd104a53b7","Type":"ContainerDied","Data":"3b5a4f0fea766cf4476c2875fba3f8aa18aa0036d762e15dee6d110e0c77e74a"} Jan 21 07:11:15 crc kubenswrapper[4893]: I0121 07:11:15.594001 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="198d5d30-97a4-4cc4-85be-4d930e84c2c6" path="/var/lib/kubelet/pods/198d5d30-97a4-4cc4-85be-4d930e84c2c6/volumes" Jan 21 07:11:15 crc kubenswrapper[4893]: I0121 07:11:15.646088 4893 generic.go:334] "Generic (PLEG): container finished" podID="37a85c97-b472-420e-bf43-80cd104a53b7" containerID="6cee84642d5ec47b6d732a215a72b349fdee0bf4ce3bcb2477bb987c717a8f41" exitCode=0 Jan 21 07:11:15 crc kubenswrapper[4893]: I0121 07:11:15.646141 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmw4pw" event={"ID":"37a85c97-b472-420e-bf43-80cd104a53b7","Type":"ContainerDied","Data":"6cee84642d5ec47b6d732a215a72b349fdee0bf4ce3bcb2477bb987c717a8f41"} Jan 21 07:11:16 crc kubenswrapper[4893]: I0121 07:11:16.914427 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmw4pw" Jan 21 07:11:17 crc kubenswrapper[4893]: I0121 07:11:17.111560 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/37a85c97-b472-420e-bf43-80cd104a53b7-util\") pod \"37a85c97-b472-420e-bf43-80cd104a53b7\" (UID: \"37a85c97-b472-420e-bf43-80cd104a53b7\") " Jan 21 07:11:17 crc kubenswrapper[4893]: I0121 07:11:17.111635 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffnxx\" (UniqueName: \"kubernetes.io/projected/37a85c97-b472-420e-bf43-80cd104a53b7-kube-api-access-ffnxx\") pod \"37a85c97-b472-420e-bf43-80cd104a53b7\" (UID: \"37a85c97-b472-420e-bf43-80cd104a53b7\") " Jan 21 07:11:17 crc kubenswrapper[4893]: I0121 07:11:17.111735 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/37a85c97-b472-420e-bf43-80cd104a53b7-bundle\") pod \"37a85c97-b472-420e-bf43-80cd104a53b7\" (UID: \"37a85c97-b472-420e-bf43-80cd104a53b7\") " Jan 21 07:11:17 crc kubenswrapper[4893]: I0121 07:11:17.113297 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37a85c97-b472-420e-bf43-80cd104a53b7-bundle" (OuterVolumeSpecName: "bundle") pod "37a85c97-b472-420e-bf43-80cd104a53b7" (UID: "37a85c97-b472-420e-bf43-80cd104a53b7"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:11:17 crc kubenswrapper[4893]: I0121 07:11:17.119163 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37a85c97-b472-420e-bf43-80cd104a53b7-kube-api-access-ffnxx" (OuterVolumeSpecName: "kube-api-access-ffnxx") pod "37a85c97-b472-420e-bf43-80cd104a53b7" (UID: "37a85c97-b472-420e-bf43-80cd104a53b7"). InnerVolumeSpecName "kube-api-access-ffnxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:11:17 crc kubenswrapper[4893]: I0121 07:11:17.132420 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37a85c97-b472-420e-bf43-80cd104a53b7-util" (OuterVolumeSpecName: "util") pod "37a85c97-b472-420e-bf43-80cd104a53b7" (UID: "37a85c97-b472-420e-bf43-80cd104a53b7"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:11:17 crc kubenswrapper[4893]: I0121 07:11:17.212733 4893 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/37a85c97-b472-420e-bf43-80cd104a53b7-util\") on node \"crc\" DevicePath \"\"" Jan 21 07:11:17 crc kubenswrapper[4893]: I0121 07:11:17.212781 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffnxx\" (UniqueName: \"kubernetes.io/projected/37a85c97-b472-420e-bf43-80cd104a53b7-kube-api-access-ffnxx\") on node \"crc\" DevicePath \"\"" Jan 21 07:11:17 crc kubenswrapper[4893]: I0121 07:11:17.212795 4893 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/37a85c97-b472-420e-bf43-80cd104a53b7-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:11:17 crc kubenswrapper[4893]: I0121 07:11:17.662799 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmw4pw" event={"ID":"37a85c97-b472-420e-bf43-80cd104a53b7","Type":"ContainerDied","Data":"1a5b3c5b31f65c307e40e9c0111382846797d0bf358c6643deef53de418185e2"} Jan 21 07:11:17 crc kubenswrapper[4893]: I0121 07:11:17.662894 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a5b3c5b31f65c307e40e9c0111382846797d0bf358c6643deef53de418185e2" Jan 21 07:11:17 crc kubenswrapper[4893]: I0121 07:11:17.662890 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmw4pw" Jan 21 07:11:26 crc kubenswrapper[4893]: I0121 07:11:26.713741 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-d4c4497c9-rmz6v"] Jan 21 07:11:26 crc kubenswrapper[4893]: E0121 07:11:26.714545 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="198d5d30-97a4-4cc4-85be-4d930e84c2c6" containerName="console" Jan 21 07:11:26 crc kubenswrapper[4893]: I0121 07:11:26.714558 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="198d5d30-97a4-4cc4-85be-4d930e84c2c6" containerName="console" Jan 21 07:11:26 crc kubenswrapper[4893]: E0121 07:11:26.714588 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37a85c97-b472-420e-bf43-80cd104a53b7" containerName="pull" Jan 21 07:11:26 crc kubenswrapper[4893]: I0121 07:11:26.714595 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="37a85c97-b472-420e-bf43-80cd104a53b7" containerName="pull" Jan 21 07:11:26 crc kubenswrapper[4893]: E0121 07:11:26.714602 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37a85c97-b472-420e-bf43-80cd104a53b7" containerName="extract" Jan 21 07:11:26 crc kubenswrapper[4893]: I0121 07:11:26.714611 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="37a85c97-b472-420e-bf43-80cd104a53b7" containerName="extract" Jan 21 07:11:26 crc kubenswrapper[4893]: E0121 07:11:26.714624 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37a85c97-b472-420e-bf43-80cd104a53b7" containerName="util" Jan 21 07:11:26 crc kubenswrapper[4893]: I0121 07:11:26.714631 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="37a85c97-b472-420e-bf43-80cd104a53b7" containerName="util" Jan 21 07:11:26 crc kubenswrapper[4893]: I0121 07:11:26.714766 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="198d5d30-97a4-4cc4-85be-4d930e84c2c6" containerName="console" Jan 21 
07:11:26 crc kubenswrapper[4893]: I0121 07:11:26.714780 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="37a85c97-b472-420e-bf43-80cd104a53b7" containerName="extract" Jan 21 07:11:26 crc kubenswrapper[4893]: I0121 07:11:26.715379 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-d4c4497c9-rmz6v" Jan 21 07:11:26 crc kubenswrapper[4893]: I0121 07:11:26.718273 4893 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 21 07:11:26 crc kubenswrapper[4893]: I0121 07:11:26.718455 4893 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-nsw2m" Jan 21 07:11:26 crc kubenswrapper[4893]: I0121 07:11:26.718858 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 21 07:11:26 crc kubenswrapper[4893]: I0121 07:11:26.719006 4893 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 21 07:11:26 crc kubenswrapper[4893]: I0121 07:11:26.719997 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 21 07:11:26 crc kubenswrapper[4893]: I0121 07:11:26.745605 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-d4c4497c9-rmz6v"] Jan 21 07:11:26 crc kubenswrapper[4893]: I0121 07:11:26.784323 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5b83e248-7f4d-4294-808c-91878658bf38-webhook-cert\") pod \"metallb-operator-controller-manager-d4c4497c9-rmz6v\" (UID: \"5b83e248-7f4d-4294-808c-91878658bf38\") " pod="metallb-system/metallb-operator-controller-manager-d4c4497c9-rmz6v" Jan 21 07:11:26 crc kubenswrapper[4893]: I0121 07:11:26.784413 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5b83e248-7f4d-4294-808c-91878658bf38-apiservice-cert\") pod \"metallb-operator-controller-manager-d4c4497c9-rmz6v\" (UID: \"5b83e248-7f4d-4294-808c-91878658bf38\") " pod="metallb-system/metallb-operator-controller-manager-d4c4497c9-rmz6v" Jan 21 07:11:26 crc kubenswrapper[4893]: I0121 07:11:26.784488 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4hhr\" (UniqueName: \"kubernetes.io/projected/5b83e248-7f4d-4294-808c-91878658bf38-kube-api-access-d4hhr\") pod \"metallb-operator-controller-manager-d4c4497c9-rmz6v\" (UID: \"5b83e248-7f4d-4294-808c-91878658bf38\") " pod="metallb-system/metallb-operator-controller-manager-d4c4497c9-rmz6v" Jan 21 07:11:26 crc kubenswrapper[4893]: I0121 07:11:26.885971 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5b83e248-7f4d-4294-808c-91878658bf38-apiservice-cert\") pod \"metallb-operator-controller-manager-d4c4497c9-rmz6v\" (UID: \"5b83e248-7f4d-4294-808c-91878658bf38\") " pod="metallb-system/metallb-operator-controller-manager-d4c4497c9-rmz6v" Jan 21 07:11:26 crc kubenswrapper[4893]: I0121 07:11:26.886249 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4hhr\" (UniqueName: 
\"kubernetes.io/projected/5b83e248-7f4d-4294-808c-91878658bf38-kube-api-access-d4hhr\") pod \"metallb-operator-controller-manager-d4c4497c9-rmz6v\" (UID: \"5b83e248-7f4d-4294-808c-91878658bf38\") " pod="metallb-system/metallb-operator-controller-manager-d4c4497c9-rmz6v" Jan 21 07:11:26 crc kubenswrapper[4893]: I0121 07:11:26.886372 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5b83e248-7f4d-4294-808c-91878658bf38-webhook-cert\") pod \"metallb-operator-controller-manager-d4c4497c9-rmz6v\" (UID: \"5b83e248-7f4d-4294-808c-91878658bf38\") " pod="metallb-system/metallb-operator-controller-manager-d4c4497c9-rmz6v" Jan 21 07:11:26 crc kubenswrapper[4893]: I0121 07:11:26.894063 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5b83e248-7f4d-4294-808c-91878658bf38-apiservice-cert\") pod \"metallb-operator-controller-manager-d4c4497c9-rmz6v\" (UID: \"5b83e248-7f4d-4294-808c-91878658bf38\") " pod="metallb-system/metallb-operator-controller-manager-d4c4497c9-rmz6v" Jan 21 07:11:26 crc kubenswrapper[4893]: I0121 07:11:26.906151 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5b83e248-7f4d-4294-808c-91878658bf38-webhook-cert\") pod \"metallb-operator-controller-manager-d4c4497c9-rmz6v\" (UID: \"5b83e248-7f4d-4294-808c-91878658bf38\") " pod="metallb-system/metallb-operator-controller-manager-d4c4497c9-rmz6v" Jan 21 07:11:26 crc kubenswrapper[4893]: I0121 07:11:26.906267 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4hhr\" (UniqueName: \"kubernetes.io/projected/5b83e248-7f4d-4294-808c-91878658bf38-kube-api-access-d4hhr\") pod \"metallb-operator-controller-manager-d4c4497c9-rmz6v\" (UID: \"5b83e248-7f4d-4294-808c-91878658bf38\") " pod="metallb-system/metallb-operator-controller-manager-d4c4497c9-rmz6v" Jan 21 07:11:27 crc kubenswrapper[4893]: I0121 07:11:27.041580 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-d4c4497c9-rmz6v" Jan 21 07:11:27 crc kubenswrapper[4893]: I0121 07:11:27.221541 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-856864dc54-jk8lc"] Jan 21 07:11:27 crc kubenswrapper[4893]: I0121 07:11:27.222728 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-856864dc54-jk8lc" Jan 21 07:11:27 crc kubenswrapper[4893]: I0121 07:11:27.226234 4893 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 21 07:11:27 crc kubenswrapper[4893]: I0121 07:11:27.226531 4893 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 21 07:11:27 crc kubenswrapper[4893]: I0121 07:11:27.226692 4893 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-nxgdq" Jan 21 07:11:27 crc kubenswrapper[4893]: I0121 07:11:27.250266 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-856864dc54-jk8lc"] Jan 21 07:11:27 crc kubenswrapper[4893]: I0121 07:11:27.307460 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ce067ec2-2d04-4566-8868-62c78e8c64f3-apiservice-cert\") pod \"metallb-operator-webhook-server-856864dc54-jk8lc\" (UID: \"ce067ec2-2d04-4566-8868-62c78e8c64f3\") " pod="metallb-system/metallb-operator-webhook-server-856864dc54-jk8lc" Jan 21 07:11:27 crc kubenswrapper[4893]: I0121 07:11:27.307771 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ce067ec2-2d04-4566-8868-62c78e8c64f3-webhook-cert\") pod \"metallb-operator-webhook-server-856864dc54-jk8lc\" (UID: \"ce067ec2-2d04-4566-8868-62c78e8c64f3\") " pod="metallb-system/metallb-operator-webhook-server-856864dc54-jk8lc" Jan 21 07:11:27 crc kubenswrapper[4893]: I0121 07:11:27.307839 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwxs7\" (UniqueName: \"kubernetes.io/projected/ce067ec2-2d04-4566-8868-62c78e8c64f3-kube-api-access-pwxs7\") pod \"metallb-operator-webhook-server-856864dc54-jk8lc\" (UID: \"ce067ec2-2d04-4566-8868-62c78e8c64f3\") " pod="metallb-system/metallb-operator-webhook-server-856864dc54-jk8lc" Jan 21 07:11:27 crc kubenswrapper[4893]: I0121 07:11:27.409574 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwxs7\" (UniqueName: \"kubernetes.io/projected/ce067ec2-2d04-4566-8868-62c78e8c64f3-kube-api-access-pwxs7\") pod \"metallb-operator-webhook-server-856864dc54-jk8lc\" (UID: \"ce067ec2-2d04-4566-8868-62c78e8c64f3\") " pod="metallb-system/metallb-operator-webhook-server-856864dc54-jk8lc" Jan 21 07:11:27 crc kubenswrapper[4893]: I0121 07:11:27.409659 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ce067ec2-2d04-4566-8868-62c78e8c64f3-apiservice-cert\") pod \"metallb-operator-webhook-server-856864dc54-jk8lc\" (UID: \"ce067ec2-2d04-4566-8868-62c78e8c64f3\") " pod="metallb-system/metallb-operator-webhook-server-856864dc54-jk8lc" Jan 21 07:11:27 crc kubenswrapper[4893]: I0121 07:11:27.409708 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ce067ec2-2d04-4566-8868-62c78e8c64f3-webhook-cert\") pod \"metallb-operator-webhook-server-856864dc54-jk8lc\" (UID: \"ce067ec2-2d04-4566-8868-62c78e8c64f3\") " pod="metallb-system/metallb-operator-webhook-server-856864dc54-jk8lc" Jan 21 07:11:27 crc kubenswrapper[4893]: I0121 
07:11:27.415105 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ce067ec2-2d04-4566-8868-62c78e8c64f3-webhook-cert\") pod \"metallb-operator-webhook-server-856864dc54-jk8lc\" (UID: \"ce067ec2-2d04-4566-8868-62c78e8c64f3\") " pod="metallb-system/metallb-operator-webhook-server-856864dc54-jk8lc" Jan 21 07:11:27 crc kubenswrapper[4893]: I0121 07:11:27.416323 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ce067ec2-2d04-4566-8868-62c78e8c64f3-apiservice-cert\") pod \"metallb-operator-webhook-server-856864dc54-jk8lc\" (UID: \"ce067ec2-2d04-4566-8868-62c78e8c64f3\") " pod="metallb-system/metallb-operator-webhook-server-856864dc54-jk8lc" Jan 21 07:11:27 crc kubenswrapper[4893]: I0121 07:11:27.450515 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwxs7\" (UniqueName: \"kubernetes.io/projected/ce067ec2-2d04-4566-8868-62c78e8c64f3-kube-api-access-pwxs7\") pod \"metallb-operator-webhook-server-856864dc54-jk8lc\" (UID: \"ce067ec2-2d04-4566-8868-62c78e8c64f3\") " pod="metallb-system/metallb-operator-webhook-server-856864dc54-jk8lc" Jan 21 07:11:27 crc kubenswrapper[4893]: I0121 07:11:27.544898 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-d4c4497c9-rmz6v"] Jan 21 07:11:27 crc kubenswrapper[4893]: I0121 07:11:27.545167 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-856864dc54-jk8lc" Jan 21 07:11:27 crc kubenswrapper[4893]: W0121 07:11:27.563198 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b83e248_7f4d_4294_808c_91878658bf38.slice/crio-bd93e84e77bfd72dca0f7e8c155f2d045386f19de4eef9140ae1dde11e878bb8 WatchSource:0}: Error finding container bd93e84e77bfd72dca0f7e8c155f2d045386f19de4eef9140ae1dde11e878bb8: Status 404 returned error can't find the container with id bd93e84e77bfd72dca0f7e8c155f2d045386f19de4eef9140ae1dde11e878bb8 Jan 21 07:11:27 crc kubenswrapper[4893]: I0121 07:11:27.792436 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-856864dc54-jk8lc"] Jan 21 07:11:27 crc kubenswrapper[4893]: W0121 07:11:27.800614 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce067ec2_2d04_4566_8868_62c78e8c64f3.slice/crio-571bd1595af2f97ae61178adcbea6f6ad4be081b01bab05e50c9402451a43010 WatchSource:0}: Error finding container 571bd1595af2f97ae61178adcbea6f6ad4be081b01bab05e50c9402451a43010: Status 404 returned error can't find the container with id 571bd1595af2f97ae61178adcbea6f6ad4be081b01bab05e50c9402451a43010 Jan 21 07:11:27 crc kubenswrapper[4893]: I0121 07:11:27.835843 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-856864dc54-jk8lc" event={"ID":"ce067ec2-2d04-4566-8868-62c78e8c64f3","Type":"ContainerStarted","Data":"571bd1595af2f97ae61178adcbea6f6ad4be081b01bab05e50c9402451a43010"} Jan 21 07:11:27 crc kubenswrapper[4893]: I0121 07:11:27.837251 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-d4c4497c9-rmz6v" 
event={"ID":"5b83e248-7f4d-4294-808c-91878658bf38","Type":"ContainerStarted","Data":"bd93e84e77bfd72dca0f7e8c155f2d045386f19de4eef9140ae1dde11e878bb8"} Jan 21 07:11:28 crc kubenswrapper[4893]: I0121 07:11:28.657199 4893 patch_prober.go:28] interesting pod/machine-config-daemon-hg78p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 07:11:28 crc kubenswrapper[4893]: I0121 07:11:28.657566 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 07:11:28 crc kubenswrapper[4893]: I0121 07:11:28.657622 4893 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" Jan 21 07:11:28 crc kubenswrapper[4893]: I0121 07:11:28.658394 4893 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bea12aa0e3fb7f6eeacad68b0257846807fe6f0e84a4345e0ec5d7edb930ef7f"} pod="openshift-machine-config-operator/machine-config-daemon-hg78p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 07:11:28 crc kubenswrapper[4893]: I0121 07:11:28.658489 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" containerID="cri-o://bea12aa0e3fb7f6eeacad68b0257846807fe6f0e84a4345e0ec5d7edb930ef7f" gracePeriod=600 Jan 21 07:11:28 crc kubenswrapper[4893]: I0121 07:11:28.877130 4893 generic.go:334] "Generic (PLEG): container finished" podID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerID="bea12aa0e3fb7f6eeacad68b0257846807fe6f0e84a4345e0ec5d7edb930ef7f" exitCode=0 Jan 21 07:11:28 crc kubenswrapper[4893]: I0121 07:11:28.877193 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" event={"ID":"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a","Type":"ContainerDied","Data":"bea12aa0e3fb7f6eeacad68b0257846807fe6f0e84a4345e0ec5d7edb930ef7f"} Jan 21 07:11:28 crc kubenswrapper[4893]: I0121 07:11:28.877269 4893 scope.go:117] "RemoveContainer" containerID="2b8b36cbe0c34c88d5b3d7c8c6f4a8601dcf5f1759572299d0f737820558f3ba" Jan 21 07:11:29 crc kubenswrapper[4893]: I0121 07:11:29.887335 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" event={"ID":"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a","Type":"ContainerStarted","Data":"26379b5a1ea652b4b0eaaa44c1d6ace582f5cd3b0ef70a04e9f969f2f0e8a7a2"} Jan 21 07:11:33 crc kubenswrapper[4893]: I0121 07:11:33.133512 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-d4c4497c9-rmz6v" event={"ID":"5b83e248-7f4d-4294-808c-91878658bf38","Type":"ContainerStarted","Data":"6054f76417fca3ee50cbc14ebdf02b351ac987066394505afdc93eb990155352"} Jan 21 07:11:33 crc kubenswrapper[4893]: I0121 07:11:33.134012 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-controller-manager-d4c4497c9-rmz6v" Jan 21 07:11:33 crc kubenswrapper[4893]: I0121 07:11:33.169180 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-d4c4497c9-rmz6v" podStartSLOduration=2.4688157090000002 podStartE2EDuration="7.169083623s" podCreationTimestamp="2026-01-21 07:11:26 +0000 UTC" firstStartedPulling="2026-01-21 07:11:27.568998958 +0000 UTC m=+1028.799344860" lastFinishedPulling="2026-01-21 07:11:32.269266872 +0000 UTC m=+1033.499612774" observedRunningTime="2026-01-21 07:11:33.162210756 +0000 UTC m=+1034.392556658" watchObservedRunningTime="2026-01-21 07:11:33.169083623 +0000 UTC m=+1034.399429545" Jan 21 07:11:36 crc kubenswrapper[4893]: I0121 07:11:36.177268 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-856864dc54-jk8lc" event={"ID":"ce067ec2-2d04-4566-8868-62c78e8c64f3","Type":"ContainerStarted","Data":"917989605b6929c61dbbd6150e515e271ba7b4371447583d6f1d5145f61d3d3c"} Jan 21 07:11:36 crc kubenswrapper[4893]: I0121 07:11:36.177780 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-856864dc54-jk8lc" Jan 21 07:11:36 crc kubenswrapper[4893]: I0121 07:11:36.203186 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-856864dc54-jk8lc" podStartSLOduration=1.778672093 podStartE2EDuration="9.203161345s" podCreationTimestamp="2026-01-21 07:11:27 +0000 UTC" firstStartedPulling="2026-01-21 07:11:27.803376524 +0000 UTC m=+1029.033722426" lastFinishedPulling="2026-01-21 07:11:35.227865776 +0000 UTC m=+1036.458211678" observedRunningTime="2026-01-21 07:11:36.196658218 +0000 UTC m=+1037.427004150" watchObservedRunningTime="2026-01-21 07:11:36.203161345 +0000 UTC m=+1037.433507247" Jan 21 07:11:39 crc kubenswrapper[4893]: I0121 07:11:39.607478 4893 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","podc758827d-dec6-40c9-b332-2af6d7ef206e"] err="unable to destroy cgroup paths for cgroup [kubepods burstable podc758827d-dec6-40c9-b332-2af6d7ef206e] : Timed out while waiting for systemd to remove kubepods-burstable-podc758827d_dec6_40c9_b332_2af6d7ef206e.slice" Jan 21 07:11:39 crc kubenswrapper[4893]: E0121 07:11:39.607865 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods burstable podc758827d-dec6-40c9-b332-2af6d7ef206e] : unable to destroy cgroup paths for cgroup [kubepods burstable podc758827d-dec6-40c9-b332-2af6d7ef206e] : Timed out while waiting for systemd to remove kubepods-burstable-podc758827d_dec6_40c9_b332_2af6d7ef206e.slice" pod="openshift-marketplace/certified-operators-dd6fx" podUID="c758827d-dec6-40c9-b332-2af6d7ef206e" Jan 21 07:11:40 crc kubenswrapper[4893]: I0121 07:11:40.200826 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dd6fx" Jan 21 07:11:40 crc kubenswrapper[4893]: I0121 07:11:40.249342 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dd6fx"] Jan 21 07:11:40 crc kubenswrapper[4893]: I0121 07:11:40.256010 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dd6fx"] Jan 21 07:11:41 crc kubenswrapper[4893]: I0121 07:11:41.588183 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c758827d-dec6-40c9-b332-2af6d7ef206e" path="/var/lib/kubelet/pods/c758827d-dec6-40c9-b332-2af6d7ef206e/volumes" Jan 21 07:11:47 crc kubenswrapper[4893]: I0121 07:11:47.577112 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-856864dc54-jk8lc" Jan 21 07:12:07 crc kubenswrapper[4893]: I0121 07:12:07.045622 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-d4c4497c9-rmz6v" Jan 21 07:12:07 crc kubenswrapper[4893]: I0121 07:12:07.949800 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-8hklb"] Jan 21 07:12:07 crc kubenswrapper[4893]: I0121 07:12:07.950817 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8hklb" Jan 21 07:12:07 crc kubenswrapper[4893]: I0121 07:12:07.953611 4893 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-4tdpl" Jan 21 07:12:07 crc kubenswrapper[4893]: I0121 07:12:07.954040 4893 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 21 07:12:07 crc kubenswrapper[4893]: I0121 07:12:07.966347 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-8hklb"] Jan 21 07:12:07 crc kubenswrapper[4893]: I0121 07:12:07.973796 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-x2lfk"] Jan 21 07:12:07 crc kubenswrapper[4893]: I0121 07:12:07.992362 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-x2lfk" Jan 21 07:12:07 crc kubenswrapper[4893]: I0121 07:12:07.996088 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 21 07:12:07 crc kubenswrapper[4893]: I0121 07:12:07.996145 4893 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.001832 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwhzz\" (UniqueName: \"kubernetes.io/projected/7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2-kube-api-access-xwhzz\") pod \"frr-k8s-x2lfk\" (UID: \"7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2\") " pod="metallb-system/frr-k8s-x2lfk" Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.001936 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52zt5\" (UniqueName: \"kubernetes.io/projected/d9c4da05-f65a-473f-873c-2cc7fd6c4c53-kube-api-access-52zt5\") pod \"frr-k8s-webhook-server-7df86c4f6c-8hklb\" (UID: \"d9c4da05-f65a-473f-873c-2cc7fd6c4c53\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8hklb" Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.001975 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2-reloader\") pod \"frr-k8s-x2lfk\" (UID: \"7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2\") " pod="metallb-system/frr-k8s-x2lfk" Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.002008 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2-frr-conf\") pod \"frr-k8s-x2lfk\" (UID: \"7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2\") " pod="metallb-system/frr-k8s-x2lfk" Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.002024 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2-frr-sockets\") pod \"frr-k8s-x2lfk\" (UID: \"7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2\") " pod="metallb-system/frr-k8s-x2lfk" Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.002078 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2-metrics\") pod \"frr-k8s-x2lfk\" (UID: \"7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2\") " pod="metallb-system/frr-k8s-x2lfk" Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.002117 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2-metrics-certs\") pod \"frr-k8s-x2lfk\" (UID: \"7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2\") " pod="metallb-system/frr-k8s-x2lfk" Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.002139 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2-frr-startup\") pod \"frr-k8s-x2lfk\" (UID: \"7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2\") " pod="metallb-system/frr-k8s-x2lfk" Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.002156 4893 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d9c4da05-f65a-473f-873c-2cc7fd6c4c53-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-8hklb\" (UID: \"d9c4da05-f65a-473f-873c-2cc7fd6c4c53\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8hklb" Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.086087 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-kq57r"] Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.087449 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-kq57r" Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.089467 4893 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-9fmbb" Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.090286 4893 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.091130 4893 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.091230 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.096161 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-tktrv"] Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.097963 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-tktrv" Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.101030 4893 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.118298 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwhzz\" (UniqueName: \"kubernetes.io/projected/7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2-kube-api-access-xwhzz\") pod \"frr-k8s-x2lfk\" (UID: \"7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2\") " pod="metallb-system/frr-k8s-x2lfk" Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.118429 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52zt5\" (UniqueName: \"kubernetes.io/projected/d9c4da05-f65a-473f-873c-2cc7fd6c4c53-kube-api-access-52zt5\") pod \"frr-k8s-webhook-server-7df86c4f6c-8hklb\" (UID: \"d9c4da05-f65a-473f-873c-2cc7fd6c4c53\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8hklb" Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.118488 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/36c52d4d-2838-40a8-a87a-b931b770498a-cert\") pod \"controller-6968d8fdc4-tktrv\" (UID: \"36c52d4d-2838-40a8-a87a-b931b770498a\") " pod="metallb-system/controller-6968d8fdc4-tktrv" Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.118540 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2-reloader\") pod \"frr-k8s-x2lfk\" (UID: \"7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2\") " pod="metallb-system/frr-k8s-x2lfk" Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.118579 4893 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/36c52d4d-2838-40a8-a87a-b931b770498a-metrics-certs\") pod \"controller-6968d8fdc4-tktrv\" (UID: \"36c52d4d-2838-40a8-a87a-b931b770498a\") " pod="metallb-system/controller-6968d8fdc4-tktrv" Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.118634 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2-frr-conf\") pod \"frr-k8s-x2lfk\" (UID: \"7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2\") " pod="metallb-system/frr-k8s-x2lfk" Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.118668 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2-frr-sockets\") pod \"frr-k8s-x2lfk\" (UID: \"7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2\") " pod="metallb-system/frr-k8s-x2lfk" Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.118720 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bkgb\" (UniqueName: \"kubernetes.io/projected/36c52d4d-2838-40a8-a87a-b931b770498a-kube-api-access-2bkgb\") pod \"controller-6968d8fdc4-tktrv\" (UID: \"36c52d4d-2838-40a8-a87a-b931b770498a\") " pod="metallb-system/controller-6968d8fdc4-tktrv" Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.118749 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2-metrics\") pod \"frr-k8s-x2lfk\" (UID: \"7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2\") " pod="metallb-system/frr-k8s-x2lfk" Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.118847 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2-metrics-certs\") pod \"frr-k8s-x2lfk\" (UID: \"7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2\") " pod="metallb-system/frr-k8s-x2lfk" Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.118885 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2-frr-startup\") pod \"frr-k8s-x2lfk\" (UID: \"7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2\") " pod="metallb-system/frr-k8s-x2lfk" Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.118922 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d9c4da05-f65a-473f-873c-2cc7fd6c4c53-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-8hklb\" (UID: \"d9c4da05-f65a-473f-873c-2cc7fd6c4c53\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8hklb" Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.124203 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2-frr-conf\") pod \"frr-k8s-x2lfk\" (UID: \"7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2\") " pod="metallb-system/frr-k8s-x2lfk" Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.125700 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2-metrics\") pod \"frr-k8s-x2lfk\" (UID: 
\"7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2\") " pod="metallb-system/frr-k8s-x2lfk" Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.125847 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2-frr-sockets\") pod \"frr-k8s-x2lfk\" (UID: \"7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2\") " pod="metallb-system/frr-k8s-x2lfk" Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.126318 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2-reloader\") pod \"frr-k8s-x2lfk\" (UID: \"7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2\") " pod="metallb-system/frr-k8s-x2lfk" Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.126806 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2-frr-startup\") pod \"frr-k8s-x2lfk\" (UID: \"7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2\") " pod="metallb-system/frr-k8s-x2lfk" Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.130438 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2-metrics-certs\") pod \"frr-k8s-x2lfk\" (UID: \"7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2\") " pod="metallb-system/frr-k8s-x2lfk" Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.143714 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-tktrv"] Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.150648 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d9c4da05-f65a-473f-873c-2cc7fd6c4c53-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-8hklb\" (UID: \"d9c4da05-f65a-473f-873c-2cc7fd6c4c53\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8hklb" Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.168100 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwhzz\" (UniqueName: \"kubernetes.io/projected/7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2-kube-api-access-xwhzz\") pod \"frr-k8s-x2lfk\" (UID: \"7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2\") " pod="metallb-system/frr-k8s-x2lfk" Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.182280 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52zt5\" (UniqueName: \"kubernetes.io/projected/d9c4da05-f65a-473f-873c-2cc7fd6c4c53-kube-api-access-52zt5\") pod \"frr-k8s-webhook-server-7df86c4f6c-8hklb\" (UID: \"d9c4da05-f65a-473f-873c-2cc7fd6c4c53\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8hklb" Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.224868 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/46f9fdd5-a28f-4966-9100-d15a3d829cd1-metrics-certs\") pod \"speaker-kq57r\" (UID: \"46f9fdd5-a28f-4966-9100-d15a3d829cd1\") " pod="metallb-system/speaker-kq57r" Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.224940 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkkft\" (UniqueName: \"kubernetes.io/projected/46f9fdd5-a28f-4966-9100-d15a3d829cd1-kube-api-access-nkkft\") pod \"speaker-kq57r\" (UID: 
\"46f9fdd5-a28f-4966-9100-d15a3d829cd1\") " pod="metallb-system/speaker-kq57r" Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.225012 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/46f9fdd5-a28f-4966-9100-d15a3d829cd1-memberlist\") pod \"speaker-kq57r\" (UID: \"46f9fdd5-a28f-4966-9100-d15a3d829cd1\") " pod="metallb-system/speaker-kq57r" Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.225142 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/36c52d4d-2838-40a8-a87a-b931b770498a-cert\") pod \"controller-6968d8fdc4-tktrv\" (UID: \"36c52d4d-2838-40a8-a87a-b931b770498a\") " pod="metallb-system/controller-6968d8fdc4-tktrv" Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.225184 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/36c52d4d-2838-40a8-a87a-b931b770498a-metrics-certs\") pod \"controller-6968d8fdc4-tktrv\" (UID: \"36c52d4d-2838-40a8-a87a-b931b770498a\") " pod="metallb-system/controller-6968d8fdc4-tktrv" Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.225229 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/46f9fdd5-a28f-4966-9100-d15a3d829cd1-metallb-excludel2\") pod \"speaker-kq57r\" (UID: \"46f9fdd5-a28f-4966-9100-d15a3d829cd1\") " pod="metallb-system/speaker-kq57r" Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.225263 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bkgb\" (UniqueName: \"kubernetes.io/projected/36c52d4d-2838-40a8-a87a-b931b770498a-kube-api-access-2bkgb\") pod \"controller-6968d8fdc4-tktrv\" (UID: \"36c52d4d-2838-40a8-a87a-b931b770498a\") " pod="metallb-system/controller-6968d8fdc4-tktrv" Jan 21 07:12:08 crc kubenswrapper[4893]: E0121 07:12:08.225583 4893 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Jan 21 07:12:08 crc kubenswrapper[4893]: E0121 07:12:08.225778 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36c52d4d-2838-40a8-a87a-b931b770498a-metrics-certs podName:36c52d4d-2838-40a8-a87a-b931b770498a nodeName:}" failed. No retries permitted until 2026-01-21 07:12:08.725746468 +0000 UTC m=+1069.956092370 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/36c52d4d-2838-40a8-a87a-b931b770498a-metrics-certs") pod "controller-6968d8fdc4-tktrv" (UID: "36c52d4d-2838-40a8-a87a-b931b770498a") : secret "controller-certs-secret" not found Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.229019 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/36c52d4d-2838-40a8-a87a-b931b770498a-cert\") pod \"controller-6968d8fdc4-tktrv\" (UID: \"36c52d4d-2838-40a8-a87a-b931b770498a\") " pod="metallb-system/controller-6968d8fdc4-tktrv" Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.257476 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bkgb\" (UniqueName: \"kubernetes.io/projected/36c52d4d-2838-40a8-a87a-b931b770498a-kube-api-access-2bkgb\") pod \"controller-6968d8fdc4-tktrv\" (UID: \"36c52d4d-2838-40a8-a87a-b931b770498a\") " pod="metallb-system/controller-6968d8fdc4-tktrv" Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.284883 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8hklb" Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.312794 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-x2lfk" Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.326728 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/46f9fdd5-a28f-4966-9100-d15a3d829cd1-metallb-excludel2\") pod \"speaker-kq57r\" (UID: \"46f9fdd5-a28f-4966-9100-d15a3d829cd1\") " pod="metallb-system/speaker-kq57r" Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.326790 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/46f9fdd5-a28f-4966-9100-d15a3d829cd1-metrics-certs\") pod \"speaker-kq57r\" (UID: \"46f9fdd5-a28f-4966-9100-d15a3d829cd1\") " pod="metallb-system/speaker-kq57r" Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.326820 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkkft\" (UniqueName: \"kubernetes.io/projected/46f9fdd5-a28f-4966-9100-d15a3d829cd1-kube-api-access-nkkft\") pod \"speaker-kq57r\" (UID: \"46f9fdd5-a28f-4966-9100-d15a3d829cd1\") " pod="metallb-system/speaker-kq57r" Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.326838 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/46f9fdd5-a28f-4966-9100-d15a3d829cd1-memberlist\") pod \"speaker-kq57r\" (UID: \"46f9fdd5-a28f-4966-9100-d15a3d829cd1\") " pod="metallb-system/speaker-kq57r" Jan 21 07:12:08 crc kubenswrapper[4893]: E0121 07:12:08.326961 4893 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 21 07:12:08 crc kubenswrapper[4893]: E0121 07:12:08.327010 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/46f9fdd5-a28f-4966-9100-d15a3d829cd1-memberlist podName:46f9fdd5-a28f-4966-9100-d15a3d829cd1 nodeName:}" failed. No retries permitted until 2026-01-21 07:12:08.826995965 +0000 UTC m=+1070.057341867 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/46f9fdd5-a28f-4966-9100-d15a3d829cd1-memberlist") pod "speaker-kq57r" (UID: "46f9fdd5-a28f-4966-9100-d15a3d829cd1") : secret "metallb-memberlist" not found Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.328059 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/46f9fdd5-a28f-4966-9100-d15a3d829cd1-metallb-excludel2\") pod \"speaker-kq57r\" (UID: \"46f9fdd5-a28f-4966-9100-d15a3d829cd1\") " pod="metallb-system/speaker-kq57r" Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.331456 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/46f9fdd5-a28f-4966-9100-d15a3d829cd1-metrics-certs\") pod \"speaker-kq57r\" (UID: \"46f9fdd5-a28f-4966-9100-d15a3d829cd1\") " pod="metallb-system/speaker-kq57r" Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.348324 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkkft\" (UniqueName: \"kubernetes.io/projected/46f9fdd5-a28f-4966-9100-d15a3d829cd1-kube-api-access-nkkft\") pod \"speaker-kq57r\" (UID: \"46f9fdd5-a28f-4966-9100-d15a3d829cd1\") " pod="metallb-system/speaker-kq57r" Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.537694 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-x2lfk" event={"ID":"7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2","Type":"ContainerStarted","Data":"d6557c50e9ad720524efb32f322dc90ab55cf6ef812ddd6465928307f21ff90e"} Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.735013 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/36c52d4d-2838-40a8-a87a-b931b770498a-metrics-certs\") pod \"controller-6968d8fdc4-tktrv\" (UID: \"36c52d4d-2838-40a8-a87a-b931b770498a\") " pod="metallb-system/controller-6968d8fdc4-tktrv" Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.735872 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-8hklb"] Jan 21 07:12:08 crc kubenswrapper[4893]: W0121 07:12:08.740970 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd9c4da05_f65a_473f_873c_2cc7fd6c4c53.slice/crio-0be56b5fea22454f3a0d82de353e5a4894fe7ce20950ccde69dd39505cbb179e WatchSource:0}: Error finding container 0be56b5fea22454f3a0d82de353e5a4894fe7ce20950ccde69dd39505cbb179e: Status 404 returned error can't find the container with id 0be56b5fea22454f3a0d82de353e5a4894fe7ce20950ccde69dd39505cbb179e Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.743040 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/36c52d4d-2838-40a8-a87a-b931b770498a-metrics-certs\") pod \"controller-6968d8fdc4-tktrv\" (UID: \"36c52d4d-2838-40a8-a87a-b931b770498a\") " pod="metallb-system/controller-6968d8fdc4-tktrv" Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.796229 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-tktrv" Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.836545 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/46f9fdd5-a28f-4966-9100-d15a3d829cd1-memberlist\") pod \"speaker-kq57r\" (UID: \"46f9fdd5-a28f-4966-9100-d15a3d829cd1\") " pod="metallb-system/speaker-kq57r" Jan 21 07:12:08 crc kubenswrapper[4893]: E0121 07:12:08.836716 4893 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 21 07:12:08 crc kubenswrapper[4893]: E0121 07:12:08.836807 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/46f9fdd5-a28f-4966-9100-d15a3d829cd1-memberlist podName:46f9fdd5-a28f-4966-9100-d15a3d829cd1 nodeName:}" failed. No retries permitted until 2026-01-21 07:12:09.83678719 +0000 UTC m=+1071.067133092 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/46f9fdd5-a28f-4966-9100-d15a3d829cd1-memberlist") pod "speaker-kq57r" (UID: "46f9fdd5-a28f-4966-9100-d15a3d829cd1") : secret "metallb-memberlist" not found Jan 21 07:12:08 crc kubenswrapper[4893]: I0121 07:12:08.998100 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-tktrv"] Jan 21 07:12:09 crc kubenswrapper[4893]: I0121 07:12:09.543553 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8hklb" event={"ID":"d9c4da05-f65a-473f-873c-2cc7fd6c4c53","Type":"ContainerStarted","Data":"0be56b5fea22454f3a0d82de353e5a4894fe7ce20950ccde69dd39505cbb179e"} Jan 21 07:12:09 crc kubenswrapper[4893]: I0121 07:12:09.545542 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-tktrv" event={"ID":"36c52d4d-2838-40a8-a87a-b931b770498a","Type":"ContainerStarted","Data":"a786291a1081de8cd3f858597170e7867099b71ed41a3b99a5209ca71263bca2"} Jan 21 07:12:09 crc kubenswrapper[4893]: I0121 07:12:09.545588 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-tktrv" event={"ID":"36c52d4d-2838-40a8-a87a-b931b770498a","Type":"ContainerStarted","Data":"509274875e126a541f7666cdf4eba1efba9683436ac33f7f7bd27f2cfc8d0e6b"} Jan 21 07:12:09 crc kubenswrapper[4893]: I0121 07:12:09.853287 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/46f9fdd5-a28f-4966-9100-d15a3d829cd1-memberlist\") pod \"speaker-kq57r\" (UID: \"46f9fdd5-a28f-4966-9100-d15a3d829cd1\") " pod="metallb-system/speaker-kq57r" Jan 21 07:12:09 crc kubenswrapper[4893]: I0121 07:12:09.860555 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/46f9fdd5-a28f-4966-9100-d15a3d829cd1-memberlist\") pod \"speaker-kq57r\" (UID: \"46f9fdd5-a28f-4966-9100-d15a3d829cd1\") " pod="metallb-system/speaker-kq57r" Jan 21 07:12:09 crc kubenswrapper[4893]: I0121 07:12:09.902977 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-kq57r" Jan 21 07:12:09 crc kubenswrapper[4893]: W0121 07:12:09.943726 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46f9fdd5_a28f_4966_9100_d15a3d829cd1.slice/crio-9fb2104ab6b232e079fcd2539eb679d3516a5bd30aa044757402a5b636e50dff WatchSource:0}: Error finding container 9fb2104ab6b232e079fcd2539eb679d3516a5bd30aa044757402a5b636e50dff: Status 404 returned error can't find the container with id 9fb2104ab6b232e079fcd2539eb679d3516a5bd30aa044757402a5b636e50dff Jan 21 07:12:10 crc kubenswrapper[4893]: I0121 07:12:10.553587 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-kq57r" event={"ID":"46f9fdd5-a28f-4966-9100-d15a3d829cd1","Type":"ContainerStarted","Data":"2089160f6ad207263898c47bd0b5d7b53f67250e14c9df164097ece5279e8620"} Jan 21 07:12:10 crc kubenswrapper[4893]: I0121 07:12:10.553959 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-kq57r" event={"ID":"46f9fdd5-a28f-4966-9100-d15a3d829cd1","Type":"ContainerStarted","Data":"9fb2104ab6b232e079fcd2539eb679d3516a5bd30aa044757402a5b636e50dff"} Jan 21 07:12:10 crc kubenswrapper[4893]: I0121 07:12:10.555365 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-tktrv" event={"ID":"36c52d4d-2838-40a8-a87a-b931b770498a","Type":"ContainerStarted","Data":"867f901cc6ed265af6b22ae1242cd2c467ad6dd0aab875eaac0f323c4f1af467"} Jan 21 07:12:10 crc kubenswrapper[4893]: I0121 07:12:10.556539 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-tktrv" Jan 21 07:12:11 crc kubenswrapper[4893]: I0121 07:12:11.564211 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-kq57r" event={"ID":"46f9fdd5-a28f-4966-9100-d15a3d829cd1","Type":"ContainerStarted","Data":"e188cc113805cb10b4971fa35b3859c72a9467f0e0656e7d3a772ef6cc35f3d0"} Jan 21 07:12:11 crc kubenswrapper[4893]: I0121 07:12:11.564328 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-kq57r" Jan 21 07:12:11 crc kubenswrapper[4893]: I0121 07:12:11.592708 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-kq57r" podStartSLOduration=3.592656297 podStartE2EDuration="3.592656297s" podCreationTimestamp="2026-01-21 07:12:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:12:11.590457754 +0000 UTC m=+1072.820803656" watchObservedRunningTime="2026-01-21 07:12:11.592656297 +0000 UTC m=+1072.823002199" Jan 21 07:12:11 crc kubenswrapper[4893]: I0121 07:12:11.593734 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-tktrv" podStartSLOduration=3.5937288179999998 podStartE2EDuration="3.593728818s" podCreationTimestamp="2026-01-21 07:12:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:12:10.588982164 +0000 UTC m=+1071.819328066" watchObservedRunningTime="2026-01-21 07:12:11.593728818 +0000 UTC m=+1072.824074720" Jan 21 07:12:19 crc kubenswrapper[4893]: I0121 07:12:19.628173 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8hklb" 
event={"ID":"d9c4da05-f65a-473f-873c-2cc7fd6c4c53","Type":"ContainerStarted","Data":"ee2ab0d3f59244f591b93174005da0e421f438a2045833f8afe42562cc7f06b5"} Jan 21 07:12:19 crc kubenswrapper[4893]: I0121 07:12:19.629153 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8hklb" Jan 21 07:12:19 crc kubenswrapper[4893]: I0121 07:12:19.630732 4893 generic.go:334] "Generic (PLEG): container finished" podID="7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2" containerID="d7ff78fbe1bfcdabafb21bb1866bcbba2b3e9b45c6aa8e9a61f05c2a8b6deb15" exitCode=0 Jan 21 07:12:19 crc kubenswrapper[4893]: I0121 07:12:19.630766 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-x2lfk" event={"ID":"7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2","Type":"ContainerDied","Data":"d7ff78fbe1bfcdabafb21bb1866bcbba2b3e9b45c6aa8e9a61f05c2a8b6deb15"} Jan 21 07:12:19 crc kubenswrapper[4893]: I0121 07:12:19.764043 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8hklb" podStartSLOduration=2.658701127 podStartE2EDuration="12.764017316s" podCreationTimestamp="2026-01-21 07:12:07 +0000 UTC" firstStartedPulling="2026-01-21 07:12:08.744067919 +0000 UTC m=+1069.974413821" lastFinishedPulling="2026-01-21 07:12:18.849384108 +0000 UTC m=+1080.079730010" observedRunningTime="2026-01-21 07:12:19.762600475 +0000 UTC m=+1080.992946387" watchObservedRunningTime="2026-01-21 07:12:19.764017316 +0000 UTC m=+1080.994363228" Jan 21 07:12:20 crc kubenswrapper[4893]: I0121 07:12:20.639664 4893 generic.go:334] "Generic (PLEG): container finished" podID="7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2" containerID="bf1c740bfa0f3756a2e7088b0fb1fcca2559551fe9f7158644f4d192bf73f173" exitCode=0 Jan 21 07:12:20 crc kubenswrapper[4893]: I0121 07:12:20.639810 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-x2lfk" event={"ID":"7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2","Type":"ContainerDied","Data":"bf1c740bfa0f3756a2e7088b0fb1fcca2559551fe9f7158644f4d192bf73f173"} Jan 21 07:12:21 crc kubenswrapper[4893]: I0121 07:12:21.651733 4893 generic.go:334] "Generic (PLEG): container finished" podID="7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2" containerID="53ef1968e1b3b03185b314012d859051797f897837687e91f751e04e06661eae" exitCode=0 Jan 21 07:12:21 crc kubenswrapper[4893]: I0121 07:12:21.651987 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-x2lfk" event={"ID":"7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2","Type":"ContainerDied","Data":"53ef1968e1b3b03185b314012d859051797f897837687e91f751e04e06661eae"} Jan 21 07:12:22 crc kubenswrapper[4893]: I0121 07:12:22.685379 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-x2lfk" event={"ID":"7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2","Type":"ContainerStarted","Data":"d830e042170250c3b113ddc88d60fa060b444f3c5ad4df7e2b4a61e27aef80be"} Jan 21 07:12:22 crc kubenswrapper[4893]: I0121 07:12:22.685438 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-x2lfk" event={"ID":"7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2","Type":"ContainerStarted","Data":"b6693c79752917029e5e4ca20db42cb388e3611a9223f8c6936d81de681c377b"} Jan 21 07:12:22 crc kubenswrapper[4893]: I0121 07:12:22.685452 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-x2lfk" 
event={"ID":"7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2","Type":"ContainerStarted","Data":"2f7a7c41af6c5d281edfa4bd03b80eca757709c33aaadee971cc0c922239c632"} Jan 21 07:12:22 crc kubenswrapper[4893]: I0121 07:12:22.685463 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-x2lfk" event={"ID":"7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2","Type":"ContainerStarted","Data":"e22b1fdfcd28cd6c338786ed2bfddf3910cc827085e2d5e435268a59e47da551"} Jan 21 07:12:22 crc kubenswrapper[4893]: I0121 07:12:22.685474 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-x2lfk" event={"ID":"7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2","Type":"ContainerStarted","Data":"729d350e767f54d5a770cf0faec25288bd0eda780b498e878831e11bd386e294"} Jan 21 07:12:23 crc kubenswrapper[4893]: I0121 07:12:23.976889 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-x2lfk" event={"ID":"7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2","Type":"ContainerStarted","Data":"61ab2a98d31cfcdc7f1f2406a82ff8a24939bcd76da3bfe19c7d86d60350d3ed"} Jan 21 07:12:23 crc kubenswrapper[4893]: I0121 07:12:23.977651 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-x2lfk" Jan 21 07:12:24 crc kubenswrapper[4893]: I0121 07:12:24.002800 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-x2lfk" podStartSLOduration=6.584507211 podStartE2EDuration="17.002783755s" podCreationTimestamp="2026-01-21 07:12:07 +0000 UTC" firstStartedPulling="2026-01-21 07:12:08.454233048 +0000 UTC m=+1069.684578950" lastFinishedPulling="2026-01-21 07:12:18.872509592 +0000 UTC m=+1080.102855494" observedRunningTime="2026-01-21 07:12:24.002423865 +0000 UTC m=+1085.232769767" watchObservedRunningTime="2026-01-21 07:12:24.002783755 +0000 UTC m=+1085.233129657" Jan 21 07:12:28 crc kubenswrapper[4893]: I0121 07:12:28.315301 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-x2lfk" Jan 21 07:12:28 crc kubenswrapper[4893]: I0121 07:12:28.366549 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-x2lfk" Jan 21 07:12:28 crc kubenswrapper[4893]: I0121 07:12:28.803928 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-tktrv" Jan 21 07:12:29 crc kubenswrapper[4893]: I0121 07:12:29.907431 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-kq57r" Jan 21 07:12:31 crc kubenswrapper[4893]: I0121 07:12:31.567970 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bmgb"] Jan 21 07:12:31 crc kubenswrapper[4893]: I0121 07:12:31.569743 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bmgb" Jan 21 07:12:31 crc kubenswrapper[4893]: I0121 07:12:31.574052 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 21 07:12:31 crc kubenswrapper[4893]: I0121 07:12:31.595381 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bmgb"] Jan 21 07:12:31 crc kubenswrapper[4893]: I0121 07:12:31.681170 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/66507ef1-092c-4201-a33f-bbf8851600e3-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bmgb\" (UID: \"66507ef1-092c-4201-a33f-bbf8851600e3\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bmgb" Jan 21 07:12:31 crc kubenswrapper[4893]: I0121 07:12:31.681238 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgw5t\" (UniqueName: \"kubernetes.io/projected/66507ef1-092c-4201-a33f-bbf8851600e3-kube-api-access-wgw5t\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bmgb\" (UID: \"66507ef1-092c-4201-a33f-bbf8851600e3\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bmgb" Jan 21 07:12:31 crc kubenswrapper[4893]: I0121 07:12:31.681326 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/66507ef1-092c-4201-a33f-bbf8851600e3-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bmgb\" (UID: \"66507ef1-092c-4201-a33f-bbf8851600e3\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bmgb" Jan 21 07:12:31 crc kubenswrapper[4893]: I0121 07:12:31.783017 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/66507ef1-092c-4201-a33f-bbf8851600e3-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bmgb\" (UID: \"66507ef1-092c-4201-a33f-bbf8851600e3\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bmgb" Jan 21 07:12:31 crc kubenswrapper[4893]: I0121 07:12:31.783358 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgw5t\" (UniqueName: \"kubernetes.io/projected/66507ef1-092c-4201-a33f-bbf8851600e3-kube-api-access-wgw5t\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bmgb\" (UID: \"66507ef1-092c-4201-a33f-bbf8851600e3\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bmgb" Jan 21 07:12:31 crc kubenswrapper[4893]: I0121 07:12:31.783515 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/66507ef1-092c-4201-a33f-bbf8851600e3-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bmgb\" (UID: \"66507ef1-092c-4201-a33f-bbf8851600e3\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bmgb" Jan 21 07:12:31 crc kubenswrapper[4893]: I0121 07:12:31.784029 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/66507ef1-092c-4201-a33f-bbf8851600e3-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bmgb\" (UID: \"66507ef1-092c-4201-a33f-bbf8851600e3\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bmgb" Jan 21 07:12:31 crc kubenswrapper[4893]: I0121 07:12:31.784113 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/66507ef1-092c-4201-a33f-bbf8851600e3-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bmgb\" (UID: \"66507ef1-092c-4201-a33f-bbf8851600e3\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bmgb" Jan 21 07:12:31 crc kubenswrapper[4893]: I0121 07:12:31.805117 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgw5t\" (UniqueName: \"kubernetes.io/projected/66507ef1-092c-4201-a33f-bbf8851600e3-kube-api-access-wgw5t\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bmgb\" (UID: \"66507ef1-092c-4201-a33f-bbf8851600e3\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bmgb" Jan 21 07:12:31 crc kubenswrapper[4893]: I0121 07:12:31.896787 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bmgb" Jan 21 07:12:32 crc kubenswrapper[4893]: I0121 07:12:32.512194 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bmgb"] Jan 21 07:12:33 crc kubenswrapper[4893]: I0121 07:12:33.046746 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bmgb" event={"ID":"66507ef1-092c-4201-a33f-bbf8851600e3","Type":"ContainerStarted","Data":"3f1a46ca79e3dc4bfd1164121479a7a993a090a9c9c40f04aaeb72fba6efb208"} Jan 21 07:12:34 crc kubenswrapper[4893]: I0121 07:12:34.055905 4893 generic.go:334] "Generic (PLEG): container finished" podID="66507ef1-092c-4201-a33f-bbf8851600e3" containerID="1018a2bbeb2eb06eafa5539c75833145a236f674d8368bb93d5c5303070c88ab" exitCode=0 Jan 21 07:12:34 crc kubenswrapper[4893]: I0121 07:12:34.056020 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bmgb" event={"ID":"66507ef1-092c-4201-a33f-bbf8851600e3","Type":"ContainerDied","Data":"1018a2bbeb2eb06eafa5539c75833145a236f674d8368bb93d5c5303070c88ab"} Jan 21 07:12:38 crc kubenswrapper[4893]: I0121 07:12:38.325498 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-x2lfk" Jan 21 07:12:38 crc kubenswrapper[4893]: I0121 07:12:38.328366 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8hklb" Jan 21 07:12:40 crc kubenswrapper[4893]: I0121 07:12:40.345475 4893 generic.go:334] "Generic (PLEG): container finished" podID="66507ef1-092c-4201-a33f-bbf8851600e3" containerID="2fcc85d8543bd57453e6f565a1397b50a131e3a3aab6fbe32b54da8878a9dd9c" exitCode=0 Jan 21 07:12:40 crc kubenswrapper[4893]: I0121 07:12:40.345586 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bmgb" 
event={"ID":"66507ef1-092c-4201-a33f-bbf8851600e3","Type":"ContainerDied","Data":"2fcc85d8543bd57453e6f565a1397b50a131e3a3aab6fbe32b54da8878a9dd9c"} Jan 21 07:12:41 crc kubenswrapper[4893]: I0121 07:12:41.357003 4893 generic.go:334] "Generic (PLEG): container finished" podID="66507ef1-092c-4201-a33f-bbf8851600e3" containerID="ea47295fac900a9d1e2ceb732eef656cbfd2d9cfaf1d81e3db966eaa23c0d1fd" exitCode=0 Jan 21 07:12:41 crc kubenswrapper[4893]: I0121 07:12:41.357141 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bmgb" event={"ID":"66507ef1-092c-4201-a33f-bbf8851600e3","Type":"ContainerDied","Data":"ea47295fac900a9d1e2ceb732eef656cbfd2d9cfaf1d81e3db966eaa23c0d1fd"} Jan 21 07:12:42 crc kubenswrapper[4893]: I0121 07:12:42.601414 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bmgb" Jan 21 07:12:42 crc kubenswrapper[4893]: I0121 07:12:42.614295 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgw5t\" (UniqueName: \"kubernetes.io/projected/66507ef1-092c-4201-a33f-bbf8851600e3-kube-api-access-wgw5t\") pod \"66507ef1-092c-4201-a33f-bbf8851600e3\" (UID: \"66507ef1-092c-4201-a33f-bbf8851600e3\") " Jan 21 07:12:42 crc kubenswrapper[4893]: I0121 07:12:42.614496 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/66507ef1-092c-4201-a33f-bbf8851600e3-util\") pod \"66507ef1-092c-4201-a33f-bbf8851600e3\" (UID: \"66507ef1-092c-4201-a33f-bbf8851600e3\") " Jan 21 07:12:42 crc kubenswrapper[4893]: I0121 07:12:42.614547 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/66507ef1-092c-4201-a33f-bbf8851600e3-bundle\") pod \"66507ef1-092c-4201-a33f-bbf8851600e3\" (UID: \"66507ef1-092c-4201-a33f-bbf8851600e3\") " Jan 21 07:12:42 crc kubenswrapper[4893]: I0121 07:12:42.618103 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66507ef1-092c-4201-a33f-bbf8851600e3-bundle" (OuterVolumeSpecName: "bundle") pod "66507ef1-092c-4201-a33f-bbf8851600e3" (UID: "66507ef1-092c-4201-a33f-bbf8851600e3"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:12:42 crc kubenswrapper[4893]: I0121 07:12:42.621522 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66507ef1-092c-4201-a33f-bbf8851600e3-kube-api-access-wgw5t" (OuterVolumeSpecName: "kube-api-access-wgw5t") pod "66507ef1-092c-4201-a33f-bbf8851600e3" (UID: "66507ef1-092c-4201-a33f-bbf8851600e3"). InnerVolumeSpecName "kube-api-access-wgw5t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:12:42 crc kubenswrapper[4893]: I0121 07:12:42.630327 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66507ef1-092c-4201-a33f-bbf8851600e3-util" (OuterVolumeSpecName: "util") pod "66507ef1-092c-4201-a33f-bbf8851600e3" (UID: "66507ef1-092c-4201-a33f-bbf8851600e3"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:12:42 crc kubenswrapper[4893]: I0121 07:12:42.716035 4893 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/66507ef1-092c-4201-a33f-bbf8851600e3-util\") on node \"crc\" DevicePath \"\"" Jan 21 07:12:42 crc kubenswrapper[4893]: I0121 07:12:42.716084 4893 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/66507ef1-092c-4201-a33f-bbf8851600e3-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:12:42 crc kubenswrapper[4893]: I0121 07:12:42.716097 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgw5t\" (UniqueName: \"kubernetes.io/projected/66507ef1-092c-4201-a33f-bbf8851600e3-kube-api-access-wgw5t\") on node \"crc\" DevicePath \"\"" Jan 21 07:12:43 crc kubenswrapper[4893]: I0121 07:12:43.370623 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bmgb" event={"ID":"66507ef1-092c-4201-a33f-bbf8851600e3","Type":"ContainerDied","Data":"3f1a46ca79e3dc4bfd1164121479a7a993a090a9c9c40f04aaeb72fba6efb208"} Jan 21 07:12:43 crc kubenswrapper[4893]: I0121 07:12:43.370711 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f1a46ca79e3dc4bfd1164121479a7a993a090a9c9c40f04aaeb72fba6efb208" Jan 21 07:12:43 crc kubenswrapper[4893]: I0121 07:12:43.370782 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bmgb" Jan 21 07:12:49 crc kubenswrapper[4893]: I0121 07:12:49.338903 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-2h46r"] Jan 21 07:12:49 crc kubenswrapper[4893]: E0121 07:12:49.339685 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66507ef1-092c-4201-a33f-bbf8851600e3" containerName="util" Jan 21 07:12:49 crc kubenswrapper[4893]: I0121 07:12:49.339699 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="66507ef1-092c-4201-a33f-bbf8851600e3" containerName="util" Jan 21 07:12:49 crc kubenswrapper[4893]: E0121 07:12:49.339718 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66507ef1-092c-4201-a33f-bbf8851600e3" containerName="extract" Jan 21 07:12:49 crc kubenswrapper[4893]: I0121 07:12:49.339724 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="66507ef1-092c-4201-a33f-bbf8851600e3" containerName="extract" Jan 21 07:12:49 crc kubenswrapper[4893]: E0121 07:12:49.339735 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66507ef1-092c-4201-a33f-bbf8851600e3" containerName="pull" Jan 21 07:12:49 crc kubenswrapper[4893]: I0121 07:12:49.339741 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="66507ef1-092c-4201-a33f-bbf8851600e3" containerName="pull" Jan 21 07:12:49 crc kubenswrapper[4893]: I0121 07:12:49.339858 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="66507ef1-092c-4201-a33f-bbf8851600e3" containerName="extract" Jan 21 07:12:49 crc kubenswrapper[4893]: I0121 07:12:49.340369 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-2h46r" Jan 21 07:12:49 crc kubenswrapper[4893]: I0121 07:12:49.342426 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Jan 21 07:12:49 crc kubenswrapper[4893]: I0121 07:12:49.343116 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Jan 21 07:12:49 crc kubenswrapper[4893]: I0121 07:12:49.343518 4893 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-dvmds" Jan 21 07:12:49 crc kubenswrapper[4893]: I0121 07:12:49.360220 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-2h46r"] Jan 21 07:12:49 crc kubenswrapper[4893]: I0121 07:12:49.511110 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-467xq\" (UniqueName: \"kubernetes.io/projected/ca95b613-2538-47e7-a390-88333c50a0a0-kube-api-access-467xq\") pod \"cert-manager-operator-controller-manager-64cf6dff88-2h46r\" (UID: \"ca95b613-2538-47e7-a390-88333c50a0a0\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-2h46r" Jan 21 07:12:49 crc kubenswrapper[4893]: I0121 07:12:49.511442 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ca95b613-2538-47e7-a390-88333c50a0a0-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-2h46r\" (UID: \"ca95b613-2538-47e7-a390-88333c50a0a0\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-2h46r" Jan 21 07:12:49 crc kubenswrapper[4893]: I0121 07:12:49.613403 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-467xq\" (UniqueName: \"kubernetes.io/projected/ca95b613-2538-47e7-a390-88333c50a0a0-kube-api-access-467xq\") pod \"cert-manager-operator-controller-manager-64cf6dff88-2h46r\" (UID: \"ca95b613-2538-47e7-a390-88333c50a0a0\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-2h46r" Jan 21 07:12:49 crc kubenswrapper[4893]: I0121 07:12:49.613465 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ca95b613-2538-47e7-a390-88333c50a0a0-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-2h46r\" (UID: \"ca95b613-2538-47e7-a390-88333c50a0a0\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-2h46r" Jan 21 07:12:49 crc kubenswrapper[4893]: I0121 07:12:49.701447 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ca95b613-2538-47e7-a390-88333c50a0a0-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-2h46r\" (UID: \"ca95b613-2538-47e7-a390-88333c50a0a0\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-2h46r" Jan 21 07:12:49 crc kubenswrapper[4893]: I0121 07:12:49.723657 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-467xq\" (UniqueName: \"kubernetes.io/projected/ca95b613-2538-47e7-a390-88333c50a0a0-kube-api-access-467xq\") pod \"cert-manager-operator-controller-manager-64cf6dff88-2h46r\" (UID: \"ca95b613-2538-47e7-a390-88333c50a0a0\") " 
pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-2h46r" Jan 21 07:12:49 crc kubenswrapper[4893]: I0121 07:12:49.958085 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-2h46r" Jan 21 07:12:50 crc kubenswrapper[4893]: I0121 07:12:50.582661 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-2h46r"] Jan 21 07:12:51 crc kubenswrapper[4893]: I0121 07:12:51.528571 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-2h46r" event={"ID":"ca95b613-2538-47e7-a390-88333c50a0a0","Type":"ContainerStarted","Data":"24dbf408d8a7701ad92998f24c2dec013c30f80eb319b8563f5ffa457f972f65"} Jan 21 07:13:04 crc kubenswrapper[4893]: I0121 07:13:04.910756 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-2h46r" event={"ID":"ca95b613-2538-47e7-a390-88333c50a0a0","Type":"ContainerStarted","Data":"3b5b37abfa03b91f617efd6afc665838a627df9e9086556929aad157e2d5d8cc"} Jan 21 07:13:04 crc kubenswrapper[4893]: I0121 07:13:04.939922 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-2h46r" podStartSLOduration=2.553449447 podStartE2EDuration="15.939854294s" podCreationTimestamp="2026-01-21 07:12:49 +0000 UTC" firstStartedPulling="2026-01-21 07:12:50.590769912 +0000 UTC m=+1111.821115804" lastFinishedPulling="2026-01-21 07:13:03.977174739 +0000 UTC m=+1125.207520651" observedRunningTime="2026-01-21 07:13:04.933548483 +0000 UTC m=+1126.163894395" watchObservedRunningTime="2026-01-21 07:13:04.939854294 +0000 UTC m=+1126.170200256" Jan 21 07:13:10 crc kubenswrapper[4893]: I0121 07:13:10.693248 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-fqpm5"] Jan 21 07:13:10 crc kubenswrapper[4893]: I0121 07:13:10.694609 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-fqpm5" Jan 21 07:13:10 crc kubenswrapper[4893]: I0121 07:13:10.696864 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 21 07:13:10 crc kubenswrapper[4893]: I0121 07:13:10.698282 4893 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-fzzfc" Jan 21 07:13:10 crc kubenswrapper[4893]: I0121 07:13:10.701392 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-fqpm5"] Jan 21 07:13:10 crc kubenswrapper[4893]: I0121 07:13:10.703184 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 21 07:13:10 crc kubenswrapper[4893]: I0121 07:13:10.781543 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7e801f1e-3a74-45d5-9f8c-5fee35cc9fac-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-fqpm5\" (UID: \"7e801f1e-3a74-45d5-9f8c-5fee35cc9fac\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-fqpm5" Jan 21 07:13:10 crc kubenswrapper[4893]: I0121 07:13:10.781656 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj7x5\" (UniqueName: \"kubernetes.io/projected/7e801f1e-3a74-45d5-9f8c-5fee35cc9fac-kube-api-access-dj7x5\") pod \"cert-manager-cainjector-855d9ccff4-fqpm5\" (UID: \"7e801f1e-3a74-45d5-9f8c-5fee35cc9fac\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-fqpm5" Jan 21 07:13:10 crc kubenswrapper[4893]: I0121 07:13:10.883720 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7e801f1e-3a74-45d5-9f8c-5fee35cc9fac-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-fqpm5\" (UID: \"7e801f1e-3a74-45d5-9f8c-5fee35cc9fac\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-fqpm5" Jan 21 07:13:10 crc kubenswrapper[4893]: I0121 07:13:10.883806 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dj7x5\" (UniqueName: \"kubernetes.io/projected/7e801f1e-3a74-45d5-9f8c-5fee35cc9fac-kube-api-access-dj7x5\") pod \"cert-manager-cainjector-855d9ccff4-fqpm5\" (UID: \"7e801f1e-3a74-45d5-9f8c-5fee35cc9fac\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-fqpm5" Jan 21 07:13:10 crc kubenswrapper[4893]: I0121 07:13:10.908946 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dj7x5\" (UniqueName: \"kubernetes.io/projected/7e801f1e-3a74-45d5-9f8c-5fee35cc9fac-kube-api-access-dj7x5\") pod \"cert-manager-cainjector-855d9ccff4-fqpm5\" (UID: \"7e801f1e-3a74-45d5-9f8c-5fee35cc9fac\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-fqpm5" Jan 21 07:13:10 crc kubenswrapper[4893]: I0121 07:13:10.917599 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7e801f1e-3a74-45d5-9f8c-5fee35cc9fac-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-fqpm5\" (UID: \"7e801f1e-3a74-45d5-9f8c-5fee35cc9fac\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-fqpm5" Jan 21 07:13:11 crc kubenswrapper[4893]: I0121 07:13:11.023761 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-fqpm5" Jan 21 07:13:12 crc kubenswrapper[4893]: I0121 07:13:12.091071 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-fqpm5"] Jan 21 07:13:13 crc kubenswrapper[4893]: I0121 07:13:13.099373 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-fqpm5" event={"ID":"7e801f1e-3a74-45d5-9f8c-5fee35cc9fac","Type":"ContainerStarted","Data":"19082cb194076a1d5c5b450b806fec632481a49ad0779fb20065bb3103efd7ed"} Jan 21 07:13:14 crc kubenswrapper[4893]: I0121 07:13:14.725439 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-6szpd"] Jan 21 07:13:14 crc kubenswrapper[4893]: I0121 07:13:14.727307 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-6szpd" Jan 21 07:13:14 crc kubenswrapper[4893]: I0121 07:13:14.729455 4893 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-th7fv" Jan 21 07:13:14 crc kubenswrapper[4893]: I0121 07:13:14.744172 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-6szpd"] Jan 21 07:13:14 crc kubenswrapper[4893]: I0121 07:13:14.846574 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsbrj\" (UniqueName: \"kubernetes.io/projected/4a781d80-f82d-4d7b-8974-b3cda4d98186-kube-api-access-rsbrj\") pod \"cert-manager-webhook-f4fb5df64-6szpd\" (UID: \"4a781d80-f82d-4d7b-8974-b3cda4d98186\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-6szpd" Jan 21 07:13:14 crc kubenswrapper[4893]: I0121 07:13:14.847264 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4a781d80-f82d-4d7b-8974-b3cda4d98186-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-6szpd\" (UID: \"4a781d80-f82d-4d7b-8974-b3cda4d98186\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-6szpd" Jan 21 07:13:14 crc kubenswrapper[4893]: I0121 07:13:14.949188 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsbrj\" (UniqueName: \"kubernetes.io/projected/4a781d80-f82d-4d7b-8974-b3cda4d98186-kube-api-access-rsbrj\") pod \"cert-manager-webhook-f4fb5df64-6szpd\" (UID: \"4a781d80-f82d-4d7b-8974-b3cda4d98186\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-6szpd" Jan 21 07:13:14 crc kubenswrapper[4893]: I0121 07:13:14.949268 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4a781d80-f82d-4d7b-8974-b3cda4d98186-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-6szpd\" (UID: \"4a781d80-f82d-4d7b-8974-b3cda4d98186\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-6szpd" Jan 21 07:13:15 crc kubenswrapper[4893]: I0121 07:13:15.001784 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4a781d80-f82d-4d7b-8974-b3cda4d98186-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-6szpd\" (UID: \"4a781d80-f82d-4d7b-8974-b3cda4d98186\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-6szpd" Jan 21 07:13:15 crc kubenswrapper[4893]: I0121 07:13:15.002064 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-rsbrj\" (UniqueName: \"kubernetes.io/projected/4a781d80-f82d-4d7b-8974-b3cda4d98186-kube-api-access-rsbrj\") pod \"cert-manager-webhook-f4fb5df64-6szpd\" (UID: \"4a781d80-f82d-4d7b-8974-b3cda4d98186\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-6szpd" Jan 21 07:13:15 crc kubenswrapper[4893]: I0121 07:13:15.052193 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-6szpd" Jan 21 07:13:15 crc kubenswrapper[4893]: I0121 07:13:15.448600 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-6szpd"] Jan 21 07:13:16 crc kubenswrapper[4893]: I0121 07:13:16.149463 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-6szpd" event={"ID":"4a781d80-f82d-4d7b-8974-b3cda4d98186","Type":"ContainerStarted","Data":"bec973708392360a3239e91e371e6f4ea7a25a5cfde32f434e4212058e749520"} Jan 21 07:13:23 crc kubenswrapper[4893]: I0121 07:13:23.278116 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-6szpd" event={"ID":"4a781d80-f82d-4d7b-8974-b3cda4d98186","Type":"ContainerStarted","Data":"9bfd01344ed2fd28cf925faf2546018964c2a4a4eaea4753cd66c9d241d4ec99"} Jan 21 07:13:23 crc kubenswrapper[4893]: I0121 07:13:23.280520 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-fqpm5" event={"ID":"7e801f1e-3a74-45d5-9f8c-5fee35cc9fac","Type":"ContainerStarted","Data":"c4575faabdd9fcc2db59e798d81d674094a705c767559a3b6bcbd5de42abd179"} Jan 21 07:13:23 crc kubenswrapper[4893]: I0121 07:13:23.301371 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-6szpd" podStartSLOduration=1.808323659 podStartE2EDuration="9.301343208s" podCreationTimestamp="2026-01-21 07:13:14 +0000 UTC" firstStartedPulling="2026-01-21 07:13:15.461112105 +0000 UTC m=+1136.691458007" lastFinishedPulling="2026-01-21 07:13:22.954131644 +0000 UTC m=+1144.184477556" observedRunningTime="2026-01-21 07:13:23.299439473 +0000 UTC m=+1144.529785385" watchObservedRunningTime="2026-01-21 07:13:23.301343208 +0000 UTC m=+1144.531689130" Jan 21 07:13:23 crc kubenswrapper[4893]: I0121 07:13:23.321987 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-fqpm5" podStartSLOduration=2.475771495 podStartE2EDuration="13.32196459s" podCreationTimestamp="2026-01-21 07:13:10 +0000 UTC" firstStartedPulling="2026-01-21 07:13:12.105810388 +0000 UTC m=+1133.336156290" lastFinishedPulling="2026-01-21 07:13:22.952003473 +0000 UTC m=+1144.182349385" observedRunningTime="2026-01-21 07:13:23.320520129 +0000 UTC m=+1144.550866031" watchObservedRunningTime="2026-01-21 07:13:23.32196459 +0000 UTC m=+1144.552310492" Jan 21 07:13:24 crc kubenswrapper[4893]: I0121 07:13:24.286245 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-6szpd" Jan 21 07:13:27 crc kubenswrapper[4893]: I0121 07:13:27.450251 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-9bp8c"] Jan 21 07:13:27 crc kubenswrapper[4893]: I0121 07:13:27.451466 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-9bp8c" Jan 21 07:13:27 crc kubenswrapper[4893]: I0121 07:13:27.453913 4893 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-hl2rd" Jan 21 07:13:27 crc kubenswrapper[4893]: I0121 07:13:27.460508 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-9bp8c"] Jan 21 07:13:27 crc kubenswrapper[4893]: I0121 07:13:27.531983 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1590bf32-4ee6-47a5-baac-14c054272f8e-bound-sa-token\") pod \"cert-manager-86cb77c54b-9bp8c\" (UID: \"1590bf32-4ee6-47a5-baac-14c054272f8e\") " pod="cert-manager/cert-manager-86cb77c54b-9bp8c" Jan 21 07:13:27 crc kubenswrapper[4893]: I0121 07:13:27.532065 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28krx\" (UniqueName: \"kubernetes.io/projected/1590bf32-4ee6-47a5-baac-14c054272f8e-kube-api-access-28krx\") pod \"cert-manager-86cb77c54b-9bp8c\" (UID: \"1590bf32-4ee6-47a5-baac-14c054272f8e\") " pod="cert-manager/cert-manager-86cb77c54b-9bp8c" Jan 21 07:13:27 crc kubenswrapper[4893]: I0121 07:13:27.633850 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1590bf32-4ee6-47a5-baac-14c054272f8e-bound-sa-token\") pod \"cert-manager-86cb77c54b-9bp8c\" (UID: \"1590bf32-4ee6-47a5-baac-14c054272f8e\") " pod="cert-manager/cert-manager-86cb77c54b-9bp8c" Jan 21 07:13:27 crc kubenswrapper[4893]: I0121 07:13:27.633947 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28krx\" (UniqueName: \"kubernetes.io/projected/1590bf32-4ee6-47a5-baac-14c054272f8e-kube-api-access-28krx\") pod \"cert-manager-86cb77c54b-9bp8c\" (UID: \"1590bf32-4ee6-47a5-baac-14c054272f8e\") " pod="cert-manager/cert-manager-86cb77c54b-9bp8c" Jan 21 07:13:27 crc kubenswrapper[4893]: I0121 07:13:27.662820 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1590bf32-4ee6-47a5-baac-14c054272f8e-bound-sa-token\") pod \"cert-manager-86cb77c54b-9bp8c\" (UID: \"1590bf32-4ee6-47a5-baac-14c054272f8e\") " pod="cert-manager/cert-manager-86cb77c54b-9bp8c" Jan 21 07:13:27 crc kubenswrapper[4893]: I0121 07:13:27.663036 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28krx\" (UniqueName: \"kubernetes.io/projected/1590bf32-4ee6-47a5-baac-14c054272f8e-kube-api-access-28krx\") pod \"cert-manager-86cb77c54b-9bp8c\" (UID: \"1590bf32-4ee6-47a5-baac-14c054272f8e\") " pod="cert-manager/cert-manager-86cb77c54b-9bp8c" Jan 21 07:13:27 crc kubenswrapper[4893]: I0121 07:13:27.771524 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-9bp8c" Jan 21 07:13:28 crc kubenswrapper[4893]: I0121 07:13:28.207549 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-9bp8c"] Jan 21 07:13:28 crc kubenswrapper[4893]: W0121 07:13:28.219290 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1590bf32_4ee6_47a5_baac_14c054272f8e.slice/crio-2af7b02e012de76dd427eea27971ba6bafb4de280d56251057c37312e473a372 WatchSource:0}: Error finding container 2af7b02e012de76dd427eea27971ba6bafb4de280d56251057c37312e473a372: Status 404 returned error can't find the container with id 2af7b02e012de76dd427eea27971ba6bafb4de280d56251057c37312e473a372 Jan 21 07:13:28 crc kubenswrapper[4893]: I0121 07:13:28.327020 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-9bp8c" event={"ID":"1590bf32-4ee6-47a5-baac-14c054272f8e","Type":"ContainerStarted","Data":"2af7b02e012de76dd427eea27971ba6bafb4de280d56251057c37312e473a372"} Jan 21 07:13:29 crc kubenswrapper[4893]: I0121 07:13:29.335918 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-9bp8c" event={"ID":"1590bf32-4ee6-47a5-baac-14c054272f8e","Type":"ContainerStarted","Data":"90059ef9d95f08e823f5c982661152bab993b8fa561c634adcb8bba336bb915d"} Jan 21 07:13:29 crc kubenswrapper[4893]: I0121 07:13:29.361640 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-9bp8c" podStartSLOduration=2.36159602 podStartE2EDuration="2.36159602s" podCreationTimestamp="2026-01-21 07:13:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:13:29.355740662 +0000 UTC m=+1150.586086584" watchObservedRunningTime="2026-01-21 07:13:29.36159602 +0000 UTC m=+1150.591941922" Jan 21 07:13:30 crc kubenswrapper[4893]: I0121 07:13:30.057454 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-6szpd" Jan 21 07:13:34 crc kubenswrapper[4893]: I0121 07:13:34.812335 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-mdxcj"] Jan 21 07:13:34 crc kubenswrapper[4893]: I0121 07:13:34.813755 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-mdxcj" Jan 21 07:13:34 crc kubenswrapper[4893]: I0121 07:13:34.816554 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-hrp8q" Jan 21 07:13:34 crc kubenswrapper[4893]: I0121 07:13:34.816959 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 21 07:13:34 crc kubenswrapper[4893]: I0121 07:13:34.817191 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 21 07:13:34 crc kubenswrapper[4893]: I0121 07:13:34.821412 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-mdxcj"] Jan 21 07:13:34 crc kubenswrapper[4893]: I0121 07:13:34.846993 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dz22\" (UniqueName: \"kubernetes.io/projected/3b96e0d3-b674-46a1-aaa9-22c7ae26edbd-kube-api-access-9dz22\") pod \"openstack-operator-index-mdxcj\" (UID: \"3b96e0d3-b674-46a1-aaa9-22c7ae26edbd\") " pod="openstack-operators/openstack-operator-index-mdxcj" Jan 21 07:13:34 crc kubenswrapper[4893]: I0121 07:13:34.949053 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dz22\" (UniqueName: \"kubernetes.io/projected/3b96e0d3-b674-46a1-aaa9-22c7ae26edbd-kube-api-access-9dz22\") pod \"openstack-operator-index-mdxcj\" (UID: \"3b96e0d3-b674-46a1-aaa9-22c7ae26edbd\") " pod="openstack-operators/openstack-operator-index-mdxcj" Jan 21 07:13:34 crc kubenswrapper[4893]: I0121 07:13:34.969513 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dz22\" (UniqueName: \"kubernetes.io/projected/3b96e0d3-b674-46a1-aaa9-22c7ae26edbd-kube-api-access-9dz22\") pod \"openstack-operator-index-mdxcj\" (UID: \"3b96e0d3-b674-46a1-aaa9-22c7ae26edbd\") " pod="openstack-operators/openstack-operator-index-mdxcj" Jan 21 07:13:35 crc kubenswrapper[4893]: I0121 07:13:35.173254 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-mdxcj" Jan 21 07:13:36 crc kubenswrapper[4893]: I0121 07:13:36.433443 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-mdxcj"] Jan 21 07:13:37 crc kubenswrapper[4893]: I0121 07:13:37.596441 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-mdxcj" event={"ID":"3b96e0d3-b674-46a1-aaa9-22c7ae26edbd","Type":"ContainerStarted","Data":"52d361e1b20611041db20d5e966ccab9ee65f80df9dc508a45ce2eec2e49c4f0"} Jan 21 07:13:39 crc kubenswrapper[4893]: I0121 07:13:39.596755 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-mdxcj" event={"ID":"3b96e0d3-b674-46a1-aaa9-22c7ae26edbd","Type":"ContainerStarted","Data":"6d2f5a0c58180455b953111be0802709cb420d5e43b0edcdc6486d3c128cadea"} Jan 21 07:13:45 crc kubenswrapper[4893]: I0121 07:13:45.173721 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-mdxcj" Jan 21 07:13:45 crc kubenswrapper[4893]: I0121 07:13:45.174025 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-mdxcj" Jan 21 07:13:45 crc kubenswrapper[4893]: I0121 07:13:45.212642 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-mdxcj" Jan 21 07:13:45 crc kubenswrapper[4893]: I0121 07:13:45.254349 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-mdxcj" podStartSLOduration=8.983512337 podStartE2EDuration="11.254309959s" podCreationTimestamp="2026-01-21 07:13:34 +0000 UTC" firstStartedPulling="2026-01-21 07:13:36.448807042 +0000 UTC m=+1157.679152954" lastFinishedPulling="2026-01-21 07:13:38.719604674 +0000 UTC m=+1159.949950576" observedRunningTime="2026-01-21 07:13:39.627763973 +0000 UTC m=+1160.858109875" watchObservedRunningTime="2026-01-21 07:13:45.254309959 +0000 UTC m=+1166.484655901" Jan 21 07:13:45 crc kubenswrapper[4893]: I0121 07:13:45.622234 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-mdxcj"] Jan 21 07:13:45 crc kubenswrapper[4893]: I0121 07:13:45.680416 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-mdxcj" Jan 21 07:13:46 crc kubenswrapper[4893]: I0121 07:13:46.022810 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-mwkv7"] Jan 21 07:13:46 crc kubenswrapper[4893]: I0121 07:13:46.024048 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-mwkv7" Jan 21 07:13:46 crc kubenswrapper[4893]: I0121 07:13:46.037463 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-mwkv7"] Jan 21 07:13:46 crc kubenswrapper[4893]: I0121 07:13:46.123070 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjc6k\" (UniqueName: \"kubernetes.io/projected/9cd4b07c-856e-42d0-8a00-7ecf01b01924-kube-api-access-kjc6k\") pod \"openstack-operator-index-mwkv7\" (UID: \"9cd4b07c-856e-42d0-8a00-7ecf01b01924\") " pod="openstack-operators/openstack-operator-index-mwkv7" Jan 21 07:13:46 crc kubenswrapper[4893]: I0121 07:13:46.224262 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjc6k\" (UniqueName: \"kubernetes.io/projected/9cd4b07c-856e-42d0-8a00-7ecf01b01924-kube-api-access-kjc6k\") pod \"openstack-operator-index-mwkv7\" (UID: \"9cd4b07c-856e-42d0-8a00-7ecf01b01924\") " pod="openstack-operators/openstack-operator-index-mwkv7" Jan 21 07:13:46 crc kubenswrapper[4893]: I0121 07:13:46.253025 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjc6k\" (UniqueName: \"kubernetes.io/projected/9cd4b07c-856e-42d0-8a00-7ecf01b01924-kube-api-access-kjc6k\") pod \"openstack-operator-index-mwkv7\" (UID: \"9cd4b07c-856e-42d0-8a00-7ecf01b01924\") " pod="openstack-operators/openstack-operator-index-mwkv7" Jan 21 07:13:46 crc kubenswrapper[4893]: I0121 07:13:46.396981 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-mwkv7" Jan 21 07:13:46 crc kubenswrapper[4893]: I0121 07:13:46.655443 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-mdxcj" podUID="3b96e0d3-b674-46a1-aaa9-22c7ae26edbd" containerName="registry-server" containerID="cri-o://6d2f5a0c58180455b953111be0802709cb420d5e43b0edcdc6486d3c128cadea" gracePeriod=2 Jan 21 07:13:46 crc kubenswrapper[4893]: I0121 07:13:46.851006 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-mwkv7"] Jan 21 07:13:47 crc kubenswrapper[4893]: I0121 07:13:47.664704 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-mwkv7" event={"ID":"9cd4b07c-856e-42d0-8a00-7ecf01b01924","Type":"ContainerStarted","Data":"695aba120ef9e1666beb105ae9adee6ae88aac9399b8661249306501e85dedf2"} Jan 21 07:13:47 crc kubenswrapper[4893]: I0121 07:13:47.667055 4893 generic.go:334] "Generic (PLEG): container finished" podID="3b96e0d3-b674-46a1-aaa9-22c7ae26edbd" containerID="6d2f5a0c58180455b953111be0802709cb420d5e43b0edcdc6486d3c128cadea" exitCode=0 Jan 21 07:13:47 crc kubenswrapper[4893]: I0121 07:13:47.667094 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-mdxcj" event={"ID":"3b96e0d3-b674-46a1-aaa9-22c7ae26edbd","Type":"ContainerDied","Data":"6d2f5a0c58180455b953111be0802709cb420d5e43b0edcdc6486d3c128cadea"} Jan 21 07:13:47 crc kubenswrapper[4893]: I0121 07:13:47.865660 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2egsst6"] Jan 21 07:13:47 crc kubenswrapper[4893]: I0121 07:13:47.866839 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2egsst6" Jan 21 07:13:47 crc kubenswrapper[4893]: I0121 07:13:47.869693 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-wpsx6" Jan 21 07:13:47 crc kubenswrapper[4893]: I0121 07:13:47.880970 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2egsst6"] Jan 21 07:13:48 crc kubenswrapper[4893]: I0121 07:13:48.051426 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp82z\" (UniqueName: \"kubernetes.io/projected/88357780-da4e-4ab0-810d-3271b6f37bfc-kube-api-access-jp82z\") pod \"7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2egsst6\" (UID: \"88357780-da4e-4ab0-810d-3271b6f37bfc\") " pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2egsst6" Jan 21 07:13:48 crc kubenswrapper[4893]: I0121 07:13:48.052298 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/88357780-da4e-4ab0-810d-3271b6f37bfc-bundle\") pod \"7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2egsst6\" (UID: \"88357780-da4e-4ab0-810d-3271b6f37bfc\") " pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2egsst6" Jan 21 07:13:48 crc kubenswrapper[4893]: I0121 07:13:48.052369 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/88357780-da4e-4ab0-810d-3271b6f37bfc-util\") pod \"7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2egsst6\" (UID: \"88357780-da4e-4ab0-810d-3271b6f37bfc\") " pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2egsst6" Jan 21 07:13:48 crc kubenswrapper[4893]: I0121 07:13:48.116430 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-mdxcj" Jan 21 07:13:48 crc kubenswrapper[4893]: I0121 07:13:48.153834 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/88357780-da4e-4ab0-810d-3271b6f37bfc-bundle\") pod \"7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2egsst6\" (UID: \"88357780-da4e-4ab0-810d-3271b6f37bfc\") " pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2egsst6" Jan 21 07:13:48 crc kubenswrapper[4893]: I0121 07:13:48.153928 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/88357780-da4e-4ab0-810d-3271b6f37bfc-util\") pod \"7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2egsst6\" (UID: \"88357780-da4e-4ab0-810d-3271b6f37bfc\") " pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2egsst6" Jan 21 07:13:48 crc kubenswrapper[4893]: I0121 07:13:48.153966 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jp82z\" (UniqueName: \"kubernetes.io/projected/88357780-da4e-4ab0-810d-3271b6f37bfc-kube-api-access-jp82z\") pod \"7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2egsst6\" (UID: \"88357780-da4e-4ab0-810d-3271b6f37bfc\") " pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2egsst6" Jan 21 07:13:48 crc kubenswrapper[4893]: I0121 07:13:48.154848 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/88357780-da4e-4ab0-810d-3271b6f37bfc-bundle\") pod \"7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2egsst6\" (UID: \"88357780-da4e-4ab0-810d-3271b6f37bfc\") " pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2egsst6" Jan 21 07:13:48 crc kubenswrapper[4893]: I0121 07:13:48.155097 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/88357780-da4e-4ab0-810d-3271b6f37bfc-util\") pod \"7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2egsst6\" (UID: \"88357780-da4e-4ab0-810d-3271b6f37bfc\") " pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2egsst6" Jan 21 07:13:48 crc kubenswrapper[4893]: I0121 07:13:48.173821 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jp82z\" (UniqueName: \"kubernetes.io/projected/88357780-da4e-4ab0-810d-3271b6f37bfc-kube-api-access-jp82z\") pod \"7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2egsst6\" (UID: \"88357780-da4e-4ab0-810d-3271b6f37bfc\") " pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2egsst6" Jan 21 07:13:48 crc kubenswrapper[4893]: I0121 07:13:48.184558 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2egsst6" Jan 21 07:13:48 crc kubenswrapper[4893]: I0121 07:13:48.255173 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dz22\" (UniqueName: \"kubernetes.io/projected/3b96e0d3-b674-46a1-aaa9-22c7ae26edbd-kube-api-access-9dz22\") pod \"3b96e0d3-b674-46a1-aaa9-22c7ae26edbd\" (UID: \"3b96e0d3-b674-46a1-aaa9-22c7ae26edbd\") " Jan 21 07:13:48 crc kubenswrapper[4893]: I0121 07:13:48.259493 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b96e0d3-b674-46a1-aaa9-22c7ae26edbd-kube-api-access-9dz22" (OuterVolumeSpecName: "kube-api-access-9dz22") pod "3b96e0d3-b674-46a1-aaa9-22c7ae26edbd" (UID: "3b96e0d3-b674-46a1-aaa9-22c7ae26edbd"). InnerVolumeSpecName "kube-api-access-9dz22". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:13:48 crc kubenswrapper[4893]: I0121 07:13:48.356034 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dz22\" (UniqueName: \"kubernetes.io/projected/3b96e0d3-b674-46a1-aaa9-22c7ae26edbd-kube-api-access-9dz22\") on node \"crc\" DevicePath \"\"" Jan 21 07:13:48 crc kubenswrapper[4893]: I0121 07:13:48.433749 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2egsst6"] Jan 21 07:13:48 crc kubenswrapper[4893]: W0121 07:13:48.438362 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88357780_da4e_4ab0_810d_3271b6f37bfc.slice/crio-8d953d79b894ee1af68a73b49324a2091ea16937dc98b2f86c52debe9d45034e WatchSource:0}: Error finding container 8d953d79b894ee1af68a73b49324a2091ea16937dc98b2f86c52debe9d45034e: Status 404 returned error can't find the container with id 8d953d79b894ee1af68a73b49324a2091ea16937dc98b2f86c52debe9d45034e Jan 21 07:13:48 crc kubenswrapper[4893]: I0121 07:13:48.678906 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-mdxcj" event={"ID":"3b96e0d3-b674-46a1-aaa9-22c7ae26edbd","Type":"ContainerDied","Data":"52d361e1b20611041db20d5e966ccab9ee65f80df9dc508a45ce2eec2e49c4f0"} Jan 21 07:13:48 crc kubenswrapper[4893]: I0121 07:13:48.678924 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-mdxcj" Jan 21 07:13:48 crc kubenswrapper[4893]: I0121 07:13:48.679377 4893 scope.go:117] "RemoveContainer" containerID="6d2f5a0c58180455b953111be0802709cb420d5e43b0edcdc6486d3c128cadea" Jan 21 07:13:48 crc kubenswrapper[4893]: I0121 07:13:48.682417 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-mwkv7" event={"ID":"9cd4b07c-856e-42d0-8a00-7ecf01b01924","Type":"ContainerStarted","Data":"a685674ea677ba983490361984b27626ce0078d57a2ee316d3a7645f41aa2068"} Jan 21 07:13:48 crc kubenswrapper[4893]: I0121 07:13:48.685727 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2egsst6" event={"ID":"88357780-da4e-4ab0-810d-3271b6f37bfc","Type":"ContainerStarted","Data":"f5718b09e42f8bf32ed4ed24ca6086f0b48b511c291b91338b3e34d4511e9100"} Jan 21 07:13:48 crc kubenswrapper[4893]: I0121 07:13:48.685781 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2egsst6" event={"ID":"88357780-da4e-4ab0-810d-3271b6f37bfc","Type":"ContainerStarted","Data":"8d953d79b894ee1af68a73b49324a2091ea16937dc98b2f86c52debe9d45034e"} Jan 21 07:13:48 crc kubenswrapper[4893]: I0121 07:13:48.707366 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-mwkv7" podStartSLOduration=2.016735524 podStartE2EDuration="2.707337652s" podCreationTimestamp="2026-01-21 07:13:46 +0000 UTC" firstStartedPulling="2026-01-21 07:13:46.859910094 +0000 UTC m=+1168.090256026" lastFinishedPulling="2026-01-21 07:13:47.550512212 +0000 UTC m=+1168.780858154" observedRunningTime="2026-01-21 07:13:48.701934908 +0000 UTC m=+1169.932280810" watchObservedRunningTime="2026-01-21 07:13:48.707337652 +0000 UTC m=+1169.937683554" Jan 21 07:13:48 crc kubenswrapper[4893]: I0121 07:13:48.758827 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-mdxcj"] Jan 21 07:13:48 crc kubenswrapper[4893]: I0121 07:13:48.768198 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-mdxcj"] Jan 21 07:13:49 crc kubenswrapper[4893]: I0121 07:13:49.598544 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b96e0d3-b674-46a1-aaa9-22c7ae26edbd" path="/var/lib/kubelet/pods/3b96e0d3-b674-46a1-aaa9-22c7ae26edbd/volumes" Jan 21 07:13:49 crc kubenswrapper[4893]: I0121 07:13:49.694938 4893 generic.go:334] "Generic (PLEG): container finished" podID="88357780-da4e-4ab0-810d-3271b6f37bfc" containerID="f5718b09e42f8bf32ed4ed24ca6086f0b48b511c291b91338b3e34d4511e9100" exitCode=0 Jan 21 07:13:49 crc kubenswrapper[4893]: I0121 07:13:49.694982 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2egsst6" event={"ID":"88357780-da4e-4ab0-810d-3271b6f37bfc","Type":"ContainerDied","Data":"f5718b09e42f8bf32ed4ed24ca6086f0b48b511c291b91338b3e34d4511e9100"} Jan 21 07:13:50 crc kubenswrapper[4893]: I0121 07:13:50.706883 4893 generic.go:334] "Generic (PLEG): container finished" podID="88357780-da4e-4ab0-810d-3271b6f37bfc" containerID="95f444d3838964d05ffbb86dfe54e51b2a8c209fa6672eedde54d6a96967bb1d" exitCode=0 Jan 21 07:13:50 crc kubenswrapper[4893]: I0121 07:13:50.706941 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2egsst6" event={"ID":"88357780-da4e-4ab0-810d-3271b6f37bfc","Type":"ContainerDied","Data":"95f444d3838964d05ffbb86dfe54e51b2a8c209fa6672eedde54d6a96967bb1d"} Jan 21 07:13:51 crc kubenswrapper[4893]: I0121 07:13:51.718352 4893 generic.go:334] "Generic (PLEG): container finished" podID="88357780-da4e-4ab0-810d-3271b6f37bfc" containerID="abcd2787fc4dafd029e2d59717c8f5c170630deed77a4ce1515ae0b50853b519" exitCode=0 Jan 21 07:13:51 crc kubenswrapper[4893]: I0121 07:13:51.718477 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2egsst6" event={"ID":"88357780-da4e-4ab0-810d-3271b6f37bfc","Type":"ContainerDied","Data":"abcd2787fc4dafd029e2d59717c8f5c170630deed77a4ce1515ae0b50853b519"} Jan 21 07:13:53 crc kubenswrapper[4893]: I0121 07:13:53.109792 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2egsst6" Jan 21 07:13:53 crc kubenswrapper[4893]: I0121 07:13:53.238048 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/88357780-da4e-4ab0-810d-3271b6f37bfc-util\") pod \"88357780-da4e-4ab0-810d-3271b6f37bfc\" (UID: \"88357780-da4e-4ab0-810d-3271b6f37bfc\") " Jan 21 07:13:53 crc kubenswrapper[4893]: I0121 07:13:53.238124 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jp82z\" (UniqueName: \"kubernetes.io/projected/88357780-da4e-4ab0-810d-3271b6f37bfc-kube-api-access-jp82z\") pod \"88357780-da4e-4ab0-810d-3271b6f37bfc\" (UID: \"88357780-da4e-4ab0-810d-3271b6f37bfc\") " Jan 21 07:13:53 crc kubenswrapper[4893]: I0121 07:13:53.238151 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/88357780-da4e-4ab0-810d-3271b6f37bfc-bundle\") pod \"88357780-da4e-4ab0-810d-3271b6f37bfc\" (UID: \"88357780-da4e-4ab0-810d-3271b6f37bfc\") " Jan 21 07:13:53 crc kubenswrapper[4893]: I0121 07:13:53.239488 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88357780-da4e-4ab0-810d-3271b6f37bfc-bundle" (OuterVolumeSpecName: "bundle") pod "88357780-da4e-4ab0-810d-3271b6f37bfc" (UID: "88357780-da4e-4ab0-810d-3271b6f37bfc"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:13:53 crc kubenswrapper[4893]: I0121 07:13:53.246541 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88357780-da4e-4ab0-810d-3271b6f37bfc-kube-api-access-jp82z" (OuterVolumeSpecName: "kube-api-access-jp82z") pod "88357780-da4e-4ab0-810d-3271b6f37bfc" (UID: "88357780-da4e-4ab0-810d-3271b6f37bfc"). InnerVolumeSpecName "kube-api-access-jp82z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:13:53 crc kubenswrapper[4893]: I0121 07:13:53.270626 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88357780-da4e-4ab0-810d-3271b6f37bfc-util" (OuterVolumeSpecName: "util") pod "88357780-da4e-4ab0-810d-3271b6f37bfc" (UID: "88357780-da4e-4ab0-810d-3271b6f37bfc"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:13:53 crc kubenswrapper[4893]: I0121 07:13:53.339732 4893 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/88357780-da4e-4ab0-810d-3271b6f37bfc-util\") on node \"crc\" DevicePath \"\"" Jan 21 07:13:53 crc kubenswrapper[4893]: I0121 07:13:53.339792 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jp82z\" (UniqueName: \"kubernetes.io/projected/88357780-da4e-4ab0-810d-3271b6f37bfc-kube-api-access-jp82z\") on node \"crc\" DevicePath \"\"" Jan 21 07:13:53 crc kubenswrapper[4893]: I0121 07:13:53.339816 4893 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/88357780-da4e-4ab0-810d-3271b6f37bfc-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:13:53 crc kubenswrapper[4893]: I0121 07:13:53.776550 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2egsst6" event={"ID":"88357780-da4e-4ab0-810d-3271b6f37bfc","Type":"ContainerDied","Data":"8d953d79b894ee1af68a73b49324a2091ea16937dc98b2f86c52debe9d45034e"} Jan 21 07:13:53 crc kubenswrapper[4893]: I0121 07:13:53.776615 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d953d79b894ee1af68a73b49324a2091ea16937dc98b2f86c52debe9d45034e" Jan 21 07:13:53 crc kubenswrapper[4893]: I0121 07:13:53.776634 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2egsst6" Jan 21 07:13:55 crc kubenswrapper[4893]: I0121 07:13:55.375027 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-6d4d7d8545-fx4n5"] Jan 21 07:13:55 crc kubenswrapper[4893]: E0121 07:13:55.376597 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b96e0d3-b674-46a1-aaa9-22c7ae26edbd" containerName="registry-server" Jan 21 07:13:55 crc kubenswrapper[4893]: I0121 07:13:55.376764 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b96e0d3-b674-46a1-aaa9-22c7ae26edbd" containerName="registry-server" Jan 21 07:13:55 crc kubenswrapper[4893]: E0121 07:13:55.376864 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88357780-da4e-4ab0-810d-3271b6f37bfc" containerName="pull" Jan 21 07:13:55 crc kubenswrapper[4893]: I0121 07:13:55.376944 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="88357780-da4e-4ab0-810d-3271b6f37bfc" containerName="pull" Jan 21 07:13:55 crc kubenswrapper[4893]: E0121 07:13:55.377036 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88357780-da4e-4ab0-810d-3271b6f37bfc" containerName="util" Jan 21 07:13:55 crc kubenswrapper[4893]: I0121 07:13:55.377107 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="88357780-da4e-4ab0-810d-3271b6f37bfc" containerName="util" Jan 21 07:13:55 crc kubenswrapper[4893]: E0121 07:13:55.377188 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88357780-da4e-4ab0-810d-3271b6f37bfc" containerName="extract" Jan 21 07:13:55 crc kubenswrapper[4893]: I0121 07:13:55.377250 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="88357780-da4e-4ab0-810d-3271b6f37bfc" containerName="extract" Jan 21 07:13:55 crc kubenswrapper[4893]: I0121 07:13:55.377453 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="88357780-da4e-4ab0-810d-3271b6f37bfc" 
containerName="extract" Jan 21 07:13:55 crc kubenswrapper[4893]: I0121 07:13:55.377570 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b96e0d3-b674-46a1-aaa9-22c7ae26edbd" containerName="registry-server" Jan 21 07:13:55 crc kubenswrapper[4893]: I0121 07:13:55.378260 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6d4d7d8545-fx4n5" Jan 21 07:13:55 crc kubenswrapper[4893]: I0121 07:13:55.380614 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-hmwnj" Jan 21 07:13:55 crc kubenswrapper[4893]: I0121 07:13:55.406228 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6d4d7d8545-fx4n5"] Jan 21 07:13:55 crc kubenswrapper[4893]: I0121 07:13:55.482347 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgj97\" (UniqueName: \"kubernetes.io/projected/87317749-c103-4670-b65e-e7fea5002024-kube-api-access-xgj97\") pod \"openstack-operator-controller-init-6d4d7d8545-fx4n5\" (UID: \"87317749-c103-4670-b65e-e7fea5002024\") " pod="openstack-operators/openstack-operator-controller-init-6d4d7d8545-fx4n5" Jan 21 07:13:55 crc kubenswrapper[4893]: I0121 07:13:55.583973 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgj97\" (UniqueName: \"kubernetes.io/projected/87317749-c103-4670-b65e-e7fea5002024-kube-api-access-xgj97\") pod \"openstack-operator-controller-init-6d4d7d8545-fx4n5\" (UID: \"87317749-c103-4670-b65e-e7fea5002024\") " pod="openstack-operators/openstack-operator-controller-init-6d4d7d8545-fx4n5" Jan 21 07:13:55 crc kubenswrapper[4893]: I0121 07:13:55.601140 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgj97\" (UniqueName: \"kubernetes.io/projected/87317749-c103-4670-b65e-e7fea5002024-kube-api-access-xgj97\") pod \"openstack-operator-controller-init-6d4d7d8545-fx4n5\" (UID: \"87317749-c103-4670-b65e-e7fea5002024\") " pod="openstack-operators/openstack-operator-controller-init-6d4d7d8545-fx4n5" Jan 21 07:13:55 crc kubenswrapper[4893]: I0121 07:13:55.695933 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6d4d7d8545-fx4n5" Jan 21 07:13:56 crc kubenswrapper[4893]: I0121 07:13:56.007928 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6d4d7d8545-fx4n5"] Jan 21 07:13:56 crc kubenswrapper[4893]: I0121 07:13:56.398275 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-mwkv7" Jan 21 07:13:56 crc kubenswrapper[4893]: I0121 07:13:56.398350 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-mwkv7" Jan 21 07:13:56 crc kubenswrapper[4893]: I0121 07:13:56.435511 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-mwkv7" Jan 21 07:13:56 crc kubenswrapper[4893]: I0121 07:13:56.806426 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6d4d7d8545-fx4n5" event={"ID":"87317749-c103-4670-b65e-e7fea5002024","Type":"ContainerStarted","Data":"c5d4a9ffde9e123faa1c0fb7ddca2b61b079a0050b784cd9d5cfe0aa85a77957"} Jan 21 07:13:56 crc kubenswrapper[4893]: I0121 07:13:56.954359 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-mwkv7" Jan 21 07:13:58 crc kubenswrapper[4893]: I0121 07:13:58.656662 4893 patch_prober.go:28] interesting pod/machine-config-daemon-hg78p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 07:13:58 crc kubenswrapper[4893]: I0121 07:13:58.656807 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 07:14:02 crc kubenswrapper[4893]: I0121 07:14:02.852770 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6d4d7d8545-fx4n5" event={"ID":"87317749-c103-4670-b65e-e7fea5002024","Type":"ContainerStarted","Data":"f809713074f4de3a2dec39777c64fa3287c31a3046c56d34b2f73e28dae9cea7"} Jan 21 07:14:02 crc kubenswrapper[4893]: I0121 07:14:02.853331 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-6d4d7d8545-fx4n5" Jan 21 07:14:02 crc kubenswrapper[4893]: I0121 07:14:02.895498 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-6d4d7d8545-fx4n5" podStartSLOduration=1.3786249179999999 podStartE2EDuration="7.89545684s" podCreationTimestamp="2026-01-21 07:13:55 +0000 UTC" firstStartedPulling="2026-01-21 07:13:56.017574119 +0000 UTC m=+1177.247920021" lastFinishedPulling="2026-01-21 07:14:02.534406041 +0000 UTC m=+1183.764751943" observedRunningTime="2026-01-21 07:14:02.888469851 +0000 UTC m=+1184.118815783" watchObservedRunningTime="2026-01-21 07:14:02.89545684 +0000 UTC m=+1184.125802742" Jan 21 07:14:15 crc kubenswrapper[4893]: I0121 07:14:15.699902 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/openstack-operator-controller-init-6d4d7d8545-fx4n5" Jan 21 07:14:28 crc kubenswrapper[4893]: I0121 07:14:28.678919 4893 patch_prober.go:28] interesting pod/machine-config-daemon-hg78p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 07:14:28 crc kubenswrapper[4893]: I0121 07:14:28.679449 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.476296 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-b5mdw"] Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.478200 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-b5mdw" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.485809 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7ddb5c749-d8f8v"] Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.486836 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-d8f8v" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.493455 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-b5mdw"] Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.495306 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-r8q89" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.495306 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-jbj69" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.499219 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhm47\" (UniqueName: \"kubernetes.io/projected/a7d9e99c-b2eb-481e-be87-a69b88b6609e-kube-api-access-fhm47\") pod \"cinder-operator-controller-manager-9b68f5989-b5mdw\" (UID: \"a7d9e99c-b2eb-481e-be87-a69b88b6609e\") " pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-b5mdw" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.499300 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22f46\" (UniqueName: \"kubernetes.io/projected/ec3cd342-ebee-4689-a339-72ca3fd65506-kube-api-access-22f46\") pod \"barbican-operator-controller-manager-7ddb5c749-d8f8v\" (UID: \"ec3cd342-ebee-4689-a339-72ca3fd65506\") " pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-d8f8v" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.501399 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7ddb5c749-d8f8v"] Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.507166 4893 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/designate-operator-controller-manager-9f958b845-m9lt8"] Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.508194 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-9f958b845-m9lt8" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.520128 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-c6994669c-hddtb"] Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.521894 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-c6994669c-hddtb" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.522130 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-5jnxs" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.523812 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-rm4mz" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.542448 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-c6994669c-hddtb"] Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.575832 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-kjsg4"] Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.577091 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-kjsg4" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.585507 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-kvw5r" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.600320 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czfs8\" (UniqueName: \"kubernetes.io/projected/77cb4b5b-8911-40eb-9a0a-066503abf27f-kube-api-access-czfs8\") pod \"glance-operator-controller-manager-c6994669c-hddtb\" (UID: \"77cb4b5b-8911-40eb-9a0a-066503abf27f\") " pod="openstack-operators/glance-operator-controller-manager-c6994669c-hddtb" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.600418 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhm47\" (UniqueName: \"kubernetes.io/projected/a7d9e99c-b2eb-481e-be87-a69b88b6609e-kube-api-access-fhm47\") pod \"cinder-operator-controller-manager-9b68f5989-b5mdw\" (UID: \"a7d9e99c-b2eb-481e-be87-a69b88b6609e\") " pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-b5mdw" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.600473 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22f46\" (UniqueName: \"kubernetes.io/projected/ec3cd342-ebee-4689-a339-72ca3fd65506-kube-api-access-22f46\") pod \"barbican-operator-controller-manager-7ddb5c749-d8f8v\" (UID: \"ec3cd342-ebee-4689-a339-72ca3fd65506\") " pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-d8f8v" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.600513 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt9w9\" (UniqueName: 
\"kubernetes.io/projected/00d7ea70-2b23-491d-841f-0513cdb3652f-kube-api-access-pt9w9\") pod \"designate-operator-controller-manager-9f958b845-m9lt8\" (UID: \"00d7ea70-2b23-491d-841f-0513cdb3652f\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-m9lt8" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.600534 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfcch\" (UniqueName: \"kubernetes.io/projected/6ef85e8d-2997-4005-bcf3-7a99994402d0-kube-api-access-cfcch\") pod \"heat-operator-controller-manager-594c8c9d5d-kjsg4\" (UID: \"6ef85e8d-2997-4005-bcf3-7a99994402d0\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-kjsg4" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.601978 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-m9lt8"] Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.602024 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-kjsg4"] Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.619594 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-txncj"] Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.620492 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-txncj" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.623410 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-2z2mh" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.631195 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-xjlf7"] Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.632145 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-xjlf7" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.639961 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-bhvwg" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.640128 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.651829 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-txncj"] Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.655366 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhm47\" (UniqueName: \"kubernetes.io/projected/a7d9e99c-b2eb-481e-be87-a69b88b6609e-kube-api-access-fhm47\") pod \"cinder-operator-controller-manager-9b68f5989-b5mdw\" (UID: \"a7d9e99c-b2eb-481e-be87-a69b88b6609e\") " pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-b5mdw" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.658768 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22f46\" (UniqueName: \"kubernetes.io/projected/ec3cd342-ebee-4689-a339-72ca3fd65506-kube-api-access-22f46\") pod \"barbican-operator-controller-manager-7ddb5c749-d8f8v\" (UID: \"ec3cd342-ebee-4689-a339-72ca3fd65506\") " pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-d8f8v" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.672358 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-6w85k"] Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.673279 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-6w85k" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.675641 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-5s5jj" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.680185 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-xjlf7"] Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.683938 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-6w85k"] Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.697358 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-f2jht"] Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.698365 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-f2jht" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.701781 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kljf\" (UniqueName: \"kubernetes.io/projected/af56a391-1d1e-4b94-8ec9-f1eb4f332995-kube-api-access-8kljf\") pod \"horizon-operator-controller-manager-77d5c5b54f-txncj\" (UID: \"af56a391-1d1e-4b94-8ec9-f1eb4f332995\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-txncj" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.702062 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht9gd\" (UniqueName: \"kubernetes.io/projected/c5280c4a-bab8-4a47-8fb4-91aab130cd63-kube-api-access-ht9gd\") pod \"infra-operator-controller-manager-77c48c7859-xjlf7\" (UID: \"c5280c4a-bab8-4a47-8fb4-91aab130cd63\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-xjlf7" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.702264 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt9w9\" (UniqueName: \"kubernetes.io/projected/00d7ea70-2b23-491d-841f-0513cdb3652f-kube-api-access-pt9w9\") pod \"designate-operator-controller-manager-9f958b845-m9lt8\" (UID: \"00d7ea70-2b23-491d-841f-0513cdb3652f\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-m9lt8" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.702384 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfcch\" (UniqueName: \"kubernetes.io/projected/6ef85e8d-2997-4005-bcf3-7a99994402d0-kube-api-access-cfcch\") pod \"heat-operator-controller-manager-594c8c9d5d-kjsg4\" (UID: \"6ef85e8d-2997-4005-bcf3-7a99994402d0\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-kjsg4" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.702539 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c5280c4a-bab8-4a47-8fb4-91aab130cd63-cert\") pod \"infra-operator-controller-manager-77c48c7859-xjlf7\" (UID: \"c5280c4a-bab8-4a47-8fb4-91aab130cd63\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-xjlf7" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.702705 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czfs8\" (UniqueName: \"kubernetes.io/projected/77cb4b5b-8911-40eb-9a0a-066503abf27f-kube-api-access-czfs8\") pod \"glance-operator-controller-manager-c6994669c-hddtb\" (UID: \"77cb4b5b-8911-40eb-9a0a-066503abf27f\") " pod="openstack-operators/glance-operator-controller-manager-c6994669c-hddtb" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.702837 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s9zl\" (UniqueName: \"kubernetes.io/projected/a65f5625-37ea-46b9-9f9f-f0a9e608b890-kube-api-access-4s9zl\") pod \"ironic-operator-controller-manager-78757b4889-6w85k\" (UID: \"a65f5625-37ea-46b9-9f9f-f0a9e608b890\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-6w85k" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.712205 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-864f6b75bf-j5c58"] Jan 21 
07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.713245 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-j5c58" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.714143 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-vgbxt" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.718384 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-pswpw" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.757835 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-f2jht"] Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.770758 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-tcgf7"] Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.772695 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-tcgf7" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.776543 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-t79ns" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.786262 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-864f6b75bf-j5c58"] Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.810581 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-b5mdw" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.811917 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ht9gd\" (UniqueName: \"kubernetes.io/projected/c5280c4a-bab8-4a47-8fb4-91aab130cd63-kube-api-access-ht9gd\") pod \"infra-operator-controller-manager-77c48c7859-xjlf7\" (UID: \"c5280c4a-bab8-4a47-8fb4-91aab130cd63\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-xjlf7" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.812050 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl62c\" (UniqueName: \"kubernetes.io/projected/aaae8540-3604-4523-9f39-b8bf8fd1d03c-kube-api-access-wl62c\") pod \"mariadb-operator-controller-manager-c87fff755-tcgf7\" (UID: \"aaae8540-3604-4523-9f39-b8bf8fd1d03c\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-tcgf7" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.812129 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c5280c4a-bab8-4a47-8fb4-91aab130cd63-cert\") pod \"infra-operator-controller-manager-77c48c7859-xjlf7\" (UID: \"c5280c4a-bab8-4a47-8fb4-91aab130cd63\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-xjlf7" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.812190 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dfzd\" (UniqueName: \"kubernetes.io/projected/e58e390d-227b-4d43-9216-c208196b0192-kube-api-access-4dfzd\") pod 
\"manila-operator-controller-manager-864f6b75bf-j5c58\" (UID: \"e58e390d-227b-4d43-9216-c208196b0192\") " pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-j5c58" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.812261 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4s9zl\" (UniqueName: \"kubernetes.io/projected/a65f5625-37ea-46b9-9f9f-f0a9e608b890-kube-api-access-4s9zl\") pod \"ironic-operator-controller-manager-78757b4889-6w85k\" (UID: \"a65f5625-37ea-46b9-9f9f-f0a9e608b890\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-6w85k" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.812290 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwpnl\" (UniqueName: \"kubernetes.io/projected/3b13f8c5-634b-437a-9dc9-2bfbd854de9d-kube-api-access-mwpnl\") pod \"keystone-operator-controller-manager-767fdc4f47-f2jht\" (UID: \"3b13f8c5-634b-437a-9dc9-2bfbd854de9d\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-f2jht" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.812340 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kljf\" (UniqueName: \"kubernetes.io/projected/af56a391-1d1e-4b94-8ec9-f1eb4f332995-kube-api-access-8kljf\") pod \"horizon-operator-controller-manager-77d5c5b54f-txncj\" (UID: \"af56a391-1d1e-4b94-8ec9-f1eb4f332995\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-txncj" Jan 21 07:14:35 crc kubenswrapper[4893]: E0121 07:14:35.813183 4893 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 07:14:35 crc kubenswrapper[4893]: E0121 07:14:35.813654 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5280c4a-bab8-4a47-8fb4-91aab130cd63-cert podName:c5280c4a-bab8-4a47-8fb4-91aab130cd63 nodeName:}" failed. No retries permitted until 2026-01-21 07:14:36.313615381 +0000 UTC m=+1217.543961283 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c5280c4a-bab8-4a47-8fb4-91aab130cd63-cert") pod "infra-operator-controller-manager-77c48c7859-xjlf7" (UID: "c5280c4a-bab8-4a47-8fb4-91aab130cd63") : secret "infra-operator-webhook-server-cert" not found Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.818096 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czfs8\" (UniqueName: \"kubernetes.io/projected/77cb4b5b-8911-40eb-9a0a-066503abf27f-kube-api-access-czfs8\") pod \"glance-operator-controller-manager-c6994669c-hddtb\" (UID: \"77cb4b5b-8911-40eb-9a0a-066503abf27f\") " pod="openstack-operators/glance-operator-controller-manager-c6994669c-hddtb" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.825033 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-d8f8v" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.826334 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfcch\" (UniqueName: \"kubernetes.io/projected/6ef85e8d-2997-4005-bcf3-7a99994402d0-kube-api-access-cfcch\") pod \"heat-operator-controller-manager-594c8c9d5d-kjsg4\" (UID: \"6ef85e8d-2997-4005-bcf3-7a99994402d0\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-kjsg4" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.841449 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pt9w9\" (UniqueName: \"kubernetes.io/projected/00d7ea70-2b23-491d-841f-0513cdb3652f-kube-api-access-pt9w9\") pod \"designate-operator-controller-manager-9f958b845-m9lt8\" (UID: \"00d7ea70-2b23-491d-841f-0513cdb3652f\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-m9lt8" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.855500 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4s9zl\" (UniqueName: \"kubernetes.io/projected/a65f5625-37ea-46b9-9f9f-f0a9e608b890-kube-api-access-4s9zl\") pod \"ironic-operator-controller-manager-78757b4889-6w85k\" (UID: \"a65f5625-37ea-46b9-9f9f-f0a9e608b890\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-6w85k" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.857137 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-9f958b845-m9lt8" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.865104 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-tcgf7"] Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.866047 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-c6994669c-hddtb" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.866047 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ht9gd\" (UniqueName: \"kubernetes.io/projected/c5280c4a-bab8-4a47-8fb4-91aab130cd63-kube-api-access-ht9gd\") pod \"infra-operator-controller-manager-77c48c7859-xjlf7\" (UID: \"c5280c4a-bab8-4a47-8fb4-91aab130cd63\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-xjlf7" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.881801 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kljf\" (UniqueName: \"kubernetes.io/projected/af56a391-1d1e-4b94-8ec9-f1eb4f332995-kube-api-access-8kljf\") pod \"horizon-operator-controller-manager-77d5c5b54f-txncj\" (UID: \"af56a391-1d1e-4b94-8ec9-f1eb4f332995\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-txncj" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.895612 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-htcd2"] Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.897558 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-htcd2" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.901298 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-6x69t" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.911746 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-65849867d6-g6gf8"] Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.913305 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-65849867d6-g6gf8" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.914437 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwpnl\" (UniqueName: \"kubernetes.io/projected/3b13f8c5-634b-437a-9dc9-2bfbd854de9d-kube-api-access-mwpnl\") pod \"keystone-operator-controller-manager-767fdc4f47-f2jht\" (UID: \"3b13f8c5-634b-437a-9dc9-2bfbd854de9d\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-f2jht" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.914537 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl62c\" (UniqueName: \"kubernetes.io/projected/aaae8540-3604-4523-9f39-b8bf8fd1d03c-kube-api-access-wl62c\") pod \"mariadb-operator-controller-manager-c87fff755-tcgf7\" (UID: \"aaae8540-3604-4523-9f39-b8bf8fd1d03c\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-tcgf7" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.914626 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dfzd\" (UniqueName: \"kubernetes.io/projected/e58e390d-227b-4d43-9216-c208196b0192-kube-api-access-4dfzd\") pod \"manila-operator-controller-manager-864f6b75bf-j5c58\" (UID: \"e58e390d-227b-4d43-9216-c208196b0192\") " pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-j5c58" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.914689 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgrgg\" (UniqueName: \"kubernetes.io/projected/3c023ffb-4503-4997-9fac-84414eb67f2e-kube-api-access-kgrgg\") pod \"neutron-operator-controller-manager-cb4666565-htcd2\" (UID: \"3c023ffb-4503-4997-9fac-84414eb67f2e\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-htcd2" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.915488 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-kjsg4" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.918899 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-5rkbn" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.932055 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-65849867d6-g6gf8"] Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.937946 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl62c\" (UniqueName: \"kubernetes.io/projected/aaae8540-3604-4523-9f39-b8bf8fd1d03c-kube-api-access-wl62c\") pod \"mariadb-operator-controller-manager-c87fff755-tcgf7\" (UID: \"aaae8540-3604-4523-9f39-b8bf8fd1d03c\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-tcgf7" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.940471 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwpnl\" (UniqueName: \"kubernetes.io/projected/3b13f8c5-634b-437a-9dc9-2bfbd854de9d-kube-api-access-mwpnl\") pod \"keystone-operator-controller-manager-767fdc4f47-f2jht\" (UID: \"3b13f8c5-634b-437a-9dc9-2bfbd854de9d\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-f2jht" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.940717 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dfzd\" (UniqueName: \"kubernetes.io/projected/e58e390d-227b-4d43-9216-c208196b0192-kube-api-access-4dfzd\") pod \"manila-operator-controller-manager-864f6b75bf-j5c58\" (UID: \"e58e390d-227b-4d43-9216-c208196b0192\") " pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-j5c58" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.947470 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-tcgf7" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.957189 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-htcd2"] Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.977888 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-frxpc"] Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.979075 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-frxpc" Jan 21 07:14:35 crc kubenswrapper[4893]: I0121 07:14:35.982888 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-xsxqp" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.001028 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-txncj" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.015929 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stdk6\" (UniqueName: \"kubernetes.io/projected/31bc0fab-5394-4e78-a116-2d8d09736824-kube-api-access-stdk6\") pod \"octavia-operator-controller-manager-7fc9b76cf6-frxpc\" (UID: \"31bc0fab-5394-4e78-a116-2d8d09736824\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-frxpc" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.016004 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgrgg\" (UniqueName: \"kubernetes.io/projected/3c023ffb-4503-4997-9fac-84414eb67f2e-kube-api-access-kgrgg\") pod \"neutron-operator-controller-manager-cb4666565-htcd2\" (UID: \"3c023ffb-4503-4997-9fac-84414eb67f2e\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-htcd2" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.016067 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54v26\" (UniqueName: \"kubernetes.io/projected/4b0f2392-37e2-447f-b542-e85bf4af7af9-kube-api-access-54v26\") pod \"nova-operator-controller-manager-65849867d6-g6gf8\" (UID: \"4b0f2392-37e2-447f-b542-e85bf4af7af9\") " pod="openstack-operators/nova-operator-controller-manager-65849867d6-g6gf8" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.018157 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-frxpc"] Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.025576 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-6gpxx"] Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.026549 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-6gpxx" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.029644 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-kp8tj" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.030473 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-686df47fcb-bmm9s"] Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.031222 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-bmm9s" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.032507 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-s8sz2" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.047591 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-6gpxx"] Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.062884 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-686df47fcb-bmm9s"] Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.063255 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-6w85k" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.075731 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgrgg\" (UniqueName: \"kubernetes.io/projected/3c023ffb-4503-4997-9fac-84414eb67f2e-kube-api-access-kgrgg\") pod \"neutron-operator-controller-manager-cb4666565-htcd2\" (UID: \"3c023ffb-4503-4997-9fac-84414eb67f2e\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-htcd2" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.112900 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986d69gjw"] Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.114186 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986d69gjw" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.118962 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvjv2\" (UniqueName: \"kubernetes.io/projected/aad4ef7e-44ff-4da0-8a54-b8fb68017270-kube-api-access-cvjv2\") pod \"ovn-operator-controller-manager-55db956ddc-6gpxx\" (UID: \"aad4ef7e-44ff-4da0-8a54-b8fb68017270\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-6gpxx" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.119027 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54v26\" (UniqueName: \"kubernetes.io/projected/4b0f2392-37e2-447f-b542-e85bf4af7af9-kube-api-access-54v26\") pod \"nova-operator-controller-manager-65849867d6-g6gf8\" (UID: \"4b0f2392-37e2-447f-b542-e85bf4af7af9\") " pod="openstack-operators/nova-operator-controller-manager-65849867d6-g6gf8" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.119109 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stdk6\" (UniqueName: \"kubernetes.io/projected/31bc0fab-5394-4e78-a116-2d8d09736824-kube-api-access-stdk6\") pod \"octavia-operator-controller-manager-7fc9b76cf6-frxpc\" (UID: \"31bc0fab-5394-4e78-a116-2d8d09736824\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-frxpc" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.119139 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbjbl\" (UniqueName: \"kubernetes.io/projected/ac6cc898-5b96-4a0a-8014-bf17132e44fc-kube-api-access-sbjbl\") pod \"placement-operator-controller-manager-686df47fcb-bmm9s\" (UID: \"ac6cc898-5b96-4a0a-8014-bf17132e44fc\") " pod="openstack-operators/placement-operator-controller-manager-686df47fcb-bmm9s" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.120782 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.121347 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-2krnh" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.121531 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-85dd56d4cc-9hqln"] Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.123709 4893 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-9hqln" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.132347 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-9f8v6" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.144072 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54v26\" (UniqueName: \"kubernetes.io/projected/4b0f2392-37e2-447f-b542-e85bf4af7af9-kube-api-access-54v26\") pod \"nova-operator-controller-manager-65849867d6-g6gf8\" (UID: \"4b0f2392-37e2-447f-b542-e85bf4af7af9\") " pod="openstack-operators/nova-operator-controller-manager-65849867d6-g6gf8" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.145152 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986d69gjw"] Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.149077 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stdk6\" (UniqueName: \"kubernetes.io/projected/31bc0fab-5394-4e78-a116-2d8d09736824-kube-api-access-stdk6\") pod \"octavia-operator-controller-manager-7fc9b76cf6-frxpc\" (UID: \"31bc0fab-5394-4e78-a116-2d8d09736824\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-frxpc" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.154497 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-v98wk"] Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.156603 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-v98wk" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.160262 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-jdtlf" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.171731 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-85dd56d4cc-9hqln"] Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.283137 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-htcd2" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.284188 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-j5c58" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.285720 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-f2jht" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.285799 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hx9l\" (UniqueName: \"kubernetes.io/projected/f6fcb0d4-e51c-476f-9411-469bbdbd7f4e-kube-api-access-9hx9l\") pod \"swift-operator-controller-manager-85dd56d4cc-9hqln\" (UID: \"f6fcb0d4-e51c-476f-9411-469bbdbd7f4e\") " pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-9hqln" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.285832 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbjbl\" (UniqueName: \"kubernetes.io/projected/ac6cc898-5b96-4a0a-8014-bf17132e44fc-kube-api-access-sbjbl\") pod \"placement-operator-controller-manager-686df47fcb-bmm9s\" (UID: \"ac6cc898-5b96-4a0a-8014-bf17132e44fc\") " pod="openstack-operators/placement-operator-controller-manager-686df47fcb-bmm9s" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.286059 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txgh6\" (UniqueName: \"kubernetes.io/projected/4142220f-0688-47a2-9bec-d655f97fe3c6-kube-api-access-txgh6\") pod \"openstack-baremetal-operator-controller-manager-5b9875986d69gjw\" (UID: \"4142220f-0688-47a2-9bec-d655f97fe3c6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986d69gjw" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.287032 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-65849867d6-g6gf8" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.300470 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-frxpc" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.303701 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4142220f-0688-47a2-9bec-d655f97fe3c6-cert\") pod \"openstack-baremetal-operator-controller-manager-5b9875986d69gjw\" (UID: \"4142220f-0688-47a2-9bec-d655f97fe3c6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986d69gjw" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.303787 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvjv2\" (UniqueName: \"kubernetes.io/projected/aad4ef7e-44ff-4da0-8a54-b8fb68017270-kube-api-access-cvjv2\") pod \"ovn-operator-controller-manager-55db956ddc-6gpxx\" (UID: \"aad4ef7e-44ff-4da0-8a54-b8fb68017270\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-6gpxx" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.303884 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsfj4\" (UniqueName: \"kubernetes.io/projected/9c6d7f75-6c22-44ec-ba62-a1223f2eaa3b-kube-api-access-hsfj4\") pod \"telemetry-operator-controller-manager-5f8f495fcf-v98wk\" (UID: \"9c6d7f75-6c22-44ec-ba62-a1223f2eaa3b\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-v98wk" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.310540 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbjbl\" (UniqueName: \"kubernetes.io/projected/ac6cc898-5b96-4a0a-8014-bf17132e44fc-kube-api-access-sbjbl\") pod \"placement-operator-controller-manager-686df47fcb-bmm9s\" (UID: \"ac6cc898-5b96-4a0a-8014-bf17132e44fc\") " pod="openstack-operators/placement-operator-controller-manager-686df47fcb-bmm9s" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.328361 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-v98wk"] Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.340729 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvjv2\" (UniqueName: \"kubernetes.io/projected/aad4ef7e-44ff-4da0-8a54-b8fb68017270-kube-api-access-cvjv2\") pod \"ovn-operator-controller-manager-55db956ddc-6gpxx\" (UID: \"aad4ef7e-44ff-4da0-8a54-b8fb68017270\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-6gpxx" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.349118 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-6gpxx" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.373095 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-bmm9s" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.397712 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7cd8bc9dbb-ccg72"] Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.401228 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-ccg72" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.402451 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7cd8bc9dbb-ccg72"] Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.405074 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-99rqt" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.407650 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c5280c4a-bab8-4a47-8fb4-91aab130cd63-cert\") pod \"infra-operator-controller-manager-77c48c7859-xjlf7\" (UID: \"c5280c4a-bab8-4a47-8fb4-91aab130cd63\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-xjlf7" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.407730 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hx9l\" (UniqueName: \"kubernetes.io/projected/f6fcb0d4-e51c-476f-9411-469bbdbd7f4e-kube-api-access-9hx9l\") pod \"swift-operator-controller-manager-85dd56d4cc-9hqln\" (UID: \"f6fcb0d4-e51c-476f-9411-469bbdbd7f4e\") " pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-9hqln" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.407796 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txgh6\" (UniqueName: \"kubernetes.io/projected/4142220f-0688-47a2-9bec-d655f97fe3c6-kube-api-access-txgh6\") pod \"openstack-baremetal-operator-controller-manager-5b9875986d69gjw\" (UID: \"4142220f-0688-47a2-9bec-d655f97fe3c6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986d69gjw" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.407841 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4142220f-0688-47a2-9bec-d655f97fe3c6-cert\") pod \"openstack-baremetal-operator-controller-manager-5b9875986d69gjw\" (UID: \"4142220f-0688-47a2-9bec-d655f97fe3c6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986d69gjw" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.407910 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsfj4\" (UniqueName: \"kubernetes.io/projected/9c6d7f75-6c22-44ec-ba62-a1223f2eaa3b-kube-api-access-hsfj4\") pod \"telemetry-operator-controller-manager-5f8f495fcf-v98wk\" (UID: \"9c6d7f75-6c22-44ec-ba62-a1223f2eaa3b\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-v98wk" Jan 21 07:14:36 crc kubenswrapper[4893]: E0121 07:14:36.408316 4893 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 07:14:36 crc kubenswrapper[4893]: E0121 07:14:36.408364 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5280c4a-bab8-4a47-8fb4-91aab130cd63-cert podName:c5280c4a-bab8-4a47-8fb4-91aab130cd63 nodeName:}" failed. No retries permitted until 2026-01-21 07:14:37.408346933 +0000 UTC m=+1218.638692835 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c5280c4a-bab8-4a47-8fb4-91aab130cd63-cert") pod "infra-operator-controller-manager-77c48c7859-xjlf7" (UID: "c5280c4a-bab8-4a47-8fb4-91aab130cd63") : secret "infra-operator-webhook-server-cert" not found Jan 21 07:14:36 crc kubenswrapper[4893]: E0121 07:14:36.408981 4893 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 07:14:36 crc kubenswrapper[4893]: E0121 07:14:36.409016 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4142220f-0688-47a2-9bec-d655f97fe3c6-cert podName:4142220f-0688-47a2-9bec-d655f97fe3c6 nodeName:}" failed. No retries permitted until 2026-01-21 07:14:36.909004581 +0000 UTC m=+1218.139350483 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4142220f-0688-47a2-9bec-d655f97fe3c6-cert") pod "openstack-baremetal-operator-controller-manager-5b9875986d69gjw" (UID: "4142220f-0688-47a2-9bec-d655f97fe3c6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.454707 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsfj4\" (UniqueName: \"kubernetes.io/projected/9c6d7f75-6c22-44ec-ba62-a1223f2eaa3b-kube-api-access-hsfj4\") pod \"telemetry-operator-controller-manager-5f8f495fcf-v98wk\" (UID: \"9c6d7f75-6c22-44ec-ba62-a1223f2eaa3b\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-v98wk" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.455089 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txgh6\" (UniqueName: \"kubernetes.io/projected/4142220f-0688-47a2-9bec-d655f97fe3c6-kube-api-access-txgh6\") pod \"openstack-baremetal-operator-controller-manager-5b9875986d69gjw\" (UID: \"4142220f-0688-47a2-9bec-d655f97fe3c6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986d69gjw" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.465760 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-64cd966744-6ppkr"] Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.467135 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-6ppkr" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.470435 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hx9l\" (UniqueName: \"kubernetes.io/projected/f6fcb0d4-e51c-476f-9411-469bbdbd7f4e-kube-api-access-9hx9l\") pod \"swift-operator-controller-manager-85dd56d4cc-9hqln\" (UID: \"f6fcb0d4-e51c-476f-9411-469bbdbd7f4e\") " pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-9hqln" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.486471 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-64cd966744-6ppkr"] Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.497074 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-mpfzp" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.509636 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-672np\" (UniqueName: \"kubernetes.io/projected/271330f3-2299-491c-a7cc-56e7e4e5af9a-kube-api-access-672np\") pod \"watcher-operator-controller-manager-64cd966744-6ppkr\" (UID: \"271330f3-2299-491c-a7cc-56e7e4e5af9a\") " pod="openstack-operators/watcher-operator-controller-manager-64cd966744-6ppkr" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.509831 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdd4r\" (UniqueName: \"kubernetes.io/projected/12028a4c-13ac-46cd-862e-7a6e01614e1a-kube-api-access-tdd4r\") pod \"test-operator-controller-manager-7cd8bc9dbb-ccg72\" (UID: \"12028a4c-13ac-46cd-862e-7a6e01614e1a\") " pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-ccg72" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.690617 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-9hqln" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.690852 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-672np\" (UniqueName: \"kubernetes.io/projected/271330f3-2299-491c-a7cc-56e7e4e5af9a-kube-api-access-672np\") pod \"watcher-operator-controller-manager-64cd966744-6ppkr\" (UID: \"271330f3-2299-491c-a7cc-56e7e4e5af9a\") " pod="openstack-operators/watcher-operator-controller-manager-64cd966744-6ppkr" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.692069 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-v98wk" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.696044 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdd4r\" (UniqueName: \"kubernetes.io/projected/12028a4c-13ac-46cd-862e-7a6e01614e1a-kube-api-access-tdd4r\") pod \"test-operator-controller-manager-7cd8bc9dbb-ccg72\" (UID: \"12028a4c-13ac-46cd-862e-7a6e01614e1a\") " pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-ccg72" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.720265 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-75bfd788c8-2dz2q"] Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.722066 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-2dz2q" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.728132 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-75bfd788c8-2dz2q"] Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.731056 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdd4r\" (UniqueName: \"kubernetes.io/projected/12028a4c-13ac-46cd-862e-7a6e01614e1a-kube-api-access-tdd4r\") pod \"test-operator-controller-manager-7cd8bc9dbb-ccg72\" (UID: \"12028a4c-13ac-46cd-862e-7a6e01614e1a\") " pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-ccg72" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.738799 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.739551 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-vlnlp" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.744275 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.749274 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-672np\" (UniqueName: \"kubernetes.io/projected/271330f3-2299-491c-a7cc-56e7e4e5af9a-kube-api-access-672np\") pod \"watcher-operator-controller-manager-64cd966744-6ppkr\" (UID: \"271330f3-2299-491c-a7cc-56e7e4e5af9a\") " pod="openstack-operators/watcher-operator-controller-manager-64cd966744-6ppkr" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.753761 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zw22v"] Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.754769 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zw22v" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.764328 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zw22v"] Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.767870 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-vpfnj" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.925419 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-6ppkr" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.929940 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-ccg72" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.930356 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmx65\" (UniqueName: \"kubernetes.io/projected/86f3a900-b203-4f96-b922-b7fdf0afab7b-kube-api-access-cmx65\") pod \"openstack-operator-controller-manager-75bfd788c8-2dz2q\" (UID: \"86f3a900-b203-4f96-b922-b7fdf0afab7b\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-2dz2q" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.930444 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4142220f-0688-47a2-9bec-d655f97fe3c6-cert\") pod \"openstack-baremetal-operator-controller-manager-5b9875986d69gjw\" (UID: \"4142220f-0688-47a2-9bec-d655f97fe3c6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986d69gjw" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.930515 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rbj5\" (UniqueName: \"kubernetes.io/projected/bac4cdab-0839-4940-9a12-bb933e88a1da-kube-api-access-9rbj5\") pod \"rabbitmq-cluster-operator-manager-668c99d594-zw22v\" (UID: \"bac4cdab-0839-4940-9a12-bb933e88a1da\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zw22v" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.930537 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86f3a900-b203-4f96-b922-b7fdf0afab7b-metrics-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-2dz2q\" (UID: \"86f3a900-b203-4f96-b922-b7fdf0afab7b\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-2dz2q" Jan 21 07:14:36 crc kubenswrapper[4893]: I0121 07:14:36.930588 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/86f3a900-b203-4f96-b922-b7fdf0afab7b-webhook-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-2dz2q\" (UID: \"86f3a900-b203-4f96-b922-b7fdf0afab7b\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-2dz2q" Jan 21 07:14:36 crc kubenswrapper[4893]: E0121 07:14:36.931206 4893 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 07:14:36 crc 
kubenswrapper[4893]: E0121 07:14:36.931281 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4142220f-0688-47a2-9bec-d655f97fe3c6-cert podName:4142220f-0688-47a2-9bec-d655f97fe3c6 nodeName:}" failed. No retries permitted until 2026-01-21 07:14:37.931258283 +0000 UTC m=+1219.161604185 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4142220f-0688-47a2-9bec-d655f97fe3c6-cert") pod "openstack-baremetal-operator-controller-manager-5b9875986d69gjw" (UID: "4142220f-0688-47a2-9bec-d655f97fe3c6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 07:14:37 crc kubenswrapper[4893]: I0121 07:14:37.072040 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/86f3a900-b203-4f96-b922-b7fdf0afab7b-webhook-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-2dz2q\" (UID: \"86f3a900-b203-4f96-b922-b7fdf0afab7b\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-2dz2q" Jan 21 07:14:37 crc kubenswrapper[4893]: I0121 07:14:37.072104 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmx65\" (UniqueName: \"kubernetes.io/projected/86f3a900-b203-4f96-b922-b7fdf0afab7b-kube-api-access-cmx65\") pod \"openstack-operator-controller-manager-75bfd788c8-2dz2q\" (UID: \"86f3a900-b203-4f96-b922-b7fdf0afab7b\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-2dz2q" Jan 21 07:14:37 crc kubenswrapper[4893]: I0121 07:14:37.072214 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rbj5\" (UniqueName: \"kubernetes.io/projected/bac4cdab-0839-4940-9a12-bb933e88a1da-kube-api-access-9rbj5\") pod \"rabbitmq-cluster-operator-manager-668c99d594-zw22v\" (UID: \"bac4cdab-0839-4940-9a12-bb933e88a1da\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zw22v" Jan 21 07:14:37 crc kubenswrapper[4893]: I0121 07:14:37.072268 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86f3a900-b203-4f96-b922-b7fdf0afab7b-metrics-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-2dz2q\" (UID: \"86f3a900-b203-4f96-b922-b7fdf0afab7b\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-2dz2q" Jan 21 07:14:37 crc kubenswrapper[4893]: E0121 07:14:37.072442 4893 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 07:14:37 crc kubenswrapper[4893]: E0121 07:14:37.072544 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86f3a900-b203-4f96-b922-b7fdf0afab7b-metrics-certs podName:86f3a900-b203-4f96-b922-b7fdf0afab7b nodeName:}" failed. No retries permitted until 2026-01-21 07:14:37.572514547 +0000 UTC m=+1218.802860449 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/86f3a900-b203-4f96-b922-b7fdf0afab7b-metrics-certs") pod "openstack-operator-controller-manager-75bfd788c8-2dz2q" (UID: "86f3a900-b203-4f96-b922-b7fdf0afab7b") : secret "metrics-server-cert" not found Jan 21 07:14:37 crc kubenswrapper[4893]: E0121 07:14:37.073038 4893 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 07:14:37 crc kubenswrapper[4893]: E0121 07:14:37.073134 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86f3a900-b203-4f96-b922-b7fdf0afab7b-webhook-certs podName:86f3a900-b203-4f96-b922-b7fdf0afab7b nodeName:}" failed. No retries permitted until 2026-01-21 07:14:37.573057962 +0000 UTC m=+1218.803403864 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/86f3a900-b203-4f96-b922-b7fdf0afab7b-webhook-certs") pod "openstack-operator-controller-manager-75bfd788c8-2dz2q" (UID: "86f3a900-b203-4f96-b922-b7fdf0afab7b") : secret "webhook-server-cert" not found Jan 21 07:14:37 crc kubenswrapper[4893]: I0121 07:14:37.103815 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmx65\" (UniqueName: \"kubernetes.io/projected/86f3a900-b203-4f96-b922-b7fdf0afab7b-kube-api-access-cmx65\") pod \"openstack-operator-controller-manager-75bfd788c8-2dz2q\" (UID: \"86f3a900-b203-4f96-b922-b7fdf0afab7b\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-2dz2q" Jan 21 07:14:37 crc kubenswrapper[4893]: I0121 07:14:37.111113 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rbj5\" (UniqueName: \"kubernetes.io/projected/bac4cdab-0839-4940-9a12-bb933e88a1da-kube-api-access-9rbj5\") pod \"rabbitmq-cluster-operator-manager-668c99d594-zw22v\" (UID: \"bac4cdab-0839-4940-9a12-bb933e88a1da\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zw22v" Jan 21 07:14:37 crc kubenswrapper[4893]: I0121 07:14:37.283465 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7ddb5c749-d8f8v"] Jan 21 07:14:37 crc kubenswrapper[4893]: I0121 07:14:37.386652 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zw22v" Jan 21 07:14:37 crc kubenswrapper[4893]: I0121 07:14:37.478870 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c5280c4a-bab8-4a47-8fb4-91aab130cd63-cert\") pod \"infra-operator-controller-manager-77c48c7859-xjlf7\" (UID: \"c5280c4a-bab8-4a47-8fb4-91aab130cd63\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-xjlf7" Jan 21 07:14:37 crc kubenswrapper[4893]: E0121 07:14:37.479253 4893 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 07:14:37 crc kubenswrapper[4893]: E0121 07:14:37.479470 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5280c4a-bab8-4a47-8fb4-91aab130cd63-cert podName:c5280c4a-bab8-4a47-8fb4-91aab130cd63 nodeName:}" failed. No retries permitted until 2026-01-21 07:14:39.479424255 +0000 UTC m=+1220.709770147 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c5280c4a-bab8-4a47-8fb4-91aab130cd63-cert") pod "infra-operator-controller-manager-77c48c7859-xjlf7" (UID: "c5280c4a-bab8-4a47-8fb4-91aab130cd63") : secret "infra-operator-webhook-server-cert" not found Jan 21 07:14:37 crc kubenswrapper[4893]: I0121 07:14:37.641721 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86f3a900-b203-4f96-b922-b7fdf0afab7b-metrics-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-2dz2q\" (UID: \"86f3a900-b203-4f96-b922-b7fdf0afab7b\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-2dz2q" Jan 21 07:14:37 crc kubenswrapper[4893]: I0121 07:14:37.641803 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/86f3a900-b203-4f96-b922-b7fdf0afab7b-webhook-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-2dz2q\" (UID: \"86f3a900-b203-4f96-b922-b7fdf0afab7b\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-2dz2q" Jan 21 07:14:37 crc kubenswrapper[4893]: E0121 07:14:37.641928 4893 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 07:14:37 crc kubenswrapper[4893]: E0121 07:14:37.641976 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86f3a900-b203-4f96-b922-b7fdf0afab7b-webhook-certs podName:86f3a900-b203-4f96-b922-b7fdf0afab7b nodeName:}" failed. No retries permitted until 2026-01-21 07:14:38.641961476 +0000 UTC m=+1219.872307378 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/86f3a900-b203-4f96-b922-b7fdf0afab7b-webhook-certs") pod "openstack-operator-controller-manager-75bfd788c8-2dz2q" (UID: "86f3a900-b203-4f96-b922-b7fdf0afab7b") : secret "webhook-server-cert" not found Jan 21 07:14:37 crc kubenswrapper[4893]: E0121 07:14:37.642023 4893 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 07:14:37 crc kubenswrapper[4893]: E0121 07:14:37.642099 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86f3a900-b203-4f96-b922-b7fdf0afab7b-metrics-certs podName:86f3a900-b203-4f96-b922-b7fdf0afab7b nodeName:}" failed. No retries permitted until 2026-01-21 07:14:38.642080509 +0000 UTC m=+1219.872426461 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/86f3a900-b203-4f96-b922-b7fdf0afab7b-metrics-certs") pod "openstack-operator-controller-manager-75bfd788c8-2dz2q" (UID: "86f3a900-b203-4f96-b922-b7fdf0afab7b") : secret "metrics-server-cert" not found Jan 21 07:14:37 crc kubenswrapper[4893]: I0121 07:14:37.954253 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4142220f-0688-47a2-9bec-d655f97fe3c6-cert\") pod \"openstack-baremetal-operator-controller-manager-5b9875986d69gjw\" (UID: \"4142220f-0688-47a2-9bec-d655f97fe3c6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986d69gjw" Jan 21 07:14:37 crc kubenswrapper[4893]: E0121 07:14:37.954424 4893 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 07:14:37 crc kubenswrapper[4893]: E0121 07:14:37.954606 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4142220f-0688-47a2-9bec-d655f97fe3c6-cert podName:4142220f-0688-47a2-9bec-d655f97fe3c6 nodeName:}" failed. No retries permitted until 2026-01-21 07:14:39.954588632 +0000 UTC m=+1221.184934534 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4142220f-0688-47a2-9bec-d655f97fe3c6-cert") pod "openstack-baremetal-operator-controller-manager-5b9875986d69gjw" (UID: "4142220f-0688-47a2-9bec-d655f97fe3c6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 07:14:38 crc kubenswrapper[4893]: I0121 07:14:38.226055 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-tcgf7"] Jan 21 07:14:38 crc kubenswrapper[4893]: I0121 07:14:38.376386 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-d8f8v" event={"ID":"ec3cd342-ebee-4689-a339-72ca3fd65506","Type":"ContainerStarted","Data":"196294a84173076aee60d6522e65f5ebae7c56ff7e73fd0946d0f8f726d2eebb"} Jan 21 07:14:38 crc kubenswrapper[4893]: I0121 07:14:38.377431 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-tcgf7" event={"ID":"aaae8540-3604-4523-9f39-b8bf8fd1d03c","Type":"ContainerStarted","Data":"a85921fb363d49c1db1b07fa1a0916fc426d339226154a4bc65d1fcb714d0248"} Jan 21 07:14:38 crc kubenswrapper[4893]: I0121 07:14:38.413856 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-c6994669c-hddtb"] Jan 21 07:14:38 crc kubenswrapper[4893]: I0121 07:14:38.434060 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-txncj"] Jan 21 07:14:38 crc kubenswrapper[4893]: I0121 07:14:38.451792 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-m9lt8"] Jan 21 07:14:38 crc kubenswrapper[4893]: W0121 07:14:38.457377 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00d7ea70_2b23_491d_841f_0513cdb3652f.slice/crio-36942a4be711bf2ac12e70e62fff9d5d64ee1ae49276c4835c247dfc6ce4d11a WatchSource:0}: Error finding container 
36942a4be711bf2ac12e70e62fff9d5d64ee1ae49276c4835c247dfc6ce4d11a: Status 404 returned error can't find the container with id 36942a4be711bf2ac12e70e62fff9d5d64ee1ae49276c4835c247dfc6ce4d11a Jan 21 07:14:38 crc kubenswrapper[4893]: W0121 07:14:38.461788 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7d9e99c_b2eb_481e_be87_a69b88b6609e.slice/crio-2e978cdee65dcbf0d9d97befb63ec22d943c25980de23c98a47f13399dfd3c40 WatchSource:0}: Error finding container 2e978cdee65dcbf0d9d97befb63ec22d943c25980de23c98a47f13399dfd3c40: Status 404 returned error can't find the container with id 2e978cdee65dcbf0d9d97befb63ec22d943c25980de23c98a47f13399dfd3c40 Jan 21 07:14:38 crc kubenswrapper[4893]: I0121 07:14:38.463612 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-b5mdw"] Jan 21 07:14:38 crc kubenswrapper[4893]: I0121 07:14:38.474489 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-kjsg4"] Jan 21 07:14:38 crc kubenswrapper[4893]: I0121 07:14:38.633006 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-6gpxx"] Jan 21 07:14:38 crc kubenswrapper[4893]: I0121 07:14:38.664015 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-frxpc"] Jan 21 07:14:38 crc kubenswrapper[4893]: I0121 07:14:38.676137 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86f3a900-b203-4f96-b922-b7fdf0afab7b-metrics-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-2dz2q\" (UID: \"86f3a900-b203-4f96-b922-b7fdf0afab7b\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-2dz2q" Jan 21 07:14:38 crc kubenswrapper[4893]: I0121 07:14:38.676303 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/86f3a900-b203-4f96-b922-b7fdf0afab7b-webhook-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-2dz2q\" (UID: \"86f3a900-b203-4f96-b922-b7fdf0afab7b\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-2dz2q" Jan 21 07:14:38 crc kubenswrapper[4893]: E0121 07:14:38.676409 4893 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 07:14:38 crc kubenswrapper[4893]: E0121 07:14:38.676483 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86f3a900-b203-4f96-b922-b7fdf0afab7b-metrics-certs podName:86f3a900-b203-4f96-b922-b7fdf0afab7b nodeName:}" failed. No retries permitted until 2026-01-21 07:14:40.676463793 +0000 UTC m=+1221.906809695 (durationBeforeRetry 2s). 
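
Note: the durationBeforeRetry values trace the kubelet's per-operation exponential backoff in nestedpendingoperations: 1s on the first failure, then 2s, and later in this log 4s, 8s and 16s, with each "No retries permitted until ..." timestamp set one backoff interval ahead (the trailing m=+... figure is Go's monotonic clock reading). The interleaved "Failed to process watch event ... can't find the container" warnings are a benign startup race in which cAdvisor sees a new cgroup before CRI-O has registered the container. A small sketch of the doubling schedule; the initial delay matches the log, the cap is an assumption:

    package main

    import (
        "fmt"
        "time"
    )

    // nextRetryDelay doubles the previous delay up to a cap, reproducing
    // the 1s, 2s, 4s, 8s, 16s, ... ladder visible in this log. The real
    // kubelet logic lives in its exponentialbackoff package; maxDelay
    // here is illustrative, not taken from the source.
    func nextRetryDelay(prev time.Duration) time.Duration {
        const (
            initialDelay = 1 * time.Second
            maxDelay     = 2 * time.Minute
        )
        if prev <= 0 {
            return initialDelay
        }
        if next := prev * 2; next < maxDelay {
            return next
        }
        return maxDelay
    }

    func main() {
        var d time.Duration
        for i := 0; i < 8; i++ {
            d = nextRetryDelay(d)
            fmt.Println(d) // 1s 2s 4s 8s 16s 32s 1m0s 2m0s
        }
    }
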
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/86f3a900-b203-4f96-b922-b7fdf0afab7b-metrics-certs") pod "openstack-operator-controller-manager-75bfd788c8-2dz2q" (UID: "86f3a900-b203-4f96-b922-b7fdf0afab7b") : secret "metrics-server-cert" not found Jan 21 07:14:38 crc kubenswrapper[4893]: E0121 07:14:38.676615 4893 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 07:14:38 crc kubenswrapper[4893]: E0121 07:14:38.676658 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86f3a900-b203-4f96-b922-b7fdf0afab7b-webhook-certs podName:86f3a900-b203-4f96-b922-b7fdf0afab7b nodeName:}" failed. No retries permitted until 2026-01-21 07:14:40.676645888 +0000 UTC m=+1221.906991790 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/86f3a900-b203-4f96-b922-b7fdf0afab7b-webhook-certs") pod "openstack-operator-controller-manager-75bfd788c8-2dz2q" (UID: "86f3a900-b203-4f96-b922-b7fdf0afab7b") : secret "webhook-server-cert" not found Jan 21 07:14:38 crc kubenswrapper[4893]: I0121 07:14:38.694187 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-htcd2"] Jan 21 07:14:38 crc kubenswrapper[4893]: I0121 07:14:38.705903 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-6w85k"] Jan 21 07:14:38 crc kubenswrapper[4893]: I0121 07:14:38.843454 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-85dd56d4cc-9hqln"] Jan 21 07:14:38 crc kubenswrapper[4893]: I0121 07:14:38.851775 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zw22v"] Jan 21 07:14:38 crc kubenswrapper[4893]: I0121 07:14:38.865695 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-686df47fcb-bmm9s"] Jan 21 07:14:38 crc kubenswrapper[4893]: E0121 07:14:38.874068 4893 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9hx9l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-85dd56d4cc-9hqln_openstack-operators(f6fcb0d4-e51c-476f-9411-469bbdbd7f4e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 07:14:38 crc kubenswrapper[4893]: E0121 07:14:38.874354 4893 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:393d7567eef4fd05af625389f5a7384c6bb75108b21b06183f1f5e33aac5417e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mwpnl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-767fdc4f47-f2jht_openstack-operators(3b13f8c5-634b-437a-9dc9-2bfbd854de9d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 07:14:38 crc kubenswrapper[4893]: E0121 07:14:38.874557 4893 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:fd2e631e747c35a95f083418f5829d06c4b830f1fdb322368ff6190b9887ea32,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4dfzd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-864f6b75bf-j5c58_openstack-operators(e58e390d-227b-4d43-9216-c208196b0192): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 07:14:38 crc 
kubenswrapper[4893]: E0121 07:14:38.875520 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-f2jht" podUID="3b13f8c5-634b-437a-9dc9-2bfbd854de9d" Jan 21 07:14:38 crc kubenswrapper[4893]: E0121 07:14:38.875613 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-9hqln" podUID="f6fcb0d4-e51c-476f-9411-469bbdbd7f4e" Jan 21 07:14:38 crc kubenswrapper[4893]: E0121 07:14:38.875737 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-j5c58" podUID="e58e390d-227b-4d43-9216-c208196b0192" Jan 21 07:14:38 crc kubenswrapper[4893]: E0121 07:14:38.887685 4893 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-54v26,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-65849867d6-g6gf8_openstack-operators(4b0f2392-37e2-447f-b542-e85bf4af7af9): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 07:14:38 crc 
kubenswrapper[4893]: I0121 07:14:38.887895 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-f2jht"] Jan 21 07:14:38 crc kubenswrapper[4893]: E0121 07:14:38.888815 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-65849867d6-g6gf8" podUID="4b0f2392-37e2-447f-b542-e85bf4af7af9" Jan 21 07:14:38 crc kubenswrapper[4893]: W0121 07:14:38.892859 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod271330f3_2299_491c_a7cc_56e7e4e5af9a.slice/crio-dada5f0a305e88bc4cd1b3ac11ddc50543b7c76903c433810d8242a2782268b4 WatchSource:0}: Error finding container dada5f0a305e88bc4cd1b3ac11ddc50543b7c76903c433810d8242a2782268b4: Status 404 returned error can't find the container with id dada5f0a305e88bc4cd1b3ac11ddc50543b7c76903c433810d8242a2782268b4 Jan 21 07:14:38 crc kubenswrapper[4893]: W0121 07:14:38.896912 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12028a4c_13ac_46cd_862e_7a6e01614e1a.slice/crio-ba22216de2d741d2717729bc9376b096a94d844f107fdc0a85e5d47c4276bd58 WatchSource:0}: Error finding container ba22216de2d741d2717729bc9376b096a94d844f107fdc0a85e5d47c4276bd58: Status 404 returned error can't find the container with id ba22216de2d741d2717729bc9376b096a94d844f107fdc0a85e5d47c4276bd58 Jan 21 07:14:38 crc kubenswrapper[4893]: E0121 07:14:38.897357 4893 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d687150a46d97eb382dcd8305a2a611943af74771debe1fa9cc13a21e51c69ad,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-672np,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-64cd966744-6ppkr_openstack-operators(271330f3-2299-491c-a7cc-56e7e4e5af9a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 07:14:38 crc kubenswrapper[4893]: E0121 07:14:38.898864 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-6ppkr" podUID="271330f3-2299-491c-a7cc-56e7e4e5af9a" Jan 21 07:14:38 crc kubenswrapper[4893]: E0121 07:14:38.901850 4893 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tdd4r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7cd8bc9dbb-ccg72_openstack-operators(12028a4c-13ac-46cd-862e-7a6e01614e1a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 07:14:38 crc kubenswrapper[4893]: E0121 07:14:38.903231 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-ccg72" podUID="12028a4c-13ac-46cd-862e-7a6e01614e1a" Jan 21 07:14:38 crc kubenswrapper[4893]: I0121 07:14:38.904333 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-v98wk"] Jan 21 07:14:39 crc kubenswrapper[4893]: I0121 07:14:39.355215 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-864f6b75bf-j5c58"] Jan 21 07:14:39 crc kubenswrapper[4893]: I0121 07:14:39.376127 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-65849867d6-g6gf8"] Jan 21 07:14:39 crc kubenswrapper[4893]: I0121 07:14:39.388558 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7cd8bc9dbb-ccg72"] Jan 21 07:14:39 crc kubenswrapper[4893]: I0121 07:14:39.407009 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-ccg72" event={"ID":"12028a4c-13ac-46cd-862e-7a6e01614e1a","Type":"ContainerStarted","Data":"ba22216de2d741d2717729bc9376b096a94d844f107fdc0a85e5d47c4276bd58"} Jan 21 07:14:39 crc kubenswrapper[4893]: E0121 07:14:39.408376 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e\\\"\"" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-ccg72" podUID="12028a4c-13ac-46cd-862e-7a6e01614e1a" Jan 21 07:14:39 crc kubenswrapper[4893]: I0121 07:14:39.408904 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-b5mdw" event={"ID":"a7d9e99c-b2eb-481e-be87-a69b88b6609e","Type":"ContainerStarted","Data":"2e978cdee65dcbf0d9d97befb63ec22d943c25980de23c98a47f13399dfd3c40"} Jan 21 07:14:39 crc kubenswrapper[4893]: I0121 07:14:39.409335 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-64cd966744-6ppkr"] Jan 21 07:14:39 crc kubenswrapper[4893]: I0121 07:14:39.436215 4893 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/glance-operator-controller-manager-c6994669c-hddtb" event={"ID":"77cb4b5b-8911-40eb-9a0a-066503abf27f","Type":"ContainerStarted","Data":"7822ab9a4241762560a69d7845cb63351152f95964e9d9c87a97ea20ee251287"} Jan 21 07:14:39 crc kubenswrapper[4893]: I0121 07:14:39.460014 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-htcd2" event={"ID":"3c023ffb-4503-4997-9fac-84414eb67f2e","Type":"ContainerStarted","Data":"23e0c538bd6710e87678534f892b54a97a7badff3fc18b0c0d9de5b03df1ce67"} Jan 21 07:14:39 crc kubenswrapper[4893]: I0121 07:14:39.478208 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-f2jht" event={"ID":"3b13f8c5-634b-437a-9dc9-2bfbd854de9d","Type":"ContainerStarted","Data":"f51d00bed038b2d6e6fe5730b6bd218ef25e9126e267bbc891dfe8ba8a3a7d90"} Jan 21 07:14:39 crc kubenswrapper[4893]: E0121 07:14:39.481367 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:393d7567eef4fd05af625389f5a7384c6bb75108b21b06183f1f5e33aac5417e\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-f2jht" podUID="3b13f8c5-634b-437a-9dc9-2bfbd854de9d" Jan 21 07:14:39 crc kubenswrapper[4893]: I0121 07:14:39.485403 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-9f958b845-m9lt8" event={"ID":"00d7ea70-2b23-491d-841f-0513cdb3652f","Type":"ContainerStarted","Data":"36942a4be711bf2ac12e70e62fff9d5d64ee1ae49276c4835c247dfc6ce4d11a"} Jan 21 07:14:39 crc kubenswrapper[4893]: I0121 07:14:39.491317 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-v98wk" event={"ID":"9c6d7f75-6c22-44ec-ba62-a1223f2eaa3b","Type":"ContainerStarted","Data":"8b845c4aad911807359a137ae067792582b36f409e8761e314e000cbfd9bef7d"} Jan 21 07:14:39 crc kubenswrapper[4893]: I0121 07:14:39.493072 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-kjsg4" event={"ID":"6ef85e8d-2997-4005-bcf3-7a99994402d0","Type":"ContainerStarted","Data":"54c16e34bfcbbd81b5fa9a00158fd089eefa327332c614c6d5ee739ff36cac91"} Jan 21 07:14:39 crc kubenswrapper[4893]: I0121 07:14:39.494286 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-9hqln" event={"ID":"f6fcb0d4-e51c-476f-9411-469bbdbd7f4e","Type":"ContainerStarted","Data":"a00b7cbf804db8037b8bd4086837dd5322a0a1f8380873a41cb5a475abe24b0e"} Jan 21 07:14:39 crc kubenswrapper[4893]: I0121 07:14:39.495137 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-6gpxx" event={"ID":"aad4ef7e-44ff-4da0-8a54-b8fb68017270","Type":"ContainerStarted","Data":"b282f4d49883a882ca5cd623da763744d536e3ac9e7c6657284d7032670f970f"} Jan 21 07:14:39 crc kubenswrapper[4893]: E0121 07:14:39.496154 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92\\\"\"" 
pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-9hqln" podUID="f6fcb0d4-e51c-476f-9411-469bbdbd7f4e" Jan 21 07:14:39 crc kubenswrapper[4893]: I0121 07:14:39.499319 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-6w85k" event={"ID":"a65f5625-37ea-46b9-9f9f-f0a9e608b890","Type":"ContainerStarted","Data":"af3eae35d450a750a96c1d27f881853e8d3686955beae6217ff801acab3c7ba4"} Jan 21 07:14:39 crc kubenswrapper[4893]: I0121 07:14:39.505942 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-bmm9s" event={"ID":"ac6cc898-5b96-4a0a-8014-bf17132e44fc","Type":"ContainerStarted","Data":"979e7185977299b9ac16006cfd4cc2bc2923051637fb976cd8d53ba3fca9fa25"} Jan 21 07:14:39 crc kubenswrapper[4893]: I0121 07:14:39.510639 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-65849867d6-g6gf8" event={"ID":"4b0f2392-37e2-447f-b542-e85bf4af7af9","Type":"ContainerStarted","Data":"b39f5fa1c069d99b23bf8e1c8e8eda059d7e22793d55dc6db986fe0903771a8a"} Jan 21 07:14:39 crc kubenswrapper[4893]: I0121 07:14:39.512638 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zw22v" event={"ID":"bac4cdab-0839-4940-9a12-bb933e88a1da","Type":"ContainerStarted","Data":"1047cc12db304c4c32af8bcfa553b929db835519751883c526a23c7688e48182"} Jan 21 07:14:39 crc kubenswrapper[4893]: E0121 07:14:39.512640 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231\\\"\"" pod="openstack-operators/nova-operator-controller-manager-65849867d6-g6gf8" podUID="4b0f2392-37e2-447f-b542-e85bf4af7af9" Jan 21 07:14:39 crc kubenswrapper[4893]: I0121 07:14:39.513706 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-frxpc" event={"ID":"31bc0fab-5394-4e78-a116-2d8d09736824","Type":"ContainerStarted","Data":"950bcaf7c9562dcfaf845431b97b08888cf8170fc5a1d260e881ffb53d144029"} Jan 21 07:14:39 crc kubenswrapper[4893]: I0121 07:14:39.523488 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-6ppkr" event={"ID":"271330f3-2299-491c-a7cc-56e7e4e5af9a","Type":"ContainerStarted","Data":"dada5f0a305e88bc4cd1b3ac11ddc50543b7c76903c433810d8242a2782268b4"} Jan 21 07:14:39 crc kubenswrapper[4893]: E0121 07:14:39.527780 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d687150a46d97eb382dcd8305a2a611943af74771debe1fa9cc13a21e51c69ad\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-6ppkr" podUID="271330f3-2299-491c-a7cc-56e7e4e5af9a" Jan 21 07:14:39 crc kubenswrapper[4893]: I0121 07:14:39.528116 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-j5c58" event={"ID":"e58e390d-227b-4d43-9216-c208196b0192","Type":"ContainerStarted","Data":"98fcc79f8e5991994acc89a9274edf1850e8ee04104902e6da9772a688f05166"} Jan 21 07:14:39 crc kubenswrapper[4893]: E0121 
07:14:39.531253 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:fd2e631e747c35a95f083418f5829d06c4b830f1fdb322368ff6190b9887ea32\\\"\"" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-j5c58" podUID="e58e390d-227b-4d43-9216-c208196b0192" Jan 21 07:14:39 crc kubenswrapper[4893]: I0121 07:14:39.532178 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-txncj" event={"ID":"af56a391-1d1e-4b94-8ec9-f1eb4f332995","Type":"ContainerStarted","Data":"d6566cf08154b6aa9423b128a6c4fbc61099576efe8bfab7e48eecf4fc756843"} Jan 21 07:14:39 crc kubenswrapper[4893]: I0121 07:14:39.559079 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c5280c4a-bab8-4a47-8fb4-91aab130cd63-cert\") pod \"infra-operator-controller-manager-77c48c7859-xjlf7\" (UID: \"c5280c4a-bab8-4a47-8fb4-91aab130cd63\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-xjlf7" Jan 21 07:14:39 crc kubenswrapper[4893]: E0121 07:14:39.559258 4893 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 07:14:39 crc kubenswrapper[4893]: E0121 07:14:39.559317 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5280c4a-bab8-4a47-8fb4-91aab130cd63-cert podName:c5280c4a-bab8-4a47-8fb4-91aab130cd63 nodeName:}" failed. No retries permitted until 2026-01-21 07:14:43.559299101 +0000 UTC m=+1224.789645003 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c5280c4a-bab8-4a47-8fb4-91aab130cd63-cert") pod "infra-operator-controller-manager-77c48c7859-xjlf7" (UID: "c5280c4a-bab8-4a47-8fb4-91aab130cd63") : secret "infra-operator-webhook-server-cert" not found Jan 21 07:14:39 crc kubenswrapper[4893]: I0121 07:14:39.969039 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4142220f-0688-47a2-9bec-d655f97fe3c6-cert\") pod \"openstack-baremetal-operator-controller-manager-5b9875986d69gjw\" (UID: \"4142220f-0688-47a2-9bec-d655f97fe3c6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986d69gjw" Jan 21 07:14:39 crc kubenswrapper[4893]: E0121 07:14:39.969442 4893 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 07:14:39 crc kubenswrapper[4893]: E0121 07:14:39.969536 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4142220f-0688-47a2-9bec-d655f97fe3c6-cert podName:4142220f-0688-47a2-9bec-d655f97fe3c6 nodeName:}" failed. No retries permitted until 2026-01-21 07:14:43.969509824 +0000 UTC m=+1225.199855786 (durationBeforeRetry 4s). 
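
Note: "pull QPS exceeded" in the runs above is a client-side kubelet limit, not a registry error. The kubelet rate-limits image pulls with a token bucket configured by the KubeletConfiguration fields registryPullQPS and registryBurst (defaults 5 and 10); starting a dozen or more operator deployments at once drains the burst, so the remaining pulls fail immediately with ErrImagePull and those pods drop into ImagePullBackOff. A sketch of that behaviour with golang.org/x/time/rate, defaults assumed:

    package main

    import (
        "fmt"

        "golang.org/x/time/rate"
    )

    func main() {
        // Assumed defaults: 5 pulls/sec sustained, burst of 10.
        limiter := rate.NewLimiter(rate.Limit(5), 10)
        for i := 1; i <= 15; i++ {
            if limiter.Allow() {
                fmt.Printf("pull %2d: started\n", i)
            } else {
                // Mirrors the kubelet's "pull QPS exceeded" failures above.
                fmt.Printf("pull %2d: pull QPS exceeded\n", i)
            }
        }
    }

The first batch of calls consumes the burst and succeeds, the rest are rejected, which matches the log: some operator containers (barbican, mariadb, glance, ...) started, while swift, keystone, manila, nova, watcher and test failed on this pass.
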
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4142220f-0688-47a2-9bec-d655f97fe3c6-cert") pod "openstack-baremetal-operator-controller-manager-5b9875986d69gjw" (UID: "4142220f-0688-47a2-9bec-d655f97fe3c6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 07:14:40 crc kubenswrapper[4893]: E0121 07:14:40.588534 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e\\\"\"" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-ccg72" podUID="12028a4c-13ac-46cd-862e-7a6e01614e1a" Jan 21 07:14:40 crc kubenswrapper[4893]: E0121 07:14:40.590156 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d687150a46d97eb382dcd8305a2a611943af74771debe1fa9cc13a21e51c69ad\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-6ppkr" podUID="271330f3-2299-491c-a7cc-56e7e4e5af9a" Jan 21 07:14:40 crc kubenswrapper[4893]: E0121 07:14:40.600341 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:fd2e631e747c35a95f083418f5829d06c4b830f1fdb322368ff6190b9887ea32\\\"\"" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-j5c58" podUID="e58e390d-227b-4d43-9216-c208196b0192" Jan 21 07:14:40 crc kubenswrapper[4893]: E0121 07:14:40.603586 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231\\\"\"" pod="openstack-operators/nova-operator-controller-manager-65849867d6-g6gf8" podUID="4b0f2392-37e2-447f-b542-e85bf4af7af9" Jan 21 07:14:40 crc kubenswrapper[4893]: E0121 07:14:40.603999 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:393d7567eef4fd05af625389f5a7384c6bb75108b21b06183f1f5e33aac5417e\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-f2jht" podUID="3b13f8c5-634b-437a-9dc9-2bfbd854de9d" Jan 21 07:14:40 crc kubenswrapper[4893]: E0121 07:14:40.629753 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92\\\"\"" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-9hqln" podUID="f6fcb0d4-e51c-476f-9411-469bbdbd7f4e" Jan 21 07:14:40 crc kubenswrapper[4893]: I0121 07:14:40.692121 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/86f3a900-b203-4f96-b922-b7fdf0afab7b-webhook-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-2dz2q\" (UID: \"86f3a900-b203-4f96-b922-b7fdf0afab7b\") " 
pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-2dz2q" Jan 21 07:14:40 crc kubenswrapper[4893]: I0121 07:14:40.692295 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86f3a900-b203-4f96-b922-b7fdf0afab7b-metrics-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-2dz2q\" (UID: \"86f3a900-b203-4f96-b922-b7fdf0afab7b\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-2dz2q" Jan 21 07:14:40 crc kubenswrapper[4893]: E0121 07:14:40.693027 4893 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 07:14:40 crc kubenswrapper[4893]: E0121 07:14:40.693080 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86f3a900-b203-4f96-b922-b7fdf0afab7b-webhook-certs podName:86f3a900-b203-4f96-b922-b7fdf0afab7b nodeName:}" failed. No retries permitted until 2026-01-21 07:14:44.693062743 +0000 UTC m=+1225.923408645 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/86f3a900-b203-4f96-b922-b7fdf0afab7b-webhook-certs") pod "openstack-operator-controller-manager-75bfd788c8-2dz2q" (UID: "86f3a900-b203-4f96-b922-b7fdf0afab7b") : secret "webhook-server-cert" not found Jan 21 07:14:40 crc kubenswrapper[4893]: E0121 07:14:40.694309 4893 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 07:14:40 crc kubenswrapper[4893]: E0121 07:14:40.694347 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86f3a900-b203-4f96-b922-b7fdf0afab7b-metrics-certs podName:86f3a900-b203-4f96-b922-b7fdf0afab7b nodeName:}" failed. No retries permitted until 2026-01-21 07:14:44.69433782 +0000 UTC m=+1225.924683722 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/86f3a900-b203-4f96-b922-b7fdf0afab7b-metrics-certs") pod "openstack-operator-controller-manager-75bfd788c8-2dz2q" (UID: "86f3a900-b203-4f96-b922-b7fdf0afab7b") : secret "metrics-server-cert" not found Jan 21 07:14:43 crc kubenswrapper[4893]: I0121 07:14:43.635400 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c5280c4a-bab8-4a47-8fb4-91aab130cd63-cert\") pod \"infra-operator-controller-manager-77c48c7859-xjlf7\" (UID: \"c5280c4a-bab8-4a47-8fb4-91aab130cd63\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-xjlf7" Jan 21 07:14:43 crc kubenswrapper[4893]: E0121 07:14:43.635824 4893 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 07:14:43 crc kubenswrapper[4893]: E0121 07:14:43.636101 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5280c4a-bab8-4a47-8fb4-91aab130cd63-cert podName:c5280c4a-bab8-4a47-8fb4-91aab130cd63 nodeName:}" failed. No retries permitted until 2026-01-21 07:14:51.636039564 +0000 UTC m=+1232.866385466 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c5280c4a-bab8-4a47-8fb4-91aab130cd63-cert") pod "infra-operator-controller-manager-77c48c7859-xjlf7" (UID: "c5280c4a-bab8-4a47-8fb4-91aab130cd63") : secret "infra-operator-webhook-server-cert" not found Jan 21 07:14:44 crc kubenswrapper[4893]: I0121 07:14:44.048190 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4142220f-0688-47a2-9bec-d655f97fe3c6-cert\") pod \"openstack-baremetal-operator-controller-manager-5b9875986d69gjw\" (UID: \"4142220f-0688-47a2-9bec-d655f97fe3c6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986d69gjw" Jan 21 07:14:44 crc kubenswrapper[4893]: E0121 07:14:44.048386 4893 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 07:14:44 crc kubenswrapper[4893]: E0121 07:14:44.048553 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4142220f-0688-47a2-9bec-d655f97fe3c6-cert podName:4142220f-0688-47a2-9bec-d655f97fe3c6 nodeName:}" failed. No retries permitted until 2026-01-21 07:14:52.048526631 +0000 UTC m=+1233.278872533 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4142220f-0688-47a2-9bec-d655f97fe3c6-cert") pod "openstack-baremetal-operator-controller-manager-5b9875986d69gjw" (UID: "4142220f-0688-47a2-9bec-d655f97fe3c6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 07:14:44 crc kubenswrapper[4893]: I0121 07:14:44.721522 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86f3a900-b203-4f96-b922-b7fdf0afab7b-metrics-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-2dz2q\" (UID: \"86f3a900-b203-4f96-b922-b7fdf0afab7b\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-2dz2q" Jan 21 07:14:44 crc kubenswrapper[4893]: I0121 07:14:44.721643 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/86f3a900-b203-4f96-b922-b7fdf0afab7b-webhook-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-2dz2q\" (UID: \"86f3a900-b203-4f96-b922-b7fdf0afab7b\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-2dz2q" Jan 21 07:14:44 crc kubenswrapper[4893]: E0121 07:14:44.721732 4893 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 07:14:44 crc kubenswrapper[4893]: E0121 07:14:44.721825 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86f3a900-b203-4f96-b922-b7fdf0afab7b-metrics-certs podName:86f3a900-b203-4f96-b922-b7fdf0afab7b nodeName:}" failed. No retries permitted until 2026-01-21 07:14:52.721794625 +0000 UTC m=+1233.952140527 (durationBeforeRetry 8s). 
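
Note: after an ErrImagePull, the kubelet places the image on a per-image backoff, and subsequent pod syncs report ImagePullBackOff ("Back-off pulling image ...") until the window expires; the later "rpc error: code = Canceled desc = copying config: context canceled" entries are pulls aborted mid-transfer rather than rejected up front. The ladder commonly starts at 10s and doubles to a 5-minute cap; a sketch using client-go's flowcontrol.Backoff, with those values assumed:

    package main

    import (
        "fmt"
        "time"

        "k8s.io/client-go/util/flowcontrol"
    )

    func main() {
        // Assumed 10s initial delay doubling to a 300s cap, per common
        // kubelet defaults; the key is one of the images failing above.
        backoff := flowcontrol.NewBackOff(10*time.Second, 300*time.Second)
        const key = "quay.io/openstack-k8s-operators/nova-operator"
        now := time.Now()
        for attempt := 1; attempt <= 6; attempt++ {
            backoff.Next(key, now) // record another failed pull
            fmt.Printf("attempt %d failed; next retry in %s\n",
                attempt, backoff.Get(key)) // 10s 20s 40s 1m20s 2m40s 5m0s
        }
    }
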
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/86f3a900-b203-4f96-b922-b7fdf0afab7b-metrics-certs") pod "openstack-operator-controller-manager-75bfd788c8-2dz2q" (UID: "86f3a900-b203-4f96-b922-b7fdf0afab7b") : secret "metrics-server-cert" not found Jan 21 07:14:44 crc kubenswrapper[4893]: E0121 07:14:44.721849 4893 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 07:14:44 crc kubenswrapper[4893]: E0121 07:14:44.721893 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86f3a900-b203-4f96-b922-b7fdf0afab7b-webhook-certs podName:86f3a900-b203-4f96-b922-b7fdf0afab7b nodeName:}" failed. No retries permitted until 2026-01-21 07:14:52.721880687 +0000 UTC m=+1233.952226589 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/86f3a900-b203-4f96-b922-b7fdf0afab7b-webhook-certs") pod "openstack-operator-controller-manager-75bfd788c8-2dz2q" (UID: "86f3a900-b203-4f96-b922-b7fdf0afab7b") : secret "webhook-server-cert" not found Jan 21 07:14:51 crc kubenswrapper[4893]: I0121 07:14:51.688630 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c5280c4a-bab8-4a47-8fb4-91aab130cd63-cert\") pod \"infra-operator-controller-manager-77c48c7859-xjlf7\" (UID: \"c5280c4a-bab8-4a47-8fb4-91aab130cd63\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-xjlf7" Jan 21 07:14:51 crc kubenswrapper[4893]: E0121 07:14:51.688814 4893 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 07:14:51 crc kubenswrapper[4893]: E0121 07:14:51.689394 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5280c4a-bab8-4a47-8fb4-91aab130cd63-cert podName:c5280c4a-bab8-4a47-8fb4-91aab130cd63 nodeName:}" failed. No retries permitted until 2026-01-21 07:15:07.689374967 +0000 UTC m=+1248.919720869 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c5280c4a-bab8-4a47-8fb4-91aab130cd63-cert") pod "infra-operator-controller-manager-77c48c7859-xjlf7" (UID: "c5280c4a-bab8-4a47-8fb4-91aab130cd63") : secret "infra-operator-webhook-server-cert" not found Jan 21 07:14:52 crc kubenswrapper[4893]: I0121 07:14:52.162402 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4142220f-0688-47a2-9bec-d655f97fe3c6-cert\") pod \"openstack-baremetal-operator-controller-manager-5b9875986d69gjw\" (UID: \"4142220f-0688-47a2-9bec-d655f97fe3c6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986d69gjw" Jan 21 07:14:52 crc kubenswrapper[4893]: E0121 07:14:52.162990 4893 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 07:14:52 crc kubenswrapper[4893]: E0121 07:14:52.163099 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4142220f-0688-47a2-9bec-d655f97fe3c6-cert podName:4142220f-0688-47a2-9bec-d655f97fe3c6 nodeName:}" failed. No retries permitted until 2026-01-21 07:15:08.163067192 +0000 UTC m=+1249.393413104 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4142220f-0688-47a2-9bec-d655f97fe3c6-cert") pod "openstack-baremetal-operator-controller-manager-5b9875986d69gjw" (UID: "4142220f-0688-47a2-9bec-d655f97fe3c6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 07:14:52 crc kubenswrapper[4893]: I0121 07:14:52.729269 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/86f3a900-b203-4f96-b922-b7fdf0afab7b-webhook-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-2dz2q\" (UID: \"86f3a900-b203-4f96-b922-b7fdf0afab7b\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-2dz2q" Jan 21 07:14:52 crc kubenswrapper[4893]: I0121 07:14:52.729436 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86f3a900-b203-4f96-b922-b7fdf0afab7b-metrics-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-2dz2q\" (UID: \"86f3a900-b203-4f96-b922-b7fdf0afab7b\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-2dz2q" Jan 21 07:14:52 crc kubenswrapper[4893]: E0121 07:14:52.730129 4893 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 07:14:52 crc kubenswrapper[4893]: E0121 07:14:52.730172 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86f3a900-b203-4f96-b922-b7fdf0afab7b-webhook-certs podName:86f3a900-b203-4f96-b922-b7fdf0afab7b nodeName:}" failed. No retries permitted until 2026-01-21 07:15:08.730156584 +0000 UTC m=+1249.960502486 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/86f3a900-b203-4f96-b922-b7fdf0afab7b-webhook-certs") pod "openstack-operator-controller-manager-75bfd788c8-2dz2q" (UID: "86f3a900-b203-4f96-b922-b7fdf0afab7b") : secret "webhook-server-cert" not found Jan 21 07:14:52 crc kubenswrapper[4893]: E0121 07:14:52.731186 4893 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 07:14:52 crc kubenswrapper[4893]: E0121 07:14:52.731426 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86f3a900-b203-4f96-b922-b7fdf0afab7b-metrics-certs podName:86f3a900-b203-4f96-b922-b7fdf0afab7b nodeName:}" failed. No retries permitted until 2026-01-21 07:15:08.731413109 +0000 UTC m=+1249.961759021 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/86f3a900-b203-4f96-b922-b7fdf0afab7b-metrics-certs") pod "openstack-operator-controller-manager-75bfd788c8-2dz2q" (UID: "86f3a900-b203-4f96-b922-b7fdf0afab7b") : secret "metrics-server-cert" not found
Jan 21 07:14:53 crc kubenswrapper[4893]: E0121 07:14:53.151947 4893 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf"
Jan 21 07:14:53 crc kubenswrapper[4893]: E0121 07:14:53.152340 4893 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cvjv2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-55db956ddc-6gpxx_openstack-operators(aad4ef7e-44ff-4da0-8a54-b8fb68017270): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 21 07:14:53 crc kubenswrapper[4893]: E0121 07:14:53.153548 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-6gpxx" podUID="aad4ef7e-44ff-4da0-8a54-b8fb68017270"
Jan 21 07:14:53 crc kubenswrapper[4893]: E0121 07:14:53.508400 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-6gpxx" podUID="aad4ef7e-44ff-4da0-8a54-b8fb68017270"
Jan 21 07:14:58 crc kubenswrapper[4893]: I0121 07:14:58.657203 4893 patch_prober.go:28] interesting pod/machine-config-daemon-hg78p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 07:14:58 crc kubenswrapper[4893]: I0121 07:14:58.658104 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 07:14:58 crc kubenswrapper[4893]: I0121 07:14:58.658159 4893 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hg78p"
Jan 21 07:14:58 crc kubenswrapper[4893]: I0121 07:14:58.659019 4893 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"26379b5a1ea652b4b0eaaa44c1d6ace582f5cd3b0ef70a04e9f969f2f0e8a7a2"} pod="openshift-machine-config-operator/machine-config-daemon-hg78p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 07:14:58 crc kubenswrapper[4893]: I0121 07:14:58.659120 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" containerID="cri-o://26379b5a1ea652b4b0eaaa44c1d6ace582f5cd3b0ef70a04e9f969f2f0e8a7a2" gracePeriod=600
Jan 21 07:14:58 crc kubenswrapper[4893]: E0121 07:14:58.805345 4893 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:ddb59f1a8e3fd0d641405e371e33b3d8c913af08e40e84f390e7e06f0a7f3488"
Jan 21 07:14:58 crc kubenswrapper[4893]: E0121 07:14:58.805529 4893 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:ddb59f1a8e3fd0d641405e371e33b3d8c913af08e40e84f390e7e06f0a7f3488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fhm47,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-9b68f5989-b5mdw_openstack-operators(a7d9e99c-b2eb-481e-be87-a69b88b6609e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 21 07:14:58 crc kubenswrapper[4893]: E0121 07:14:58.807183 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-b5mdw" podUID="a7d9e99c-b2eb-481e-be87-a69b88b6609e"
Jan 21 07:14:59 crc kubenswrapper[4893]: I0121 07:14:59.744226 4893 generic.go:334] "Generic (PLEG): container finished" podID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerID="26379b5a1ea652b4b0eaaa44c1d6ace582f5cd3b0ef70a04e9f969f2f0e8a7a2" exitCode=0
Jan 21 07:14:59 crc kubenswrapper[4893]: I0121 07:14:59.744813 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" event={"ID":"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a","Type":"ContainerDied","Data":"26379b5a1ea652b4b0eaaa44c1d6ace582f5cd3b0ef70a04e9f969f2f0e8a7a2"}
Jan 21 07:14:59 crc kubenswrapper[4893]: I0121 07:14:59.744928 4893 scope.go:117] "RemoveContainer" containerID="bea12aa0e3fb7f6eeacad68b0257846807fe6f0e84a4345e0ec5d7edb930ef7f"
Jan 21 07:14:59 crc kubenswrapper[4893]: E0121 07:14:59.745561 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:ddb59f1a8e3fd0d641405e371e33b3d8c913af08e40e84f390e7e06f0a7f3488\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-b5mdw" podUID="a7d9e99c-b2eb-481e-be87-a69b88b6609e"
Jan 21 07:15:00 crc kubenswrapper[4893]: I0121 07:15:00.150888 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482995-kdkr9"]
Jan 21 07:15:00 crc kubenswrapper[4893]: I0121 07:15:00.153055 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482995-kdkr9"
Jan 21 07:15:00 crc kubenswrapper[4893]: I0121 07:15:00.156302 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 21 07:15:00 crc kubenswrapper[4893]: I0121 07:15:00.156979 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 21 07:15:00 crc kubenswrapper[4893]: I0121 07:15:00.164294 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482995-kdkr9"]
Jan 21 07:15:00 crc kubenswrapper[4893]: I0121 07:15:00.190185 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb7pd\" (UniqueName: \"kubernetes.io/projected/43a75791-1765-4b28-81d7-9baddda40b7c-kube-api-access-zb7pd\") pod \"collect-profiles-29482995-kdkr9\" (UID: \"43a75791-1765-4b28-81d7-9baddda40b7c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482995-kdkr9"
Jan 21 07:15:00 crc kubenswrapper[4893]: I0121 07:15:00.190236 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/43a75791-1765-4b28-81d7-9baddda40b7c-secret-volume\") pod \"collect-profiles-29482995-kdkr9\" (UID: \"43a75791-1765-4b28-81d7-9baddda40b7c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482995-kdkr9"
Jan 21 07:15:00 crc kubenswrapper[4893]: I0121 07:15:00.190288 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43a75791-1765-4b28-81d7-9baddda40b7c-config-volume\") pod \"collect-profiles-29482995-kdkr9\" (UID: \"43a75791-1765-4b28-81d7-9baddda40b7c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482995-kdkr9"
Jan 21 07:15:00 crc kubenswrapper[4893]: I0121 07:15:00.317484 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zb7pd\" (UniqueName: \"kubernetes.io/projected/43a75791-1765-4b28-81d7-9baddda40b7c-kube-api-access-zb7pd\") pod \"collect-profiles-29482995-kdkr9\" (UID: \"43a75791-1765-4b28-81d7-9baddda40b7c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482995-kdkr9"
Jan 21 07:15:00 crc kubenswrapper[4893]: I0121 07:15:00.317536 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/43a75791-1765-4b28-81d7-9baddda40b7c-secret-volume\") pod \"collect-profiles-29482995-kdkr9\" (UID: \"43a75791-1765-4b28-81d7-9baddda40b7c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482995-kdkr9"
Jan 21 07:15:00 crc kubenswrapper[4893]: I0121 07:15:00.317585 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43a75791-1765-4b28-81d7-9baddda40b7c-config-volume\") pod \"collect-profiles-29482995-kdkr9\" (UID: \"43a75791-1765-4b28-81d7-9baddda40b7c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482995-kdkr9"
Jan 21 07:15:00 crc kubenswrapper[4893]: I0121 07:15:00.318820 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43a75791-1765-4b28-81d7-9baddda40b7c-config-volume\") pod \"collect-profiles-29482995-kdkr9\" (UID: \"43a75791-1765-4b28-81d7-9baddda40b7c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482995-kdkr9"
Jan 21 07:15:00 crc kubenswrapper[4893]: I0121 07:15:00.326381 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/43a75791-1765-4b28-81d7-9baddda40b7c-secret-volume\") pod \"collect-profiles-29482995-kdkr9\" (UID: \"43a75791-1765-4b28-81d7-9baddda40b7c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482995-kdkr9"
Jan 21 07:15:00 crc kubenswrapper[4893]: I0121 07:15:00.399386 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zb7pd\" (UniqueName: \"kubernetes.io/projected/43a75791-1765-4b28-81d7-9baddda40b7c-kube-api-access-zb7pd\") pod \"collect-profiles-29482995-kdkr9\" (UID: \"43a75791-1765-4b28-81d7-9baddda40b7c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482995-kdkr9"
Jan 21 07:15:00 crc kubenswrapper[4893]: I0121 07:15:00.493459 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482995-kdkr9"
Jan 21 07:15:00 crc kubenswrapper[4893]: E0121 07:15:00.585230 4893 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:0d59a405f50b37c833e14c0f4987e95c8769d9ab06a7087078bdd02568c18ca8"
Jan 21 07:15:00 crc kubenswrapper[4893]: E0121 07:15:00.585759 4893 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:0d59a405f50b37c833e14c0f4987e95c8769d9ab06a7087078bdd02568c18ca8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pt9w9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-9f958b845-m9lt8_openstack-operators(00d7ea70-2b23-491d-841f-0513cdb3652f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 21 07:15:00 crc kubenswrapper[4893]: E0121 07:15:00.587007 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-9f958b845-m9lt8" podUID="00d7ea70-2b23-491d-841f-0513cdb3652f"
Jan 21 07:15:00 crc kubenswrapper[4893]: E0121 07:15:00.752746 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:0d59a405f50b37c833e14c0f4987e95c8769d9ab06a7087078bdd02568c18ca8\\\"\"" pod="openstack-operators/designate-operator-controller-manager-9f958b845-m9lt8" podUID="00d7ea70-2b23-491d-841f-0513cdb3652f"
Jan 21 07:15:03 crc kubenswrapper[4893]: E0121 07:15:03.703225 4893 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:ab629ec4ce57b5cde9cd6d75069e68edca441b97b7b5a3f58804e2e61766b729"
Jan 21 07:15:03 crc kubenswrapper[4893]: E0121 07:15:03.703793 4893 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:ab629ec4ce57b5cde9cd6d75069e68edca441b97b7b5a3f58804e2e61766b729,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-stdk6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-7fc9b76cf6-frxpc_openstack-operators(31bc0fab-5394-4e78-a116-2d8d09736824): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 21 07:15:03 crc kubenswrapper[4893]: E0121 07:15:03.706359 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-frxpc" podUID="31bc0fab-5394-4e78-a116-2d8d09736824"
Jan 21 07:15:03 crc kubenswrapper[4893]: E0121 07:15:03.845697 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ab629ec4ce57b5cde9cd6d75069e68edca441b97b7b5a3f58804e2e61766b729\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-frxpc" podUID="31bc0fab-5394-4e78-a116-2d8d09736824"
Jan 21 07:15:04 crc kubenswrapper[4893]: E0121 07:15:04.562109 4893 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:146961cac3291daf96c1ca2bc7bd52bc94d1f4787a0770e23205c2c9beb0d737"
Jan 21 07:15:04 crc kubenswrapper[4893]: E0121 07:15:04.562415 4893 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:146961cac3291daf96c1ca2bc7bd52bc94d1f4787a0770e23205c2c9beb0d737,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sbjbl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-686df47fcb-bmm9s_openstack-operators(ac6cc898-5b96-4a0a-8014-bf17132e44fc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 21 07:15:04 crc kubenswrapper[4893]: E0121 07:15:04.564508 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-bmm9s" podUID="ac6cc898-5b96-4a0a-8014-bf17132e44fc"
Jan 21 07:15:04 crc kubenswrapper[4893]: E0121 07:15:04.907467 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:146961cac3291daf96c1ca2bc7bd52bc94d1f4787a0770e23205c2c9beb0d737\\\"\"" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-bmm9s" podUID="ac6cc898-5b96-4a0a-8014-bf17132e44fc"
Jan 21 07:15:05 crc kubenswrapper[4893]: E0121 07:15:05.571002 4893 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:f0634d8cf7c2c2919ca248a6883ce43d6ae4ac59252c987a5cfe17643fe7d38a"
Jan 21 07:15:05 crc kubenswrapper[4893]: E0121 07:15:05.571298 4893 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:f0634d8cf7c2c2919ca248a6883ce43d6ae4ac59252c987a5cfe17643fe7d38a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-22f46,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7ddb5c749-d8f8v_openstack-operators(ec3cd342-ebee-4689-a339-72ca3fd65506): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 21 07:15:05 crc kubenswrapper[4893]: E0121 07:15:05.572707 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-d8f8v" podUID="ec3cd342-ebee-4689-a339-72ca3fd65506"
Jan 21 07:15:05 crc kubenswrapper[4893]: E0121 07:15:05.913131 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:f0634d8cf7c2c2919ca248a6883ce43d6ae4ac59252c987a5cfe17643fe7d38a\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-d8f8v" podUID="ec3cd342-ebee-4689-a339-72ca3fd65506"
Jan 21 07:15:06 crc kubenswrapper[4893]: E0121 07:15:06.554505 4893 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:d69a68cdac59165797daf1064f3a3b4b14b546bf1c7254070a7ed1238998c028"
Jan 21 07:15:06 crc kubenswrapper[4893]: E0121 07:15:06.554770 4893 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:d69a68cdac59165797daf1064f3a3b4b14b546bf1c7254070a7ed1238998c028,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-czfs8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-c6994669c-hddtb_openstack-operators(77cb4b5b-8911-40eb-9a0a-066503abf27f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 21 07:15:06 crc kubenswrapper[4893]: E0121 07:15:06.556420 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-c6994669c-hddtb" podUID="77cb4b5b-8911-40eb-9a0a-066503abf27f"
Jan 21 07:15:06 crc kubenswrapper[4893]: I0121 07:15:06.582647 4893 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 07:15:06 crc kubenswrapper[4893]: E0121 07:15:06.925075 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:d69a68cdac59165797daf1064f3a3b4b14b546bf1c7254070a7ed1238998c028\\\"\"" pod="openstack-operators/glance-operator-controller-manager-c6994669c-hddtb" podUID="77cb4b5b-8911-40eb-9a0a-066503abf27f"
Jan 21 07:15:07 crc kubenswrapper[4893]: E0121 07:15:07.201505 4893 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:0f440bf7dc937ce0135bdd328716686fd2f1320f453a9ac4e11e96383148ad6c"
Jan 21 07:15:07 crc kubenswrapper[4893]: E0121 07:15:07.201795 4893 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:0f440bf7dc937ce0135bdd328716686fd2f1320f453a9ac4e11e96383148ad6c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kgrgg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-cb4666565-htcd2_openstack-operators(3c023ffb-4503-4997-9fac-84414eb67f2e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 21 07:15:07 crc kubenswrapper[4893]: E0121 07:15:07.203731 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-htcd2" podUID="3c023ffb-4503-4997-9fac-84414eb67f2e"
Jan 21 07:15:07 crc kubenswrapper[4893]: E0121 07:15:07.691834 4893 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2"
Jan 21 07:15:07 crc kubenswrapper[4893]: E0121 07:15:07.692101 4893 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9rbj5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-zw22v_openstack-operators(bac4cdab-0839-4940-9a12-bb933e88a1da): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 21 07:15:07 crc kubenswrapper[4893]: E0121 07:15:07.693315 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zw22v" podUID="bac4cdab-0839-4940-9a12-bb933e88a1da"
Jan 21 07:15:07 crc kubenswrapper[4893]: I0121 07:15:07.716188 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c5280c4a-bab8-4a47-8fb4-91aab130cd63-cert\") pod \"infra-operator-controller-manager-77c48c7859-xjlf7\" (UID: \"c5280c4a-bab8-4a47-8fb4-91aab130cd63\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-xjlf7"
Jan 21 07:15:07 crc kubenswrapper[4893]: I0121 07:15:07.738980 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c5280c4a-bab8-4a47-8fb4-91aab130cd63-cert\") pod \"infra-operator-controller-manager-77c48c7859-xjlf7\" (UID: \"c5280c4a-bab8-4a47-8fb4-91aab130cd63\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-xjlf7"
Jan 21 07:15:07 crc kubenswrapper[4893]: I0121 07:15:07.836297 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-xjlf7"
Jan 21 07:15:07 crc kubenswrapper[4893]: E0121 07:15:07.935038 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zw22v" podUID="bac4cdab-0839-4940-9a12-bb933e88a1da"
Jan 21 07:15:07 crc kubenswrapper[4893]: E0121 07:15:07.935041 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:0f440bf7dc937ce0135bdd328716686fd2f1320f453a9ac4e11e96383148ad6c\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-htcd2" podUID="3c023ffb-4503-4997-9fac-84414eb67f2e"
Jan 21 07:15:08 crc kubenswrapper[4893]: I0121 07:15:08.225228 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4142220f-0688-47a2-9bec-d655f97fe3c6-cert\") pod \"openstack-baremetal-operator-controller-manager-5b9875986d69gjw\" (UID: \"4142220f-0688-47a2-9bec-d655f97fe3c6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986d69gjw"
Jan 21 07:15:08 crc kubenswrapper[4893]: I0121 07:15:08.229627 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4142220f-0688-47a2-9bec-d655f97fe3c6-cert\") pod \"openstack-baremetal-operator-controller-manager-5b9875986d69gjw\" (UID: \"4142220f-0688-47a2-9bec-d655f97fe3c6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986d69gjw"
Jan 21 07:15:08 crc kubenswrapper[4893]: I0121 07:15:08.402972 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986d69gjw"
Jan 21 07:15:08 crc kubenswrapper[4893]: I0121 07:15:08.736007 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/86f3a900-b203-4f96-b922-b7fdf0afab7b-webhook-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-2dz2q\" (UID: \"86f3a900-b203-4f96-b922-b7fdf0afab7b\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-2dz2q"
Jan 21 07:15:08 crc kubenswrapper[4893]: I0121 07:15:08.736153 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86f3a900-b203-4f96-b922-b7fdf0afab7b-metrics-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-2dz2q\" (UID: \"86f3a900-b203-4f96-b922-b7fdf0afab7b\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-2dz2q"
Jan 21 07:15:08 crc kubenswrapper[4893]: I0121 07:15:08.739878 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/86f3a900-b203-4f96-b922-b7fdf0afab7b-webhook-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-2dz2q\" (UID: \"86f3a900-b203-4f96-b922-b7fdf0afab7b\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-2dz2q"
Jan 21 07:15:08 crc kubenswrapper[4893]: I0121 07:15:08.739971 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86f3a900-b203-4f96-b922-b7fdf0afab7b-metrics-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-2dz2q\" (UID: \"86f3a900-b203-4f96-b922-b7fdf0afab7b\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-2dz2q"
Jan 21 07:15:08 crc kubenswrapper[4893]: I0121 07:15:08.787433 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-2dz2q"
Jan 21 07:15:14 crc kubenswrapper[4893]: E0121 07:15:14.171488 4893 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231"
Jan 21 07:15:14 crc kubenswrapper[4893]: E0121 07:15:14.172226 4893 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-54v26,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-65849867d6-g6gf8_openstack-operators(4b0f2392-37e2-447f-b542-e85bf4af7af9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 21 07:15:14 crc kubenswrapper[4893]: E0121 07:15:14.173454 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-65849867d6-g6gf8" podUID="4b0f2392-37e2-447f-b542-e85bf4af7af9"
Jan 21 07:15:15 crc kubenswrapper[4893]: I0121 07:15:15.089261 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482995-kdkr9"]
Jan 21 07:15:15 crc kubenswrapper[4893]: I0121 07:15:15.106037 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-75bfd788c8-2dz2q"]
Jan 21 07:15:15 crc kubenswrapper[4893]: I0121 07:15:15.321525 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986d69gjw"]
Jan 21 07:15:15 crc kubenswrapper[4893]: I0121 07:15:15.434766 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-xjlf7"]
Jan 21 07:15:15 crc kubenswrapper[4893]: W0121 07:15:15.453765 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5280c4a_bab8_4a47_8fb4_91aab130cd63.slice/crio-bcd9a922e10090e7deebc0e16b58cfb38fea71710683b94488ae0dc9ade0ab83 WatchSource:0}: Error finding container bcd9a922e10090e7deebc0e16b58cfb38fea71710683b94488ae0dc9ade0ab83: Status 404 returned error can't find the container with id bcd9a922e10090e7deebc0e16b58cfb38fea71710683b94488ae0dc9ade0ab83
Jan 21 07:15:16 crc kubenswrapper[4893]: I0121 07:15:16.210753 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" event={"ID":"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a","Type":"ContainerStarted","Data":"65e775d6c7fb2e1ccc5654cabb2b28ac1217a7b4dff2b28de89fd7fcc1b71b03"}
Jan 21 07:15:16 crc kubenswrapper[4893]: I0121 07:15:16.228948 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-2dz2q" event={"ID":"86f3a900-b203-4f96-b922-b7fdf0afab7b","Type":"ContainerStarted","Data":"2f8ba671ebc46e5c8049a6e9739f1d01240ce2a09740d170a2ca8ba068b6c73e"}
Jan 21 07:15:16 crc kubenswrapper[4893]: I0121 07:15:16.229276 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-2dz2q" event={"ID":"86f3a900-b203-4f96-b922-b7fdf0afab7b","Type":"ContainerStarted","Data":"0ece99bafe586a97959482865a6813e6ac6abd2b416ccc8e16dd141c367a943b"}
Jan 21 07:15:16 crc kubenswrapper[4893]: I0121 07:15:16.229314 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-2dz2q"
Jan 21 07:15:16 crc kubenswrapper[4893]: I0121 07:15:16.327069 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-kjsg4" event={"ID":"6ef85e8d-2997-4005-bcf3-7a99994402d0","Type":"ContainerStarted","Data":"8456c3c15233650e202354c1e00f79bf81bb727ace15d5a1d7d88bb21423f97a"}
Jan 21 07:15:16 crc kubenswrapper[4893]: I0121 07:15:16.327916 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-kjsg4"
Jan 21 07:15:16 crc kubenswrapper[4893]: I0121 07:15:16.334874 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-f2jht" event={"ID":"3b13f8c5-634b-437a-9dc9-2bfbd854de9d","Type":"ContainerStarted","Data":"72af0379215a56cde1aacbf4dc3ce18c3ad6ab44afd8bb299adf17e1ff093801"}
Jan 21 07:15:16 crc kubenswrapper[4893]: I0121 07:15:16.335593 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-f2jht"
Jan 21 07:15:16 crc kubenswrapper[4893]: I0121 07:15:16.360364 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-6ppkr" event={"ID":"271330f3-2299-491c-a7cc-56e7e4e5af9a","Type":"ContainerStarted","Data":"450936d75927dbf18b1ff8b66a94e0e2f168b82c5f9a53ae3a233d5df22ac5fe"}
Jan 21 07:15:16 crc kubenswrapper[4893]: I0121 07:15:16.361152 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-6ppkr"
Jan 21 07:15:16 crc kubenswrapper[4893]: I0121 07:15:16.362285 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-txncj" event={"ID":"af56a391-1d1e-4b94-8ec9-f1eb4f332995","Type":"ContainerStarted","Data":"7eccdeb13a889d38c4b43b7516fe3ea78c34f9565961272489c4674726ce501d"}
Jan 21 07:15:16 crc kubenswrapper[4893]: I0121 07:15:16.362873 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-txncj"
Jan 21 07:15:16 crc kubenswrapper[4893]: I0121 07:15:16.371956 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482995-kdkr9" event={"ID":"43a75791-1765-4b28-81d7-9baddda40b7c","Type":"ContainerStarted","Data":"92dc6e4c20c00232792f3a8eb0d27902f12d85fb95fbcb1feac828c5bceb0925"}
Jan 21 07:15:16 crc kubenswrapper[4893]: I0121 07:15:16.371991 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482995-kdkr9" event={"ID":"43a75791-1765-4b28-81d7-9baddda40b7c","Type":"ContainerStarted","Data":"af8a7f806712c0b7c8837cf11a754c173a80d2a665f3846c509208d91ecb763f"}
Jan 21 07:15:16 crc kubenswrapper[4893]: I0121 07:15:16.374341 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-tcgf7" event={"ID":"aaae8540-3604-4523-9f39-b8bf8fd1d03c","Type":"ContainerStarted","Data":"ec0a4cea6482857ad71e40f038b7bd77e8770da87bf5be046008fddb47c497c1"}
Jan 21 07:15:16 crc kubenswrapper[4893]: I0121 07:15:16.374964 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-tcgf7"
Jan 21 07:15:16 crc kubenswrapper[4893]: I0121 07:15:16.376432 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-6gpxx" event={"ID":"aad4ef7e-44ff-4da0-8a54-b8fb68017270","Type":"ContainerStarted","Data":"1faa79b368a8a7f6c766754a408eb657ebe7dd1cb9422c6a2b0a509195d0167e"}
Jan 21 07:15:16 crc kubenswrapper[4893]: I0121 07:15:16.377001 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-6gpxx"
Jan 21 07:15:16 crc kubenswrapper[4893]: I0121 07:15:16.378261 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-v98wk" event={"ID":"9c6d7f75-6c22-44ec-ba62-a1223f2eaa3b","Type":"ContainerStarted","Data":"b3677401fd46786d2a536e539b306805560b65b20175d82143a72e66bc2503bd"}
Jan 21 07:15:16 crc kubenswrapper[4893]: I0121 07:15:16.378747 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-v98wk"
Jan 21 07:15:16 crc kubenswrapper[4893]: I0121 07:15:16.389695 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-j5c58" event={"ID":"e58e390d-227b-4d43-9216-c208196b0192","Type":"ContainerStarted","Data":"3ce1b9a6237dd5c183c73afcbaa57c11da502ecaba9e07f9c4dcf6b71fc26320"}
Jan 21 07:15:16 crc kubenswrapper[4893]: I0121 07:15:16.390528 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-j5c58"
Jan 21 07:15:16 crc kubenswrapper[4893]: I0121 07:15:16.399840 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-ccg72" event={"ID":"12028a4c-13ac-46cd-862e-7a6e01614e1a","Type":"ContainerStarted","Data":"25fb928a230d7c09be944951a940ee85c3319a1780c821885eefd0598a8d0026"}
Jan 21 07:15:16 crc kubenswrapper[4893]: I0121 07:15:16.400584 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-ccg72"
Jan 21 07:15:16 crc kubenswrapper[4893]: I0121 07:15:16.428854 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986d69gjw" event={"ID":"4142220f-0688-47a2-9bec-d655f97fe3c6","Type":"ContainerStarted","Data":"effd5c4cf358afbd2b1be99b83f8d95f326150e7bdc749a1a3c7dc1cec51916c"}
Jan 21 07:15:16 crc kubenswrapper[4893]: I0121 07:15:16.440188 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-2dz2q" podStartSLOduration=40.44014957 podStartE2EDuration="40.44014957s" podCreationTimestamp="2026-01-21 07:14:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:15:16.429451314 +0000 UTC m=+1257.659797226" watchObservedRunningTime="2026-01-21 07:15:16.44014957 +0000 UTC m=+1257.670495472"
Jan 21 07:15:16 crc kubenswrapper[4893]: I0121 07:15:16.473118 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-f2jht" podStartSLOduration=5.822737416 podStartE2EDuration="41.47309191s" podCreationTimestamp="2026-01-21 07:14:35 +0000 UTC" firstStartedPulling="2026-01-21 07:14:38.873762507 +0000 UTC m=+1220.104108409" lastFinishedPulling="2026-01-21 07:15:14.524117001 +0000 UTC m=+1255.754462903" observedRunningTime="2026-01-21 07:15:16.466221264 +0000 UTC m=+1257.696567176" watchObservedRunningTime="2026-01-21 07:15:16.47309191 +0000 UTC m=+1257.703437812"
Jan 21 07:15:16 crc kubenswrapper[4893]: I0121 07:15:16.492249 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-xjlf7" event={"ID":"c5280c4a-bab8-4a47-8fb4-91aab130cd63","Type":"ContainerStarted","Data":"bcd9a922e10090e7deebc0e16b58cfb38fea71710683b94488ae0dc9ade0ab83"}
Jan 21 07:15:16 crc kubenswrapper[4893]: I0121 07:15:16.532272 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-6w85k" event={"ID":"a65f5625-37ea-46b9-9f9f-f0a9e608b890","Type":"ContainerStarted","Data":"53abf1669058e18952c8081c61bc866e09f12746d77bafb558311c4accc8e81b"}
Jan 21 07:15:16 crc kubenswrapper[4893]: I0121 07:15:16.533037 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-6w85k"
Jan 21 07:15:16 crc kubenswrapper[4893]: I0121 07:15:16.544641 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-b5mdw" event={"ID":"a7d9e99c-b2eb-481e-be87-a69b88b6609e","Type":"ContainerStarted","Data":"ee066dfa7f3d2048baa918be143c73d9b8d51765390130708b05335a22ce2957"}
Jan 21 07:15:16 crc kubenswrapper[4893]: I0121 07:15:16.545376 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-b5mdw"
Jan 21 07:15:16 crc kubenswrapper[4893]: I0121 07:15:16.576114 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-9hqln" event={"ID":"f6fcb0d4-e51c-476f-9411-469bbdbd7f4e","Type":"ContainerStarted","Data":"22221066e9567649297eb17641a8da786165623a281ba83c10972e0b80f5b589"}
Jan 21 07:15:16 crc kubenswrapper[4893]: I0121 07:15:16.577081 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-9hqln"
Jan 21 07:15:16 crc kubenswrapper[4893]: I0121 07:15:16.671841 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-kjsg4" podStartSLOduration=10.463312676 podStartE2EDuration="41.671818584s" podCreationTimestamp="2026-01-21 07:14:35 +0000 UTC" firstStartedPulling="2026-01-21 07:14:38.469463953 +0000 UTC m=+1219.699809855" lastFinishedPulling="2026-01-21 07:15:09.677969861 +0000 UTC m=+1250.908315763" observedRunningTime="2026-01-21 07:15:16.58199631 +0000 UTC m=+1257.812342212" watchObservedRunningTime="2026-01-21 07:15:16.671818584 +0000 UTC m=+1257.902164476"
Jan 21 07:15:16 crc kubenswrapper[4893]: I0121 07:15:16.725021 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-9f958b845-m9lt8" event={"ID":"00d7ea70-2b23-491d-841f-0513cdb3652f","Type":"ContainerStarted","Data":"259d8675f9e6501534316c4ccd6a5c7f473d2235695ef00a095b2df8c17f3301"}
Jan 21 07:15:16 crc kubenswrapper[4893]: I0121 07:15:16.725891 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-9f958b845-m9lt8"
Jan 21 07:15:16 crc kubenswrapper[4893]: I0121 07:15:16.892026 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-v98wk" podStartSLOduration=11.057741679 podStartE2EDuration="41.892002721s" podCreationTimestamp="2026-01-21 07:14:35 +0000 UTC" firstStartedPulling="2026-01-21 07:14:38.843729959 +0000 UTC m=+1220.074075861" lastFinishedPulling="2026-01-21 07:15:09.677990981 +0000 UTC m=+1250.908336903" observedRunningTime="2026-01-21 07:15:16.821823118 +0000 UTC m=+1258.052169020" watchObservedRunningTime="2026-01-21 07:15:16.892002721 +0000 UTC m=+1258.122348623"
Jan 21 07:15:16 crc kubenswrapper[4893]: I0121 07:15:16.982979 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-ccg72" podStartSLOduration=6.741140678 podStartE2EDuration="41.982954758s" podCreationTimestamp="2026-01-21 07:14:35 +0000 UTC" firstStartedPulling="2026-01-21 07:14:38.901591021 +0000 UTC m=+1220.131936923" lastFinishedPulling="2026-01-21 07:15:14.143405101 +0000 UTC m=+1255.373751003" observedRunningTime="2026-01-21 07:15:16.979433378 +0000 UTC m=+1258.209779280" watchObservedRunningTime="2026-01-21 07:15:16.982954758 +0000 UTC m=+1258.213300660"
Jan 21 07:15:17 crc kubenswrapper[4893]: I0121 07:15:17.140463 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-j5c58" podStartSLOduration=7.745543567 podStartE2EDuration="42.140433485s" podCreationTimestamp="2026-01-21 07:14:35 +0000 UTC" firstStartedPulling="2026-01-21 07:14:38.873719625 +0000 UTC m=+1220.104065527" lastFinishedPulling="2026-01-21 07:15:13.268609533 +0000 UTC m=+1254.498955445" observedRunningTime="2026-01-21 07:15:17.135693429 +0000 UTC m=+1258.366039331" watchObservedRunningTime="2026-01-21 07:15:17.140433485 +0000 UTC m=+1258.370779387"
Jan 21 07:15:17 crc kubenswrapper[4893]: I0121 07:15:17.151268 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-6gpxx" podStartSLOduration=7.536283571 podStartE2EDuration="42.151245233s" podCreationTimestamp="2026-01-21 07:14:35 +0000 UTC" firstStartedPulling="2026-01-21 07:14:38.649647117 +0000 UTC m=+1219.879993019" lastFinishedPulling="2026-01-21 07:15:13.264608779 +0000 UTC m=+1254.494954681" observedRunningTime="2026-01-21 07:15:17.018653208 +0000 UTC m=+1258.248999120" watchObservedRunningTime="2026-01-21 07:15:17.151245233 +0000 UTC m=+1258.381591135"
Jan 21 07:15:17 crc kubenswrapper[4893]: I0121 07:15:17.328885 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29482995-kdkr9" podStartSLOduration=17.328867735 podStartE2EDuration="17.328867735s" podCreationTimestamp="2026-01-21 07:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:15:17.236609621 +0000 UTC m=+1258.466955513" watchObservedRunningTime="2026-01-21 07:15:17.328867735 +0000 UTC m=+1258.559213627"
Jan 21 07:15:17 crc kubenswrapper[4893]: I0121 07:15:17.382708 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-b5mdw" podStartSLOduration=6.304576433 podStartE2EDuration="42.382664461s" podCreationTimestamp="2026-01-21 07:14:35 +0000 UTC" firstStartedPulling="2026-01-21 07:14:38.46516635 +0000 UTC m=+1219.695512262" lastFinishedPulling="2026-01-21 07:15:14.543254388 +0000 UTC m=+1255.773600290" observedRunningTime="2026-01-21 07:15:17.32625154 +0000 UTC m=+1258.556597442" watchObservedRunningTime="2026-01-21 07:15:17.382664461 +0000 UTC m=+1258.613010363"
Jan 21 07:15:17 crc kubenswrapper[4893]: I0121 07:15:17.475179 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-6w85k" podStartSLOduration=11.463372379 podStartE2EDuration="42.475159041s" podCreationTimestamp="2026-01-21 07:14:35 +0000 UTC" firstStartedPulling="2026-01-21 07:14:38.667081385 +0000 UTC m=+1219.897427287" lastFinishedPulling="2026-01-21 07:15:09.678868047 +0000 UTC m=+1250.909213949" observedRunningTime="2026-01-21 07:15:17.396938899 +0000 UTC m=+1258.627284801" watchObservedRunningTime="2026-01-21 07:15:17.475159041 +0000 UTC m=+1258.705504943"
Jan 21 07:15:17 crc kubenswrapper[4893]: I0121 07:15:17.616845 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-6ppkr" podStartSLOduration=6.370690973 podStartE2EDuration="41.616821836s" podCreationTimestamp="2026-01-21 07:14:36 +0000 UTC" firstStartedPulling="2026-01-21 07:14:38.896884047 +0000 UTC m=+1220.127229949" lastFinishedPulling="2026-01-21 07:15:14.14301491 +0000 UTC m=+1255.373360812" observedRunningTime="2026-01-21 07:15:17.549935636 +0000 UTC m=+1258.780281538" watchObservedRunningTime="2026-01-21 07:15:17.616821836 +0000 UTC m=+1258.847167738"
Jan 21 07:15:17 crc kubenswrapper[4893]: I0121 07:15:17.632719 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-txncj" podStartSLOduration=11.410423358 podStartE2EDuration="42.632694189s" podCreationTimestamp="2026-01-21 07:14:35 +0000 UTC" firstStartedPulling="2026-01-21 07:14:38.455853174 +0000 UTC m=+1219.686199076" lastFinishedPulling="2026-01-21 07:15:09.678124005 +0000 UTC m=+1250.908469907" observedRunningTime="2026-01-21 07:15:17.607332865 +0000 UTC m=+1258.837678767" watchObservedRunningTime="2026-01-21 07:15:17.632694189 +0000 UTC m=+1258.863040091"
Jan 21 07:15:17 crc kubenswrapper[4893]: I0121 07:15:17.789098 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-tcgf7" podStartSLOduration=11.364298011 podStartE2EDuration="42.789072414s" podCreationTimestamp="2026-01-21 07:14:35 +0000 UTC" firstStartedPulling="2026-01-21 07:14:38.253299361 +0000 UTC m=+1219.483645263" lastFinishedPulling="2026-01-21 07:15:09.678073744 +0000 UTC m=+1250.908419666" observedRunningTime="2026-01-21 07:15:17.786699307 +0000 UTC m=+1259.017045209" watchObservedRunningTime="2026-01-21 07:15:17.789072414 +0000 UTC m=+1259.019418316"
Jan 21 07:15:17 crc kubenswrapper[4893]: I0121 07:15:17.863833 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-frxpc" event={"ID":"31bc0fab-5394-4e78-a116-2d8d09736824","Type":"ContainerStarted","Data":"79d9e1212df1fce1f9cbeab4a16891bbaad6ba145586e4ccc96b2537cf72647c"}
Jan 21 07:15:17 crc kubenswrapper[4893]: I0121 07:15:17.864101 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-frxpc"
Jan 21 07:15:17 crc kubenswrapper[4893]: I0121 07:15:17.871747 4893 generic.go:334] "Generic (PLEG): container finished" podID="43a75791-1765-4b28-81d7-9baddda40b7c" containerID="92dc6e4c20c00232792f3a8eb0d27902f12d85fb95fbcb1feac828c5bceb0925" exitCode=0
Jan 21 07:15:17 crc kubenswrapper[4893]: I0121 07:15:17.871966 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482995-kdkr9" event={"ID":"43a75791-1765-4b28-81d7-9baddda40b7c","Type":"ContainerDied","Data":"92dc6e4c20c00232792f3a8eb0d27902f12d85fb95fbcb1feac828c5bceb0925"}
Jan 21 07:15:17 crc kubenswrapper[4893]: I0121 07:15:17.891215 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-9f958b845-m9lt8" podStartSLOduration=5.980631924 podStartE2EDuration="42.89117665s" podCreationTimestamp="2026-01-21 07:14:35 +0000 UTC" firstStartedPulling="2026-01-21 07:14:38.461375512 +0000 UTC m=+1219.691721414" lastFinishedPulling="2026-01-21 07:15:15.371920248 +0000 UTC m=+1256.602266140" observedRunningTime="2026-01-21 07:15:17.874963607 +0000 UTC m=+1259.105309509" watchObservedRunningTime="2026-01-21 07:15:17.89117665 +0000 UTC m=+1259.121522552"
Jan 21 07:15:17 crc kubenswrapper[4893]: I0121 07:15:17.891640 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-9hqln" podStartSLOduration=8.502466819 podStartE2EDuration="42.891625903s" podCreationTimestamp="2026-01-21 07:14:35 +0000 UTC" firstStartedPulling="2026-01-21 07:14:38.873794638 +0000 UTC m=+1220.104140540" lastFinishedPulling="2026-01-21 07:15:13.262953722 +0000 UTC m=+1254.493299624" observedRunningTime="2026-01-21 07:15:17.847211744 +0000 UTC m=+1259.077557636" watchObservedRunningTime="2026-01-21 07:15:17.891625903 +0000 UTC m=+1259.121971815"
Jan 21 07:15:17 crc kubenswrapper[4893]: I0121 07:15:17.944436 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-frxpc" podStartSLOduration=5.215208488 podStartE2EDuration="42.94441023s" podCreationTimestamp="2026-01-21 07:14:35 +0000 UTC" firstStartedPulling="2026-01-21 07:14:38.678778299 +0000 UTC m=+1219.909124201" lastFinishedPulling="2026-01-21 07:15:16.407980041 +0000 UTC m=+1257.638325943" observedRunningTime="2026-01-21 07:15:17.941246509 +0000 UTC m=+1259.171592411" watchObservedRunningTime="2026-01-21 07:15:17.94441023 +0000 UTC m=+1259.174756132"
Jan 21 07:15:18 crc kubenswrapper[4893]: I0121 07:15:18.887171 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-bmm9s" event={"ID":"ac6cc898-5b96-4a0a-8014-bf17132e44fc","Type":"ContainerStarted","Data":"bf1425711d0c13af624a3cc1fa4e63b3745b37a3b0918669b5bb1a13ab8b5a58"}
Jan 21 07:15:18 crc kubenswrapper[4893]: I0121 07:15:18.911135 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-bmm9s" podStartSLOduration=5.170469422 podStartE2EDuration="43.911112812s" podCreationTimestamp="2026-01-21 07:14:35 +0000 UTC" firstStartedPulling="2026-01-21 07:14:38.858239544 +0000 UTC m=+1220.088585446" lastFinishedPulling="2026-01-21 07:15:17.598882924 +0000 UTC m=+1258.829228836" observedRunningTime="2026-01-21 07:15:18.906264073 +0000 UTC m=+1260.136609975" watchObservedRunningTime="2026-01-21 07:15:18.911112812 +0000 UTC m=+1260.141458714"
Jan 21 07:15:20 crc kubenswrapper[4893]: I0121 07:15:20.659699 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482995-kdkr9" Jan 21 07:15:20 crc kubenswrapper[4893]: I0121 07:15:20.714068 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/43a75791-1765-4b28-81d7-9baddda40b7c-secret-volume\") pod \"43a75791-1765-4b28-81d7-9baddda40b7c\" (UID: \"43a75791-1765-4b28-81d7-9baddda40b7c\") " Jan 21 07:15:20 crc kubenswrapper[4893]: I0121 07:15:20.714159 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zb7pd\" (UniqueName: \"kubernetes.io/projected/43a75791-1765-4b28-81d7-9baddda40b7c-kube-api-access-zb7pd\") pod \"43a75791-1765-4b28-81d7-9baddda40b7c\" (UID: \"43a75791-1765-4b28-81d7-9baddda40b7c\") " Jan 21 07:15:20 crc kubenswrapper[4893]: I0121 07:15:20.714223 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43a75791-1765-4b28-81d7-9baddda40b7c-config-volume\") pod \"43a75791-1765-4b28-81d7-9baddda40b7c\" (UID: \"43a75791-1765-4b28-81d7-9baddda40b7c\") " Jan 21 07:15:20 crc kubenswrapper[4893]: I0121 07:15:20.715117 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43a75791-1765-4b28-81d7-9baddda40b7c-config-volume" (OuterVolumeSpecName: "config-volume") pod "43a75791-1765-4b28-81d7-9baddda40b7c" (UID: "43a75791-1765-4b28-81d7-9baddda40b7c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:15:20 crc kubenswrapper[4893]: I0121 07:15:20.720883 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43a75791-1765-4b28-81d7-9baddda40b7c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "43a75791-1765-4b28-81d7-9baddda40b7c" (UID: "43a75791-1765-4b28-81d7-9baddda40b7c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:15:20 crc kubenswrapper[4893]: I0121 07:15:20.721148 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43a75791-1765-4b28-81d7-9baddda40b7c-kube-api-access-zb7pd" (OuterVolumeSpecName: "kube-api-access-zb7pd") pod "43a75791-1765-4b28-81d7-9baddda40b7c" (UID: "43a75791-1765-4b28-81d7-9baddda40b7c"). InnerVolumeSpecName "kube-api-access-zb7pd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:15:20 crc kubenswrapper[4893]: I0121 07:15:20.850254 4893 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/43a75791-1765-4b28-81d7-9baddda40b7c-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 07:15:20 crc kubenswrapper[4893]: I0121 07:15:20.850311 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zb7pd\" (UniqueName: \"kubernetes.io/projected/43a75791-1765-4b28-81d7-9baddda40b7c-kube-api-access-zb7pd\") on node \"crc\" DevicePath \"\"" Jan 21 07:15:20 crc kubenswrapper[4893]: I0121 07:15:20.850335 4893 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43a75791-1765-4b28-81d7-9baddda40b7c-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 07:15:21 crc kubenswrapper[4893]: I0121 07:15:21.433228 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482995-kdkr9" event={"ID":"43a75791-1765-4b28-81d7-9baddda40b7c","Type":"ContainerDied","Data":"af8a7f806712c0b7c8837cf11a754c173a80d2a665f3846c509208d91ecb763f"} Jan 21 07:15:21 crc kubenswrapper[4893]: I0121 07:15:21.433546 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af8a7f806712c0b7c8837cf11a754c173a80d2a665f3846c509208d91ecb763f" Jan 21 07:15:21 crc kubenswrapper[4893]: I0121 07:15:21.433613 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482995-kdkr9" Jan 21 07:15:22 crc kubenswrapper[4893]: I0121 07:15:22.443931 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-htcd2" event={"ID":"3c023ffb-4503-4997-9fac-84414eb67f2e","Type":"ContainerStarted","Data":"bb8f9dab1035176280bdb6151802f5c293a29f25a7b7f2d716898ffb26ef7711"} Jan 21 07:15:22 crc kubenswrapper[4893]: I0121 07:15:22.444470 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-htcd2" Jan 21 07:15:22 crc kubenswrapper[4893]: I0121 07:15:22.613794 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-htcd2" podStartSLOduration=4.503487108 podStartE2EDuration="47.613766983s" podCreationTimestamp="2026-01-21 07:14:35 +0000 UTC" firstStartedPulling="2026-01-21 07:14:38.670248246 +0000 UTC m=+1219.900594148" lastFinishedPulling="2026-01-21 07:15:21.780528121 +0000 UTC m=+1263.010874023" observedRunningTime="2026-01-21 07:15:22.464343226 +0000 UTC m=+1263.694689138" watchObservedRunningTime="2026-01-21 07:15:22.613766983 +0000 UTC m=+1263.844112905" Jan 21 07:15:23 crc kubenswrapper[4893]: I0121 07:15:23.471488 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-d8f8v" event={"ID":"ec3cd342-ebee-4689-a339-72ca3fd65506","Type":"ContainerStarted","Data":"5bdad7230b3c9b3739db3c4716e82997add904736d079db7ac974c4fc533ef4b"} Jan 21 07:15:23 crc kubenswrapper[4893]: I0121 07:15:23.472204 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-d8f8v" Jan 21 07:15:23 crc kubenswrapper[4893]: I0121 07:15:23.496472 4893 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-d8f8v" podStartSLOduration=4.495546752 podStartE2EDuration="48.496443036s" podCreationTimestamp="2026-01-21 07:14:35 +0000 UTC" firstStartedPulling="2026-01-21 07:14:37.705737887 +0000 UTC m=+1218.936083789" lastFinishedPulling="2026-01-21 07:15:21.706634171 +0000 UTC m=+1262.936980073" observedRunningTime="2026-01-21 07:15:23.487742077 +0000 UTC m=+1264.718087999" watchObservedRunningTime="2026-01-21 07:15:23.496443036 +0000 UTC m=+1264.726788958" Jan 21 07:15:24 crc kubenswrapper[4893]: E0121 07:15:24.592947 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231\\\"\"" pod="openstack-operators/nova-operator-controller-manager-65849867d6-g6gf8" podUID="4b0f2392-37e2-447f-b542-e85bf4af7af9" Jan 21 07:15:25 crc kubenswrapper[4893]: I0121 07:15:25.814582 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-b5mdw" Jan 21 07:15:25 crc kubenswrapper[4893]: I0121 07:15:25.862543 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-9f958b845-m9lt8" Jan 21 07:15:25 crc kubenswrapper[4893]: I0121 07:15:25.920252 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-kjsg4" Jan 21 07:15:25 crc kubenswrapper[4893]: I0121 07:15:25.950767 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-tcgf7" Jan 21 07:15:26 crc kubenswrapper[4893]: I0121 07:15:26.071111 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-txncj" Jan 21 07:15:26 crc kubenswrapper[4893]: I0121 07:15:26.075561 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-6w85k" Jan 21 07:15:26 crc kubenswrapper[4893]: I0121 07:15:26.287208 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-j5c58" Jan 21 07:15:26 crc kubenswrapper[4893]: I0121 07:15:26.288247 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-f2jht" Jan 21 07:15:26 crc kubenswrapper[4893]: I0121 07:15:26.313316 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-frxpc" Jan 21 07:15:26 crc kubenswrapper[4893]: I0121 07:15:26.416549 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-bmm9s" Jan 21 07:15:26 crc kubenswrapper[4893]: I0121 07:15:26.419639 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-bmm9s" Jan 21 07:15:26 crc kubenswrapper[4893]: I0121 07:15:26.420094 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-6gpxx" Jan 21 07:15:26 crc kubenswrapper[4893]: I0121 07:15:26.694536 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-9hqln" Jan 21 07:15:26 crc kubenswrapper[4893]: I0121 07:15:26.694600 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-v98wk" Jan 21 07:15:26 crc kubenswrapper[4893]: I0121 07:15:26.929463 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-6ppkr" Jan 21 07:15:26 crc kubenswrapper[4893]: I0121 07:15:26.939477 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-ccg72" Jan 21 07:15:27 crc kubenswrapper[4893]: I0121 07:15:27.510617 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-c6994669c-hddtb" event={"ID":"77cb4b5b-8911-40eb-9a0a-066503abf27f","Type":"ContainerStarted","Data":"2b1941fe4fe29c707f9a0a038649523c32528f474aee1255fe26ceff5e0b2fc7"} Jan 21 07:15:27 crc kubenswrapper[4893]: I0121 07:15:27.510851 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-c6994669c-hddtb" Jan 21 07:15:27 crc kubenswrapper[4893]: I0121 07:15:27.513136 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zw22v" event={"ID":"bac4cdab-0839-4940-9a12-bb933e88a1da","Type":"ContainerStarted","Data":"216e994b552c4c001094f4b0eab210a8de11a6d381104854d4107f68e368872a"} Jan 21 07:15:27 crc kubenswrapper[4893]: I0121 07:15:27.515119 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986d69gjw" event={"ID":"4142220f-0688-47a2-9bec-d655f97fe3c6","Type":"ContainerStarted","Data":"617a8883ba7b4faf60b4c6a6a8d2441db9b4b0fa9c7ccdf3c12839e2a6efa5ed"} Jan 21 07:15:27 crc kubenswrapper[4893]: I0121 07:15:27.515217 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986d69gjw" Jan 21 07:15:27 crc kubenswrapper[4893]: I0121 07:15:27.516973 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-xjlf7" event={"ID":"c5280c4a-bab8-4a47-8fb4-91aab130cd63","Type":"ContainerStarted","Data":"c95cc681b3feb7f52164f9552ea44cbdfcf3fd1e699b77bda1d8de06b9e30859"} Jan 21 07:15:27 crc kubenswrapper[4893]: I0121 07:15:27.517165 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-xjlf7" Jan 21 07:15:27 crc kubenswrapper[4893]: I0121 07:15:27.567758 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zw22v" podStartSLOduration=3.596625117 podStartE2EDuration="51.567728102s" podCreationTimestamp="2026-01-21 07:14:36 +0000 UTC" firstStartedPulling="2026-01-21 07:14:38.863051501 +0000 UTC m=+1220.093397403" lastFinishedPulling="2026-01-21 07:15:26.834154476 +0000 UTC m=+1268.064500388" observedRunningTime="2026-01-21 07:15:27.560339561 +0000 UTC m=+1268.790685463" 
watchObservedRunningTime="2026-01-21 07:15:27.567728102 +0000 UTC m=+1268.798074004" Jan 21 07:15:27 crc kubenswrapper[4893]: I0121 07:15:27.568653 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-c6994669c-hddtb" podStartSLOduration=4.168958905 podStartE2EDuration="52.568642848s" podCreationTimestamp="2026-01-21 07:14:35 +0000 UTC" firstStartedPulling="2026-01-21 07:14:38.43293756 +0000 UTC m=+1219.663283462" lastFinishedPulling="2026-01-21 07:15:26.832621503 +0000 UTC m=+1268.062967405" observedRunningTime="2026-01-21 07:15:27.543944923 +0000 UTC m=+1268.774290825" watchObservedRunningTime="2026-01-21 07:15:27.568642848 +0000 UTC m=+1268.798988760" Jan 21 07:15:27 crc kubenswrapper[4893]: I0121 07:15:27.590788 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986d69gjw" podStartSLOduration=41.125519976 podStartE2EDuration="52.59076684s" podCreationTimestamp="2026-01-21 07:14:35 +0000 UTC" firstStartedPulling="2026-01-21 07:15:15.367331347 +0000 UTC m=+1256.597677249" lastFinishedPulling="2026-01-21 07:15:26.832578211 +0000 UTC m=+1268.062924113" observedRunningTime="2026-01-21 07:15:27.585902401 +0000 UTC m=+1268.816248303" watchObservedRunningTime="2026-01-21 07:15:27.59076684 +0000 UTC m=+1268.821112752" Jan 21 07:15:27 crc kubenswrapper[4893]: I0121 07:15:27.611760 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-xjlf7" podStartSLOduration=41.245185223 podStartE2EDuration="52.611743859s" podCreationTimestamp="2026-01-21 07:14:35 +0000 UTC" firstStartedPulling="2026-01-21 07:15:15.465233173 +0000 UTC m=+1256.695579065" lastFinishedPulling="2026-01-21 07:15:26.831791809 +0000 UTC m=+1268.062137701" observedRunningTime="2026-01-21 07:15:27.610362229 +0000 UTC m=+1268.840708131" watchObservedRunningTime="2026-01-21 07:15:27.611743859 +0000 UTC m=+1268.842089761" Jan 21 07:15:28 crc kubenswrapper[4893]: I0121 07:15:28.794397 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-2dz2q" Jan 21 07:15:35 crc kubenswrapper[4893]: I0121 07:15:35.831342 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-d8f8v" Jan 21 07:15:35 crc kubenswrapper[4893]: I0121 07:15:35.901471 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-c6994669c-hddtb" Jan 21 07:15:36 crc kubenswrapper[4893]: I0121 07:15:36.287407 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-htcd2" Jan 21 07:15:37 crc kubenswrapper[4893]: I0121 07:15:37.842174 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-xjlf7" Jan 21 07:15:38 crc kubenswrapper[4893]: I0121 07:15:38.410574 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986d69gjw" Jan 21 07:15:40 crc kubenswrapper[4893]: I0121 07:15:40.751038 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/nova-operator-controller-manager-65849867d6-g6gf8" event={"ID":"4b0f2392-37e2-447f-b542-e85bf4af7af9","Type":"ContainerStarted","Data":"9f1e9cd0e55539c4e2545d0980c9f3afb363af95f2d9bd08f832f70a56c3bcf9"} Jan 21 07:15:40 crc kubenswrapper[4893]: I0121 07:15:40.752529 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-65849867d6-g6gf8" Jan 21 07:15:40 crc kubenswrapper[4893]: I0121 07:15:40.766735 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-65849867d6-g6gf8" podStartSLOduration=4.626997219 podStartE2EDuration="1m5.766690072s" podCreationTimestamp="2026-01-21 07:14:35 +0000 UTC" firstStartedPulling="2026-01-21 07:14:38.887355915 +0000 UTC m=+1220.117701817" lastFinishedPulling="2026-01-21 07:15:40.027048768 +0000 UTC m=+1281.257394670" observedRunningTime="2026-01-21 07:15:40.766093425 +0000 UTC m=+1281.996439327" watchObservedRunningTime="2026-01-21 07:15:40.766690072 +0000 UTC m=+1281.997035974" Jan 21 07:15:46 crc kubenswrapper[4893]: I0121 07:15:46.290073 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-65849867d6-g6gf8" Jan 21 07:16:08 crc kubenswrapper[4893]: I0121 07:16:08.046117 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-hcpfl"] Jan 21 07:16:08 crc kubenswrapper[4893]: E0121 07:16:08.049145 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43a75791-1765-4b28-81d7-9baddda40b7c" containerName="collect-profiles" Jan 21 07:16:08 crc kubenswrapper[4893]: I0121 07:16:08.049178 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="43a75791-1765-4b28-81d7-9baddda40b7c" containerName="collect-profiles" Jan 21 07:16:08 crc kubenswrapper[4893]: I0121 07:16:08.049332 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="43a75791-1765-4b28-81d7-9baddda40b7c" containerName="collect-profiles" Jan 21 07:16:08 crc kubenswrapper[4893]: I0121 07:16:08.050162 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-hcpfl" Jan 21 07:16:08 crc kubenswrapper[4893]: I0121 07:16:08.075340 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-p7vkl" Jan 21 07:16:08 crc kubenswrapper[4893]: I0121 07:16:08.075655 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 21 07:16:08 crc kubenswrapper[4893]: I0121 07:16:08.075880 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 21 07:16:08 crc kubenswrapper[4893]: I0121 07:16:08.076056 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 21 07:16:08 crc kubenswrapper[4893]: I0121 07:16:08.098276 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-hcpfl"] Jan 21 07:16:08 crc kubenswrapper[4893]: I0121 07:16:08.191854 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxvwj\" (UniqueName: \"kubernetes.io/projected/576ba015-bb6a-4108-8e1b-4f9cbd0c4d9a-kube-api-access-hxvwj\") pod \"dnsmasq-dns-84bb9d8bd9-hcpfl\" (UID: \"576ba015-bb6a-4108-8e1b-4f9cbd0c4d9a\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-hcpfl" Jan 21 07:16:08 crc kubenswrapper[4893]: I0121 07:16:08.191983 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/576ba015-bb6a-4108-8e1b-4f9cbd0c4d9a-config\") pod \"dnsmasq-dns-84bb9d8bd9-hcpfl\" (UID: \"576ba015-bb6a-4108-8e1b-4f9cbd0c4d9a\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-hcpfl" Jan 21 07:16:08 crc kubenswrapper[4893]: I0121 07:16:08.219366 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-zmlnk"] Jan 21 07:16:08 crc kubenswrapper[4893]: I0121 07:16:08.220769 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-zmlnk" Jan 21 07:16:08 crc kubenswrapper[4893]: I0121 07:16:08.222992 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 21 07:16:08 crc kubenswrapper[4893]: I0121 07:16:08.238808 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-zmlnk"] Jan 21 07:16:08 crc kubenswrapper[4893]: I0121 07:16:08.293183 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/576ba015-bb6a-4108-8e1b-4f9cbd0c4d9a-config\") pod \"dnsmasq-dns-84bb9d8bd9-hcpfl\" (UID: \"576ba015-bb6a-4108-8e1b-4f9cbd0c4d9a\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-hcpfl" Jan 21 07:16:08 crc kubenswrapper[4893]: I0121 07:16:08.293268 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxvwj\" (UniqueName: \"kubernetes.io/projected/576ba015-bb6a-4108-8e1b-4f9cbd0c4d9a-kube-api-access-hxvwj\") pod \"dnsmasq-dns-84bb9d8bd9-hcpfl\" (UID: \"576ba015-bb6a-4108-8e1b-4f9cbd0c4d9a\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-hcpfl" Jan 21 07:16:08 crc kubenswrapper[4893]: I0121 07:16:08.294396 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/576ba015-bb6a-4108-8e1b-4f9cbd0c4d9a-config\") pod \"dnsmasq-dns-84bb9d8bd9-hcpfl\" (UID: \"576ba015-bb6a-4108-8e1b-4f9cbd0c4d9a\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-hcpfl" Jan 21 07:16:08 crc kubenswrapper[4893]: I0121 07:16:08.316162 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxvwj\" (UniqueName: \"kubernetes.io/projected/576ba015-bb6a-4108-8e1b-4f9cbd0c4d9a-kube-api-access-hxvwj\") pod \"dnsmasq-dns-84bb9d8bd9-hcpfl\" (UID: \"576ba015-bb6a-4108-8e1b-4f9cbd0c4d9a\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-hcpfl" Jan 21 07:16:08 crc kubenswrapper[4893]: I0121 07:16:08.395057 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d093843-0a0c-4545-b04f-c473795b0ccd-config\") pod \"dnsmasq-dns-5f854695bc-zmlnk\" (UID: \"0d093843-0a0c-4545-b04f-c473795b0ccd\") " pod="openstack/dnsmasq-dns-5f854695bc-zmlnk" Jan 21 07:16:08 crc kubenswrapper[4893]: I0121 07:16:08.395160 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d093843-0a0c-4545-b04f-c473795b0ccd-dns-svc\") pod \"dnsmasq-dns-5f854695bc-zmlnk\" (UID: \"0d093843-0a0c-4545-b04f-c473795b0ccd\") " pod="openstack/dnsmasq-dns-5f854695bc-zmlnk" Jan 21 07:16:08 crc kubenswrapper[4893]: I0121 07:16:08.395186 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2q76\" (UniqueName: \"kubernetes.io/projected/0d093843-0a0c-4545-b04f-c473795b0ccd-kube-api-access-m2q76\") pod \"dnsmasq-dns-5f854695bc-zmlnk\" (UID: \"0d093843-0a0c-4545-b04f-c473795b0ccd\") " pod="openstack/dnsmasq-dns-5f854695bc-zmlnk" Jan 21 07:16:08 crc kubenswrapper[4893]: I0121 07:16:08.411773 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-hcpfl" Jan 21 07:16:08 crc kubenswrapper[4893]: I0121 07:16:08.496750 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d093843-0a0c-4545-b04f-c473795b0ccd-config\") pod \"dnsmasq-dns-5f854695bc-zmlnk\" (UID: \"0d093843-0a0c-4545-b04f-c473795b0ccd\") " pod="openstack/dnsmasq-dns-5f854695bc-zmlnk" Jan 21 07:16:08 crc kubenswrapper[4893]: I0121 07:16:08.496825 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2q76\" (UniqueName: \"kubernetes.io/projected/0d093843-0a0c-4545-b04f-c473795b0ccd-kube-api-access-m2q76\") pod \"dnsmasq-dns-5f854695bc-zmlnk\" (UID: \"0d093843-0a0c-4545-b04f-c473795b0ccd\") " pod="openstack/dnsmasq-dns-5f854695bc-zmlnk" Jan 21 07:16:08 crc kubenswrapper[4893]: I0121 07:16:08.496852 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d093843-0a0c-4545-b04f-c473795b0ccd-dns-svc\") pod \"dnsmasq-dns-5f854695bc-zmlnk\" (UID: \"0d093843-0a0c-4545-b04f-c473795b0ccd\") " pod="openstack/dnsmasq-dns-5f854695bc-zmlnk" Jan 21 07:16:08 crc kubenswrapper[4893]: I0121 07:16:08.497831 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d093843-0a0c-4545-b04f-c473795b0ccd-dns-svc\") pod \"dnsmasq-dns-5f854695bc-zmlnk\" (UID: \"0d093843-0a0c-4545-b04f-c473795b0ccd\") " pod="openstack/dnsmasq-dns-5f854695bc-zmlnk" Jan 21 07:16:08 crc kubenswrapper[4893]: I0121 07:16:08.497837 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d093843-0a0c-4545-b04f-c473795b0ccd-config\") pod \"dnsmasq-dns-5f854695bc-zmlnk\" (UID: \"0d093843-0a0c-4545-b04f-c473795b0ccd\") " pod="openstack/dnsmasq-dns-5f854695bc-zmlnk" Jan 21 07:16:08 crc kubenswrapper[4893]: I0121 07:16:08.535452 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2q76\" (UniqueName: \"kubernetes.io/projected/0d093843-0a0c-4545-b04f-c473795b0ccd-kube-api-access-m2q76\") pod \"dnsmasq-dns-5f854695bc-zmlnk\" (UID: \"0d093843-0a0c-4545-b04f-c473795b0ccd\") " pod="openstack/dnsmasq-dns-5f854695bc-zmlnk" Jan 21 07:16:08 crc kubenswrapper[4893]: I0121 07:16:08.538035 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-zmlnk" Jan 21 07:16:08 crc kubenswrapper[4893]: I0121 07:16:08.838962 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-zmlnk"] Jan 21 07:16:08 crc kubenswrapper[4893]: I0121 07:16:08.857894 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-c7cbb8f79-z8g6b"] Jan 21 07:16:08 crc kubenswrapper[4893]: I0121 07:16:08.863135 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c7cbb8f79-z8g6b" Jan 21 07:16:08 crc kubenswrapper[4893]: I0121 07:16:08.881986 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c7cbb8f79-z8g6b"] Jan 21 07:16:08 crc kubenswrapper[4893]: W0121 07:16:08.949815 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod576ba015_bb6a_4108_8e1b_4f9cbd0c4d9a.slice/crio-aa0290d9d01d99d2a170dbc14256f0e6b75695111499ec3af748b9085f7bd0dd WatchSource:0}: Error finding container aa0290d9d01d99d2a170dbc14256f0e6b75695111499ec3af748b9085f7bd0dd: Status 404 returned error can't find the container with id aa0290d9d01d99d2a170dbc14256f0e6b75695111499ec3af748b9085f7bd0dd Jan 21 07:16:08 crc kubenswrapper[4893]: I0121 07:16:08.960117 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-hcpfl"] Jan 21 07:16:08 crc kubenswrapper[4893]: I0121 07:16:08.991475 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bb9d8bd9-hcpfl" event={"ID":"576ba015-bb6a-4108-8e1b-4f9cbd0c4d9a","Type":"ContainerStarted","Data":"aa0290d9d01d99d2a170dbc14256f0e6b75695111499ec3af748b9085f7bd0dd"} Jan 21 07:16:09 crc kubenswrapper[4893]: I0121 07:16:09.009626 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ef867f1-d57c-4b79-ba37-6b7714d23e60-config\") pod \"dnsmasq-dns-c7cbb8f79-z8g6b\" (UID: \"9ef867f1-d57c-4b79-ba37-6b7714d23e60\") " pod="openstack/dnsmasq-dns-c7cbb8f79-z8g6b" Jan 21 07:16:09 crc kubenswrapper[4893]: I0121 07:16:09.009950 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ef867f1-d57c-4b79-ba37-6b7714d23e60-dns-svc\") pod \"dnsmasq-dns-c7cbb8f79-z8g6b\" (UID: \"9ef867f1-d57c-4b79-ba37-6b7714d23e60\") " pod="openstack/dnsmasq-dns-c7cbb8f79-z8g6b" Jan 21 07:16:09 crc kubenswrapper[4893]: I0121 07:16:09.010064 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klknl\" (UniqueName: \"kubernetes.io/projected/9ef867f1-d57c-4b79-ba37-6b7714d23e60-kube-api-access-klknl\") pod \"dnsmasq-dns-c7cbb8f79-z8g6b\" (UID: \"9ef867f1-d57c-4b79-ba37-6b7714d23e60\") " pod="openstack/dnsmasq-dns-c7cbb8f79-z8g6b" Jan 21 07:16:09 crc kubenswrapper[4893]: I0121 07:16:09.027941 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-zmlnk"] Jan 21 07:16:09 crc kubenswrapper[4893]: W0121 07:16:09.044778 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d093843_0a0c_4545_b04f_c473795b0ccd.slice/crio-69d87d9907cd6b9871c4ced60ce830441fa257f28c4b3754f01f257371ad05eb WatchSource:0}: Error finding container 69d87d9907cd6b9871c4ced60ce830441fa257f28c4b3754f01f257371ad05eb: Status 404 returned error can't find the container with id 69d87d9907cd6b9871c4ced60ce830441fa257f28c4b3754f01f257371ad05eb Jan 21 07:16:09 crc kubenswrapper[4893]: I0121 07:16:09.112029 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klknl\" (UniqueName: \"kubernetes.io/projected/9ef867f1-d57c-4b79-ba37-6b7714d23e60-kube-api-access-klknl\") pod \"dnsmasq-dns-c7cbb8f79-z8g6b\" (UID: \"9ef867f1-d57c-4b79-ba37-6b7714d23e60\") " 
pod="openstack/dnsmasq-dns-c7cbb8f79-z8g6b" Jan 21 07:16:09 crc kubenswrapper[4893]: I0121 07:16:09.112142 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ef867f1-d57c-4b79-ba37-6b7714d23e60-config\") pod \"dnsmasq-dns-c7cbb8f79-z8g6b\" (UID: \"9ef867f1-d57c-4b79-ba37-6b7714d23e60\") " pod="openstack/dnsmasq-dns-c7cbb8f79-z8g6b" Jan 21 07:16:09 crc kubenswrapper[4893]: I0121 07:16:09.112182 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ef867f1-d57c-4b79-ba37-6b7714d23e60-dns-svc\") pod \"dnsmasq-dns-c7cbb8f79-z8g6b\" (UID: \"9ef867f1-d57c-4b79-ba37-6b7714d23e60\") " pod="openstack/dnsmasq-dns-c7cbb8f79-z8g6b" Jan 21 07:16:09 crc kubenswrapper[4893]: I0121 07:16:09.113152 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ef867f1-d57c-4b79-ba37-6b7714d23e60-dns-svc\") pod \"dnsmasq-dns-c7cbb8f79-z8g6b\" (UID: \"9ef867f1-d57c-4b79-ba37-6b7714d23e60\") " pod="openstack/dnsmasq-dns-c7cbb8f79-z8g6b" Jan 21 07:16:09 crc kubenswrapper[4893]: I0121 07:16:09.113161 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ef867f1-d57c-4b79-ba37-6b7714d23e60-config\") pod \"dnsmasq-dns-c7cbb8f79-z8g6b\" (UID: \"9ef867f1-d57c-4b79-ba37-6b7714d23e60\") " pod="openstack/dnsmasq-dns-c7cbb8f79-z8g6b" Jan 21 07:16:09 crc kubenswrapper[4893]: I0121 07:16:09.147465 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klknl\" (UniqueName: \"kubernetes.io/projected/9ef867f1-d57c-4b79-ba37-6b7714d23e60-kube-api-access-klknl\") pod \"dnsmasq-dns-c7cbb8f79-z8g6b\" (UID: \"9ef867f1-d57c-4b79-ba37-6b7714d23e60\") " pod="openstack/dnsmasq-dns-c7cbb8f79-z8g6b" Jan 21 07:16:09 crc kubenswrapper[4893]: I0121 07:16:09.218984 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c7cbb8f79-z8g6b" Jan 21 07:16:09 crc kubenswrapper[4893]: I0121 07:16:09.273995 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-hcpfl"] Jan 21 07:16:09 crc kubenswrapper[4893]: I0121 07:16:09.304304 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-njqxl"] Jan 21 07:16:09 crc kubenswrapper[4893]: I0121 07:16:09.305584 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-njqxl" Jan 21 07:16:09 crc kubenswrapper[4893]: I0121 07:16:09.339813 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-njqxl"] Jan 21 07:16:09 crc kubenswrapper[4893]: I0121 07:16:09.416323 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b2f2e99-98c7-4e63-9349-c10a839f3310-dns-svc\") pod \"dnsmasq-dns-95f5f6995-njqxl\" (UID: \"1b2f2e99-98c7-4e63-9349-c10a839f3310\") " pod="openstack/dnsmasq-dns-95f5f6995-njqxl" Jan 21 07:16:09 crc kubenswrapper[4893]: I0121 07:16:09.416381 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zn5x\" (UniqueName: \"kubernetes.io/projected/1b2f2e99-98c7-4e63-9349-c10a839f3310-kube-api-access-9zn5x\") pod \"dnsmasq-dns-95f5f6995-njqxl\" (UID: \"1b2f2e99-98c7-4e63-9349-c10a839f3310\") " pod="openstack/dnsmasq-dns-95f5f6995-njqxl" Jan 21 07:16:09 crc kubenswrapper[4893]: I0121 07:16:09.416430 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b2f2e99-98c7-4e63-9349-c10a839f3310-config\") pod \"dnsmasq-dns-95f5f6995-njqxl\" (UID: \"1b2f2e99-98c7-4e63-9349-c10a839f3310\") " pod="openstack/dnsmasq-dns-95f5f6995-njqxl" Jan 21 07:16:09 crc kubenswrapper[4893]: I0121 07:16:09.519433 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b2f2e99-98c7-4e63-9349-c10a839f3310-dns-svc\") pod \"dnsmasq-dns-95f5f6995-njqxl\" (UID: \"1b2f2e99-98c7-4e63-9349-c10a839f3310\") " pod="openstack/dnsmasq-dns-95f5f6995-njqxl" Jan 21 07:16:09 crc kubenswrapper[4893]: I0121 07:16:09.519512 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zn5x\" (UniqueName: \"kubernetes.io/projected/1b2f2e99-98c7-4e63-9349-c10a839f3310-kube-api-access-9zn5x\") pod \"dnsmasq-dns-95f5f6995-njqxl\" (UID: \"1b2f2e99-98c7-4e63-9349-c10a839f3310\") " pod="openstack/dnsmasq-dns-95f5f6995-njqxl" Jan 21 07:16:09 crc kubenswrapper[4893]: I0121 07:16:09.519580 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b2f2e99-98c7-4e63-9349-c10a839f3310-config\") pod \"dnsmasq-dns-95f5f6995-njqxl\" (UID: \"1b2f2e99-98c7-4e63-9349-c10a839f3310\") " pod="openstack/dnsmasq-dns-95f5f6995-njqxl" Jan 21 07:16:09 crc kubenswrapper[4893]: I0121 07:16:09.520608 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b2f2e99-98c7-4e63-9349-c10a839f3310-dns-svc\") pod \"dnsmasq-dns-95f5f6995-njqxl\" (UID: \"1b2f2e99-98c7-4e63-9349-c10a839f3310\") " pod="openstack/dnsmasq-dns-95f5f6995-njqxl" Jan 21 07:16:09 crc kubenswrapper[4893]: I0121 07:16:09.521943 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b2f2e99-98c7-4e63-9349-c10a839f3310-config\") pod \"dnsmasq-dns-95f5f6995-njqxl\" (UID: \"1b2f2e99-98c7-4e63-9349-c10a839f3310\") " pod="openstack/dnsmasq-dns-95f5f6995-njqxl" Jan 21 07:16:09 crc kubenswrapper[4893]: I0121 07:16:09.552579 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zn5x\" (UniqueName: 
\"kubernetes.io/projected/1b2f2e99-98c7-4e63-9349-c10a839f3310-kube-api-access-9zn5x\") pod \"dnsmasq-dns-95f5f6995-njqxl\" (UID: \"1b2f2e99-98c7-4e63-9349-c10a839f3310\") " pod="openstack/dnsmasq-dns-95f5f6995-njqxl" Jan 21 07:16:09 crc kubenswrapper[4893]: I0121 07:16:09.663888 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-njqxl" Jan 21 07:16:09 crc kubenswrapper[4893]: I0121 07:16:09.745248 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c7cbb8f79-z8g6b"] Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.000873 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-zmlnk" event={"ID":"0d093843-0a0c-4545-b04f-c473795b0ccd","Type":"ContainerStarted","Data":"69d87d9907cd6b9871c4ced60ce830441fa257f28c4b3754f01f257371ad05eb"} Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.002933 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c7cbb8f79-z8g6b" event={"ID":"9ef867f1-d57c-4b79-ba37-6b7714d23e60","Type":"ContainerStarted","Data":"6f1d071321a5ba71fa3ab8992c4de6b9c7d9e6ae93a5c39dc06012c89dc9d28a"} Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.061263 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.062577 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.067029 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.067241 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.067349 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.067513 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-62hrv" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.068365 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.068498 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.069646 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.085077 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.143249 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-njqxl"] Jan 21 07:16:10 crc kubenswrapper[4893]: W0121 07:16:10.155152 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b2f2e99_98c7_4e63_9349_c10a839f3310.slice/crio-3d1888b71a21f9cd19c5b0265ba26404124c11e89f3564f0db6a862cc9c3be68 WatchSource:0}: Error finding container 3d1888b71a21f9cd19c5b0265ba26404124c11e89f3564f0db6a862cc9c3be68: Status 404 returned error can't find the container with id 
3d1888b71a21f9cd19c5b0265ba26404124c11e89f3564f0db6a862cc9c3be68 Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.232520 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5fqr\" (UniqueName: \"kubernetes.io/projected/fdb40d40-7926-424a-810d-3b6f77e1022f-kube-api-access-t5fqr\") pod \"rabbitmq-cell1-server-0\" (UID: \"fdb40d40-7926-424a-810d-3b6f77e1022f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.232589 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fdb40d40-7926-424a-810d-3b6f77e1022f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"fdb40d40-7926-424a-810d-3b6f77e1022f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.232819 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fdb40d40-7926-424a-810d-3b6f77e1022f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"fdb40d40-7926-424a-810d-3b6f77e1022f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.232862 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fdb40d40-7926-424a-810d-3b6f77e1022f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"fdb40d40-7926-424a-810d-3b6f77e1022f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.232930 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fdb40d40-7926-424a-810d-3b6f77e1022f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"fdb40d40-7926-424a-810d-3b6f77e1022f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.232949 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fdb40d40-7926-424a-810d-3b6f77e1022f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"fdb40d40-7926-424a-810d-3b6f77e1022f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.233156 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fdb40d40-7926-424a-810d-3b6f77e1022f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"fdb40d40-7926-424a-810d-3b6f77e1022f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.233243 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fdb40d40-7926-424a-810d-3b6f77e1022f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fdb40d40-7926-424a-810d-3b6f77e1022f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.233291 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fdb40d40-7926-424a-810d-3b6f77e1022f-plugins-conf\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"fdb40d40-7926-424a-810d-3b6f77e1022f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.233329 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fdb40d40-7926-424a-810d-3b6f77e1022f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"fdb40d40-7926-424a-810d-3b6f77e1022f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.233348 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"fdb40d40-7926-424a-810d-3b6f77e1022f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.335261 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fdb40d40-7926-424a-810d-3b6f77e1022f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"fdb40d40-7926-424a-810d-3b6f77e1022f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.335361 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"fdb40d40-7926-424a-810d-3b6f77e1022f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.335387 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5fqr\" (UniqueName: \"kubernetes.io/projected/fdb40d40-7926-424a-810d-3b6f77e1022f-kube-api-access-t5fqr\") pod \"rabbitmq-cell1-server-0\" (UID: \"fdb40d40-7926-424a-810d-3b6f77e1022f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.335469 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fdb40d40-7926-424a-810d-3b6f77e1022f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"fdb40d40-7926-424a-810d-3b6f77e1022f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.335498 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fdb40d40-7926-424a-810d-3b6f77e1022f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"fdb40d40-7926-424a-810d-3b6f77e1022f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.335515 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fdb40d40-7926-424a-810d-3b6f77e1022f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"fdb40d40-7926-424a-810d-3b6f77e1022f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.335538 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fdb40d40-7926-424a-810d-3b6f77e1022f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"fdb40d40-7926-424a-810d-3b6f77e1022f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 07:16:10 crc 
kubenswrapper[4893]: I0121 07:16:10.335555 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fdb40d40-7926-424a-810d-3b6f77e1022f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"fdb40d40-7926-424a-810d-3b6f77e1022f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.335597 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fdb40d40-7926-424a-810d-3b6f77e1022f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"fdb40d40-7926-424a-810d-3b6f77e1022f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.335632 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fdb40d40-7926-424a-810d-3b6f77e1022f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fdb40d40-7926-424a-810d-3b6f77e1022f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.335699 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fdb40d40-7926-424a-810d-3b6f77e1022f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fdb40d40-7926-424a-810d-3b6f77e1022f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.336725 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fdb40d40-7926-424a-810d-3b6f77e1022f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fdb40d40-7926-424a-810d-3b6f77e1022f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.341929 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fdb40d40-7926-424a-810d-3b6f77e1022f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"fdb40d40-7926-424a-810d-3b6f77e1022f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.342567 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fdb40d40-7926-424a-810d-3b6f77e1022f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"fdb40d40-7926-424a-810d-3b6f77e1022f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.342582 4893 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"fdb40d40-7926-424a-810d-3b6f77e1022f\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-cell1-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.342839 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fdb40d40-7926-424a-810d-3b6f77e1022f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"fdb40d40-7926-424a-810d-3b6f77e1022f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.342958 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/fdb40d40-7926-424a-810d-3b6f77e1022f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fdb40d40-7926-424a-810d-3b6f77e1022f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.345924 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fdb40d40-7926-424a-810d-3b6f77e1022f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"fdb40d40-7926-424a-810d-3b6f77e1022f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.347194 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fdb40d40-7926-424a-810d-3b6f77e1022f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"fdb40d40-7926-424a-810d-3b6f77e1022f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.349001 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fdb40d40-7926-424a-810d-3b6f77e1022f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"fdb40d40-7926-424a-810d-3b6f77e1022f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.353420 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fdb40d40-7926-424a-810d-3b6f77e1022f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"fdb40d40-7926-424a-810d-3b6f77e1022f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.362901 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5fqr\" (UniqueName: \"kubernetes.io/projected/fdb40d40-7926-424a-810d-3b6f77e1022f-kube-api-access-t5fqr\") pod \"rabbitmq-cell1-server-0\" (UID: \"fdb40d40-7926-424a-810d-3b6f77e1022f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.372613 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"fdb40d40-7926-424a-810d-3b6f77e1022f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.418145 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.437267 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.438768 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.454963 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.455192 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.455313 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.455432 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-xc8ws" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.455570 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.456402 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.456603 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.463933 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.538508 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"89f70f50-3d66-4917-bfe2-1084a55e4eb9\") " pod="openstack/rabbitmq-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.538875 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/89f70f50-3d66-4917-bfe2-1084a55e4eb9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"89f70f50-3d66-4917-bfe2-1084a55e4eb9\") " pod="openstack/rabbitmq-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.538914 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/89f70f50-3d66-4917-bfe2-1084a55e4eb9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"89f70f50-3d66-4917-bfe2-1084a55e4eb9\") " pod="openstack/rabbitmq-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.538940 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsn4l\" (UniqueName: \"kubernetes.io/projected/89f70f50-3d66-4917-bfe2-1084a55e4eb9-kube-api-access-jsn4l\") pod \"rabbitmq-server-0\" (UID: \"89f70f50-3d66-4917-bfe2-1084a55e4eb9\") " pod="openstack/rabbitmq-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.538967 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/89f70f50-3d66-4917-bfe2-1084a55e4eb9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"89f70f50-3d66-4917-bfe2-1084a55e4eb9\") " pod="openstack/rabbitmq-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.538990 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/89f70f50-3d66-4917-bfe2-1084a55e4eb9-config-data\") pod \"rabbitmq-server-0\" (UID: \"89f70f50-3d66-4917-bfe2-1084a55e4eb9\") " pod="openstack/rabbitmq-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.539141 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/89f70f50-3d66-4917-bfe2-1084a55e4eb9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"89f70f50-3d66-4917-bfe2-1084a55e4eb9\") " pod="openstack/rabbitmq-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.539306 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/89f70f50-3d66-4917-bfe2-1084a55e4eb9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"89f70f50-3d66-4917-bfe2-1084a55e4eb9\") " pod="openstack/rabbitmq-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.539827 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/89f70f50-3d66-4917-bfe2-1084a55e4eb9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"89f70f50-3d66-4917-bfe2-1084a55e4eb9\") " pod="openstack/rabbitmq-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.540016 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/89f70f50-3d66-4917-bfe2-1084a55e4eb9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"89f70f50-3d66-4917-bfe2-1084a55e4eb9\") " pod="openstack/rabbitmq-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.540159 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/89f70f50-3d66-4917-bfe2-1084a55e4eb9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"89f70f50-3d66-4917-bfe2-1084a55e4eb9\") " pod="openstack/rabbitmq-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.642327 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/89f70f50-3d66-4917-bfe2-1084a55e4eb9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"89f70f50-3d66-4917-bfe2-1084a55e4eb9\") " pod="openstack/rabbitmq-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.642398 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsn4l\" (UniqueName: \"kubernetes.io/projected/89f70f50-3d66-4917-bfe2-1084a55e4eb9-kube-api-access-jsn4l\") pod \"rabbitmq-server-0\" (UID: \"89f70f50-3d66-4917-bfe2-1084a55e4eb9\") " pod="openstack/rabbitmq-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.642428 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/89f70f50-3d66-4917-bfe2-1084a55e4eb9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"89f70f50-3d66-4917-bfe2-1084a55e4eb9\") " pod="openstack/rabbitmq-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.642456 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/89f70f50-3d66-4917-bfe2-1084a55e4eb9-config-data\") pod \"rabbitmq-server-0\" (UID: 
\"89f70f50-3d66-4917-bfe2-1084a55e4eb9\") " pod="openstack/rabbitmq-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.642491 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/89f70f50-3d66-4917-bfe2-1084a55e4eb9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"89f70f50-3d66-4917-bfe2-1084a55e4eb9\") " pod="openstack/rabbitmq-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.642536 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/89f70f50-3d66-4917-bfe2-1084a55e4eb9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"89f70f50-3d66-4917-bfe2-1084a55e4eb9\") " pod="openstack/rabbitmq-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.642559 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/89f70f50-3d66-4917-bfe2-1084a55e4eb9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"89f70f50-3d66-4917-bfe2-1084a55e4eb9\") " pod="openstack/rabbitmq-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.642615 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/89f70f50-3d66-4917-bfe2-1084a55e4eb9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"89f70f50-3d66-4917-bfe2-1084a55e4eb9\") " pod="openstack/rabbitmq-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.642658 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/89f70f50-3d66-4917-bfe2-1084a55e4eb9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"89f70f50-3d66-4917-bfe2-1084a55e4eb9\") " pod="openstack/rabbitmq-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.642725 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"89f70f50-3d66-4917-bfe2-1084a55e4eb9\") " pod="openstack/rabbitmq-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.642746 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/89f70f50-3d66-4917-bfe2-1084a55e4eb9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"89f70f50-3d66-4917-bfe2-1084a55e4eb9\") " pod="openstack/rabbitmq-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.643707 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/89f70f50-3d66-4917-bfe2-1084a55e4eb9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"89f70f50-3d66-4917-bfe2-1084a55e4eb9\") " pod="openstack/rabbitmq-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.644073 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/89f70f50-3d66-4917-bfe2-1084a55e4eb9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"89f70f50-3d66-4917-bfe2-1084a55e4eb9\") " pod="openstack/rabbitmq-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.646875 4893 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"89f70f50-3d66-4917-bfe2-1084a55e4eb9\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.647126 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/89f70f50-3d66-4917-bfe2-1084a55e4eb9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"89f70f50-3d66-4917-bfe2-1084a55e4eb9\") " pod="openstack/rabbitmq-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.647380 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/89f70f50-3d66-4917-bfe2-1084a55e4eb9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"89f70f50-3d66-4917-bfe2-1084a55e4eb9\") " pod="openstack/rabbitmq-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.648035 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/89f70f50-3d66-4917-bfe2-1084a55e4eb9-config-data\") pod \"rabbitmq-server-0\" (UID: \"89f70f50-3d66-4917-bfe2-1084a55e4eb9\") " pod="openstack/rabbitmq-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.650472 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/89f70f50-3d66-4917-bfe2-1084a55e4eb9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"89f70f50-3d66-4917-bfe2-1084a55e4eb9\") " pod="openstack/rabbitmq-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.650899 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/89f70f50-3d66-4917-bfe2-1084a55e4eb9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"89f70f50-3d66-4917-bfe2-1084a55e4eb9\") " pod="openstack/rabbitmq-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.651157 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/89f70f50-3d66-4917-bfe2-1084a55e4eb9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"89f70f50-3d66-4917-bfe2-1084a55e4eb9\") " pod="openstack/rabbitmq-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.660895 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/89f70f50-3d66-4917-bfe2-1084a55e4eb9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"89f70f50-3d66-4917-bfe2-1084a55e4eb9\") " pod="openstack/rabbitmq-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.672238 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsn4l\" (UniqueName: \"kubernetes.io/projected/89f70f50-3d66-4917-bfe2-1084a55e4eb9-kube-api-access-jsn4l\") pod \"rabbitmq-server-0\" (UID: \"89f70f50-3d66-4917-bfe2-1084a55e4eb9\") " pod="openstack/rabbitmq-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.709980 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"89f70f50-3d66-4917-bfe2-1084a55e4eb9\") " pod="openstack/rabbitmq-server-0" Jan 21 07:16:10 crc kubenswrapper[4893]: I0121 07:16:10.849858 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 07:16:11 crc kubenswrapper[4893]: I0121 07:16:11.054332 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-njqxl" event={"ID":"1b2f2e99-98c7-4e63-9349-c10a839f3310","Type":"ContainerStarted","Data":"3d1888b71a21f9cd19c5b0265ba26404124c11e89f3564f0db6a862cc9c3be68"} Jan 21 07:16:11 crc kubenswrapper[4893]: I0121 07:16:11.058802 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 07:16:11 crc kubenswrapper[4893]: I0121 07:16:11.419014 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 07:16:11 crc kubenswrapper[4893]: I0121 07:16:11.552709 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 21 07:16:11 crc kubenswrapper[4893]: I0121 07:16:11.575339 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 21 07:16:11 crc kubenswrapper[4893]: I0121 07:16:11.577995 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-59nv7" Jan 21 07:16:11 crc kubenswrapper[4893]: I0121 07:16:11.578839 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 21 07:16:11 crc kubenswrapper[4893]: I0121 07:16:11.579053 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 21 07:16:11 crc kubenswrapper[4893]: I0121 07:16:11.579336 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 21 07:16:11 crc kubenswrapper[4893]: I0121 07:16:11.579537 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 21 07:16:11 crc kubenswrapper[4893]: I0121 07:16:11.588888 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 21 07:16:11 crc kubenswrapper[4893]: I0121 07:16:11.664651 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5cc7c949-b993-484e-8e07-778a72743679-config-data-default\") pod \"openstack-galera-0\" (UID: \"5cc7c949-b993-484e-8e07-778a72743679\") " pod="openstack/openstack-galera-0" Jan 21 07:16:11 crc kubenswrapper[4893]: I0121 07:16:11.664719 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q96td\" (UniqueName: \"kubernetes.io/projected/5cc7c949-b993-484e-8e07-778a72743679-kube-api-access-q96td\") pod \"openstack-galera-0\" (UID: \"5cc7c949-b993-484e-8e07-778a72743679\") " pod="openstack/openstack-galera-0" Jan 21 07:16:11 crc kubenswrapper[4893]: I0121 07:16:11.664753 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cc7c949-b993-484e-8e07-778a72743679-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"5cc7c949-b993-484e-8e07-778a72743679\") " pod="openstack/openstack-galera-0" Jan 21 07:16:11 crc kubenswrapper[4893]: I0121 07:16:11.664770 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5cc7c949-b993-484e-8e07-778a72743679-kolla-config\") pod \"openstack-galera-0\" (UID: \"5cc7c949-b993-484e-8e07-778a72743679\") " 
pod="openstack/openstack-galera-0" Jan 21 07:16:11 crc kubenswrapper[4893]: I0121 07:16:11.664791 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cc7c949-b993-484e-8e07-778a72743679-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"5cc7c949-b993-484e-8e07-778a72743679\") " pod="openstack/openstack-galera-0" Jan 21 07:16:11 crc kubenswrapper[4893]: I0121 07:16:11.664847 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5cc7c949-b993-484e-8e07-778a72743679-config-data-generated\") pod \"openstack-galera-0\" (UID: \"5cc7c949-b993-484e-8e07-778a72743679\") " pod="openstack/openstack-galera-0" Jan 21 07:16:11 crc kubenswrapper[4893]: I0121 07:16:11.664894 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"5cc7c949-b993-484e-8e07-778a72743679\") " pod="openstack/openstack-galera-0" Jan 21 07:16:11 crc kubenswrapper[4893]: I0121 07:16:11.664944 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5cc7c949-b993-484e-8e07-778a72743679-operator-scripts\") pod \"openstack-galera-0\" (UID: \"5cc7c949-b993-484e-8e07-778a72743679\") " pod="openstack/openstack-galera-0" Jan 21 07:16:11 crc kubenswrapper[4893]: I0121 07:16:11.767480 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5cc7c949-b993-484e-8e07-778a72743679-config-data-default\") pod \"openstack-galera-0\" (UID: \"5cc7c949-b993-484e-8e07-778a72743679\") " pod="openstack/openstack-galera-0" Jan 21 07:16:11 crc kubenswrapper[4893]: I0121 07:16:11.767543 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q96td\" (UniqueName: \"kubernetes.io/projected/5cc7c949-b993-484e-8e07-778a72743679-kube-api-access-q96td\") pod \"openstack-galera-0\" (UID: \"5cc7c949-b993-484e-8e07-778a72743679\") " pod="openstack/openstack-galera-0" Jan 21 07:16:11 crc kubenswrapper[4893]: I0121 07:16:11.767577 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cc7c949-b993-484e-8e07-778a72743679-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"5cc7c949-b993-484e-8e07-778a72743679\") " pod="openstack/openstack-galera-0" Jan 21 07:16:11 crc kubenswrapper[4893]: I0121 07:16:11.767603 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5cc7c949-b993-484e-8e07-778a72743679-kolla-config\") pod \"openstack-galera-0\" (UID: \"5cc7c949-b993-484e-8e07-778a72743679\") " pod="openstack/openstack-galera-0" Jan 21 07:16:11 crc kubenswrapper[4893]: I0121 07:16:11.767643 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cc7c949-b993-484e-8e07-778a72743679-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"5cc7c949-b993-484e-8e07-778a72743679\") " pod="openstack/openstack-galera-0" Jan 21 07:16:11 crc kubenswrapper[4893]: I0121 07:16:11.767731 4893 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5cc7c949-b993-484e-8e07-778a72743679-config-data-generated\") pod \"openstack-galera-0\" (UID: \"5cc7c949-b993-484e-8e07-778a72743679\") " pod="openstack/openstack-galera-0" Jan 21 07:16:11 crc kubenswrapper[4893]: I0121 07:16:11.767771 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"5cc7c949-b993-484e-8e07-778a72743679\") " pod="openstack/openstack-galera-0" Jan 21 07:16:11 crc kubenswrapper[4893]: I0121 07:16:11.767826 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5cc7c949-b993-484e-8e07-778a72743679-operator-scripts\") pod \"openstack-galera-0\" (UID: \"5cc7c949-b993-484e-8e07-778a72743679\") " pod="openstack/openstack-galera-0" Jan 21 07:16:11 crc kubenswrapper[4893]: I0121 07:16:11.769973 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5cc7c949-b993-484e-8e07-778a72743679-operator-scripts\") pod \"openstack-galera-0\" (UID: \"5cc7c949-b993-484e-8e07-778a72743679\") " pod="openstack/openstack-galera-0" Jan 21 07:16:11 crc kubenswrapper[4893]: I0121 07:16:11.770741 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5cc7c949-b993-484e-8e07-778a72743679-config-data-default\") pod \"openstack-galera-0\" (UID: \"5cc7c949-b993-484e-8e07-778a72743679\") " pod="openstack/openstack-galera-0" Jan 21 07:16:11 crc kubenswrapper[4893]: I0121 07:16:11.773260 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5cc7c949-b993-484e-8e07-778a72743679-kolla-config\") pod \"openstack-galera-0\" (UID: \"5cc7c949-b993-484e-8e07-778a72743679\") " pod="openstack/openstack-galera-0" Jan 21 07:16:11 crc kubenswrapper[4893]: I0121 07:16:11.773508 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5cc7c949-b993-484e-8e07-778a72743679-config-data-generated\") pod \"openstack-galera-0\" (UID: \"5cc7c949-b993-484e-8e07-778a72743679\") " pod="openstack/openstack-galera-0" Jan 21 07:16:11 crc kubenswrapper[4893]: I0121 07:16:11.773789 4893 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"5cc7c949-b993-484e-8e07-778a72743679\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-galera-0" Jan 21 07:16:11 crc kubenswrapper[4893]: I0121 07:16:11.912428 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cc7c949-b993-484e-8e07-778a72743679-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"5cc7c949-b993-484e-8e07-778a72743679\") " pod="openstack/openstack-galera-0" Jan 21 07:16:11 crc kubenswrapper[4893]: I0121 07:16:11.916860 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cc7c949-b993-484e-8e07-778a72743679-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"5cc7c949-b993-484e-8e07-778a72743679\") " 
pod="openstack/openstack-galera-0" Jan 21 07:16:11 crc kubenswrapper[4893]: I0121 07:16:11.948456 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q96td\" (UniqueName: \"kubernetes.io/projected/5cc7c949-b993-484e-8e07-778a72743679-kube-api-access-q96td\") pod \"openstack-galera-0\" (UID: \"5cc7c949-b993-484e-8e07-778a72743679\") " pod="openstack/openstack-galera-0" Jan 21 07:16:12 crc kubenswrapper[4893]: I0121 07:16:12.010193 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"5cc7c949-b993-484e-8e07-778a72743679\") " pod="openstack/openstack-galera-0" Jan 21 07:16:12 crc kubenswrapper[4893]: I0121 07:16:12.066520 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fdb40d40-7926-424a-810d-3b6f77e1022f","Type":"ContainerStarted","Data":"767d15ff2a6bea44bf05d493a5b3ec1389e577bcc68f4aa1efa04d46a7167d21"} Jan 21 07:16:12 crc kubenswrapper[4893]: I0121 07:16:12.067956 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"89f70f50-3d66-4917-bfe2-1084a55e4eb9","Type":"ContainerStarted","Data":"4e6d5b4ed0150b0ebdcc26314171f7ee394430adfee148a08e44670d0b079434"} Jan 21 07:16:12 crc kubenswrapper[4893]: I0121 07:16:12.198664 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 21 07:16:12 crc kubenswrapper[4893]: I0121 07:16:12.756225 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 07:16:12 crc kubenswrapper[4893]: I0121 07:16:12.757755 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 21 07:16:12 crc kubenswrapper[4893]: I0121 07:16:12.770417 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 21 07:16:12 crc kubenswrapper[4893]: I0121 07:16:12.770763 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 21 07:16:12 crc kubenswrapper[4893]: I0121 07:16:12.771339 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-zg2gp" Jan 21 07:16:12 crc kubenswrapper[4893]: I0121 07:16:12.771801 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 21 07:16:12 crc kubenswrapper[4893]: I0121 07:16:12.790941 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 07:16:12 crc kubenswrapper[4893]: I0121 07:16:12.810871 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 21 07:16:12 crc kubenswrapper[4893]: I0121 07:16:12.948060 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5b37865c-22cd-4288-b47b-ef9ef1f33646-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"5b37865c-22cd-4288-b47b-ef9ef1f33646\") " pod="openstack/openstack-cell1-galera-0" Jan 21 07:16:12 crc kubenswrapper[4893]: I0121 07:16:12.948395 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdnhx\" (UniqueName: \"kubernetes.io/projected/5b37865c-22cd-4288-b47b-ef9ef1f33646-kube-api-access-zdnhx\") pod \"openstack-cell1-galera-0\" (UID: \"5b37865c-22cd-4288-b47b-ef9ef1f33646\") " pod="openstack/openstack-cell1-galera-0" Jan 21 07:16:12 crc kubenswrapper[4893]: I0121 07:16:12.948429 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b37865c-22cd-4288-b47b-ef9ef1f33646-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"5b37865c-22cd-4288-b47b-ef9ef1f33646\") " pod="openstack/openstack-cell1-galera-0" Jan 21 07:16:12 crc kubenswrapper[4893]: I0121 07:16:12.948450 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5b37865c-22cd-4288-b47b-ef9ef1f33646-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"5b37865c-22cd-4288-b47b-ef9ef1f33646\") " pod="openstack/openstack-cell1-galera-0" Jan 21 07:16:12 crc kubenswrapper[4893]: I0121 07:16:12.948524 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5b37865c-22cd-4288-b47b-ef9ef1f33646-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"5b37865c-22cd-4288-b47b-ef9ef1f33646\") " pod="openstack/openstack-cell1-galera-0" Jan 21 07:16:12 crc kubenswrapper[4893]: I0121 07:16:12.948544 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b37865c-22cd-4288-b47b-ef9ef1f33646-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"5b37865c-22cd-4288-b47b-ef9ef1f33646\") " pod="openstack/openstack-cell1-galera-0" Jan 21 07:16:12 crc 
kubenswrapper[4893]: I0121 07:16:12.948572 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"5b37865c-22cd-4288-b47b-ef9ef1f33646\") " pod="openstack/openstack-cell1-galera-0" Jan 21 07:16:12 crc kubenswrapper[4893]: I0121 07:16:12.948586 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b37865c-22cd-4288-b47b-ef9ef1f33646-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"5b37865c-22cd-4288-b47b-ef9ef1f33646\") " pod="openstack/openstack-cell1-galera-0" Jan 21 07:16:13 crc kubenswrapper[4893]: I0121 07:16:13.050188 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5b37865c-22cd-4288-b47b-ef9ef1f33646-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"5b37865c-22cd-4288-b47b-ef9ef1f33646\") " pod="openstack/openstack-cell1-galera-0" Jan 21 07:16:13 crc kubenswrapper[4893]: I0121 07:16:13.050251 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b37865c-22cd-4288-b47b-ef9ef1f33646-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"5b37865c-22cd-4288-b47b-ef9ef1f33646\") " pod="openstack/openstack-cell1-galera-0" Jan 21 07:16:13 crc kubenswrapper[4893]: I0121 07:16:13.050290 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"5b37865c-22cd-4288-b47b-ef9ef1f33646\") " pod="openstack/openstack-cell1-galera-0" Jan 21 07:16:13 crc kubenswrapper[4893]: I0121 07:16:13.050319 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b37865c-22cd-4288-b47b-ef9ef1f33646-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"5b37865c-22cd-4288-b47b-ef9ef1f33646\") " pod="openstack/openstack-cell1-galera-0" Jan 21 07:16:13 crc kubenswrapper[4893]: I0121 07:16:13.050371 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5b37865c-22cd-4288-b47b-ef9ef1f33646-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"5b37865c-22cd-4288-b47b-ef9ef1f33646\") " pod="openstack/openstack-cell1-galera-0" Jan 21 07:16:13 crc kubenswrapper[4893]: I0121 07:16:13.050415 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdnhx\" (UniqueName: \"kubernetes.io/projected/5b37865c-22cd-4288-b47b-ef9ef1f33646-kube-api-access-zdnhx\") pod \"openstack-cell1-galera-0\" (UID: \"5b37865c-22cd-4288-b47b-ef9ef1f33646\") " pod="openstack/openstack-cell1-galera-0" Jan 21 07:16:13 crc kubenswrapper[4893]: I0121 07:16:13.050445 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b37865c-22cd-4288-b47b-ef9ef1f33646-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"5b37865c-22cd-4288-b47b-ef9ef1f33646\") " pod="openstack/openstack-cell1-galera-0" Jan 21 07:16:13 crc kubenswrapper[4893]: I0121 07:16:13.050473 4893 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5b37865c-22cd-4288-b47b-ef9ef1f33646-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"5b37865c-22cd-4288-b47b-ef9ef1f33646\") " pod="openstack/openstack-cell1-galera-0" Jan 21 07:16:13 crc kubenswrapper[4893]: I0121 07:16:13.051177 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5b37865c-22cd-4288-b47b-ef9ef1f33646-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"5b37865c-22cd-4288-b47b-ef9ef1f33646\") " pod="openstack/openstack-cell1-galera-0" Jan 21 07:16:13 crc kubenswrapper[4893]: I0121 07:16:13.051432 4893 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"5b37865c-22cd-4288-b47b-ef9ef1f33646\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-cell1-galera-0" Jan 21 07:16:13 crc kubenswrapper[4893]: I0121 07:16:13.051449 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5b37865c-22cd-4288-b47b-ef9ef1f33646-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"5b37865c-22cd-4288-b47b-ef9ef1f33646\") " pod="openstack/openstack-cell1-galera-0" Jan 21 07:16:13 crc kubenswrapper[4893]: I0121 07:16:13.052431 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b37865c-22cd-4288-b47b-ef9ef1f33646-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"5b37865c-22cd-4288-b47b-ef9ef1f33646\") " pod="openstack/openstack-cell1-galera-0" Jan 21 07:16:13 crc kubenswrapper[4893]: I0121 07:16:13.061231 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b37865c-22cd-4288-b47b-ef9ef1f33646-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"5b37865c-22cd-4288-b47b-ef9ef1f33646\") " pod="openstack/openstack-cell1-galera-0" Jan 21 07:16:13 crc kubenswrapper[4893]: I0121 07:16:13.061521 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5b37865c-22cd-4288-b47b-ef9ef1f33646-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"5b37865c-22cd-4288-b47b-ef9ef1f33646\") " pod="openstack/openstack-cell1-galera-0" Jan 21 07:16:13 crc kubenswrapper[4893]: I0121 07:16:13.064517 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b37865c-22cd-4288-b47b-ef9ef1f33646-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"5b37865c-22cd-4288-b47b-ef9ef1f33646\") " pod="openstack/openstack-cell1-galera-0" Jan 21 07:16:13 crc kubenswrapper[4893]: I0121 07:16:13.067333 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdnhx\" (UniqueName: \"kubernetes.io/projected/5b37865c-22cd-4288-b47b-ef9ef1f33646-kube-api-access-zdnhx\") pod \"openstack-cell1-galera-0\" (UID: \"5b37865c-22cd-4288-b47b-ef9ef1f33646\") " pod="openstack/openstack-cell1-galera-0" Jan 21 07:16:13 crc kubenswrapper[4893]: I0121 07:16:13.091867 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"5cc7c949-b993-484e-8e07-778a72743679","Type":"ContainerStarted","Data":"e15af96d6432f439ab2de49d8285b5ce0dd190b61240020f5e6f26b873a11a29"} Jan 21 07:16:13 crc kubenswrapper[4893]: I0121 07:16:13.101617 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"5b37865c-22cd-4288-b47b-ef9ef1f33646\") " pod="openstack/openstack-cell1-galera-0" Jan 21 07:16:13 crc kubenswrapper[4893]: I0121 07:16:13.211204 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 21 07:16:13 crc kubenswrapper[4893]: I0121 07:16:13.212683 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 21 07:16:13 crc kubenswrapper[4893]: I0121 07:16:13.214629 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 21 07:16:13 crc kubenswrapper[4893]: I0121 07:16:13.214816 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-9m8np" Jan 21 07:16:13 crc kubenswrapper[4893]: I0121 07:16:13.215012 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 21 07:16:13 crc kubenswrapper[4893]: I0121 07:16:13.224384 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 21 07:16:13 crc kubenswrapper[4893]: I0121 07:16:13.355951 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/520610a0-97e8-45ed-8020-952d9d4501b1-config-data\") pod \"memcached-0\" (UID: \"520610a0-97e8-45ed-8020-952d9d4501b1\") " pod="openstack/memcached-0" Jan 21 07:16:13 crc kubenswrapper[4893]: I0121 07:16:13.356006 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68pbl\" (UniqueName: \"kubernetes.io/projected/520610a0-97e8-45ed-8020-952d9d4501b1-kube-api-access-68pbl\") pod \"memcached-0\" (UID: \"520610a0-97e8-45ed-8020-952d9d4501b1\") " pod="openstack/memcached-0" Jan 21 07:16:13 crc kubenswrapper[4893]: I0121 07:16:13.356082 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/520610a0-97e8-45ed-8020-952d9d4501b1-kolla-config\") pod \"memcached-0\" (UID: \"520610a0-97e8-45ed-8020-952d9d4501b1\") " pod="openstack/memcached-0" Jan 21 07:16:13 crc kubenswrapper[4893]: I0121 07:16:13.358139 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/520610a0-97e8-45ed-8020-952d9d4501b1-memcached-tls-certs\") pod \"memcached-0\" (UID: \"520610a0-97e8-45ed-8020-952d9d4501b1\") " pod="openstack/memcached-0" Jan 21 07:16:13 crc kubenswrapper[4893]: I0121 07:16:13.358253 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/520610a0-97e8-45ed-8020-952d9d4501b1-combined-ca-bundle\") pod \"memcached-0\" (UID: \"520610a0-97e8-45ed-8020-952d9d4501b1\") " pod="openstack/memcached-0" Jan 21 07:16:13 crc kubenswrapper[4893]: I0121 07:16:13.416123 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 21 07:16:13 crc kubenswrapper[4893]: I0121 07:16:13.464495 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/520610a0-97e8-45ed-8020-952d9d4501b1-config-data\") pod \"memcached-0\" (UID: \"520610a0-97e8-45ed-8020-952d9d4501b1\") " pod="openstack/memcached-0" Jan 21 07:16:13 crc kubenswrapper[4893]: I0121 07:16:13.464549 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68pbl\" (UniqueName: \"kubernetes.io/projected/520610a0-97e8-45ed-8020-952d9d4501b1-kube-api-access-68pbl\") pod \"memcached-0\" (UID: \"520610a0-97e8-45ed-8020-952d9d4501b1\") " pod="openstack/memcached-0" Jan 21 07:16:13 crc kubenswrapper[4893]: I0121 07:16:13.464617 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/520610a0-97e8-45ed-8020-952d9d4501b1-kolla-config\") pod \"memcached-0\" (UID: \"520610a0-97e8-45ed-8020-952d9d4501b1\") " pod="openstack/memcached-0" Jan 21 07:16:13 crc kubenswrapper[4893]: I0121 07:16:13.464692 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/520610a0-97e8-45ed-8020-952d9d4501b1-memcached-tls-certs\") pod \"memcached-0\" (UID: \"520610a0-97e8-45ed-8020-952d9d4501b1\") " pod="openstack/memcached-0" Jan 21 07:16:13 crc kubenswrapper[4893]: I0121 07:16:13.464735 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/520610a0-97e8-45ed-8020-952d9d4501b1-combined-ca-bundle\") pod \"memcached-0\" (UID: \"520610a0-97e8-45ed-8020-952d9d4501b1\") " pod="openstack/memcached-0" Jan 21 07:16:13 crc kubenswrapper[4893]: I0121 07:16:13.465318 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/520610a0-97e8-45ed-8020-952d9d4501b1-config-data\") pod \"memcached-0\" (UID: \"520610a0-97e8-45ed-8020-952d9d4501b1\") " pod="openstack/memcached-0" Jan 21 07:16:13 crc kubenswrapper[4893]: I0121 07:16:13.470937 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/520610a0-97e8-45ed-8020-952d9d4501b1-kolla-config\") pod \"memcached-0\" (UID: \"520610a0-97e8-45ed-8020-952d9d4501b1\") " pod="openstack/memcached-0" Jan 21 07:16:13 crc kubenswrapper[4893]: I0121 07:16:13.487113 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/520610a0-97e8-45ed-8020-952d9d4501b1-combined-ca-bundle\") pod \"memcached-0\" (UID: \"520610a0-97e8-45ed-8020-952d9d4501b1\") " pod="openstack/memcached-0" Jan 21 07:16:13 crc kubenswrapper[4893]: I0121 07:16:13.489145 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/520610a0-97e8-45ed-8020-952d9d4501b1-memcached-tls-certs\") pod \"memcached-0\" (UID: \"520610a0-97e8-45ed-8020-952d9d4501b1\") " pod="openstack/memcached-0" Jan 21 07:16:13 crc kubenswrapper[4893]: I0121 07:16:13.493842 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68pbl\" (UniqueName: \"kubernetes.io/projected/520610a0-97e8-45ed-8020-952d9d4501b1-kube-api-access-68pbl\") pod \"memcached-0\" (UID: 
\"520610a0-97e8-45ed-8020-952d9d4501b1\") " pod="openstack/memcached-0" Jan 21 07:16:13 crc kubenswrapper[4893]: I0121 07:16:13.550930 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 21 07:16:14 crc kubenswrapper[4893]: I0121 07:16:14.332431 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 07:16:14 crc kubenswrapper[4893]: I0121 07:16:14.688061 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 21 07:16:14 crc kubenswrapper[4893]: W0121 07:16:14.709075 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod520610a0_97e8_45ed_8020_952d9d4501b1.slice/crio-b23e82f87a93924d5586ecb18b1e0c8a8d70b2c3e3672f1e477dd3c3a082d93c WatchSource:0}: Error finding container b23e82f87a93924d5586ecb18b1e0c8a8d70b2c3e3672f1e477dd3c3a082d93c: Status 404 returned error can't find the container with id b23e82f87a93924d5586ecb18b1e0c8a8d70b2c3e3672f1e477dd3c3a082d93c Jan 21 07:16:15 crc kubenswrapper[4893]: I0121 07:16:15.072915 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 07:16:15 crc kubenswrapper[4893]: I0121 07:16:15.075197 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 07:16:15 crc kubenswrapper[4893]: I0121 07:16:15.079298 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-mcmr7" Jan 21 07:16:15 crc kubenswrapper[4893]: I0121 07:16:15.099363 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 07:16:15 crc kubenswrapper[4893]: I0121 07:16:15.136911 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwhz4\" (UniqueName: \"kubernetes.io/projected/299c3f15-e0c0-4017-ac39-e3a2f0764928-kube-api-access-zwhz4\") pod \"kube-state-metrics-0\" (UID: \"299c3f15-e0c0-4017-ac39-e3a2f0764928\") " pod="openstack/kube-state-metrics-0" Jan 21 07:16:15 crc kubenswrapper[4893]: I0121 07:16:15.137537 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"520610a0-97e8-45ed-8020-952d9d4501b1","Type":"ContainerStarted","Data":"b23e82f87a93924d5586ecb18b1e0c8a8d70b2c3e3672f1e477dd3c3a082d93c"} Jan 21 07:16:15 crc kubenswrapper[4893]: I0121 07:16:15.141906 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5b37865c-22cd-4288-b47b-ef9ef1f33646","Type":"ContainerStarted","Data":"f6fb7f89b3c38c4706ce0998db7d9049a3566edf6e0b988b061138a8bc4f6cdf"} Jan 21 07:16:15 crc kubenswrapper[4893]: I0121 07:16:15.239631 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwhz4\" (UniqueName: \"kubernetes.io/projected/299c3f15-e0c0-4017-ac39-e3a2f0764928-kube-api-access-zwhz4\") pod \"kube-state-metrics-0\" (UID: \"299c3f15-e0c0-4017-ac39-e3a2f0764928\") " pod="openstack/kube-state-metrics-0" Jan 21 07:16:15 crc kubenswrapper[4893]: I0121 07:16:15.275264 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwhz4\" (UniqueName: \"kubernetes.io/projected/299c3f15-e0c0-4017-ac39-e3a2f0764928-kube-api-access-zwhz4\") pod \"kube-state-metrics-0\" (UID: \"299c3f15-e0c0-4017-ac39-e3a2f0764928\") " pod="openstack/kube-state-metrics-0" Jan 21 07:16:15 crc 
kubenswrapper[4893]: I0121 07:16:15.407920 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 07:16:15 crc kubenswrapper[4893]: I0121 07:16:15.967856 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 07:16:16 crc kubenswrapper[4893]: W0121 07:16:16.038381 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod299c3f15_e0c0_4017_ac39_e3a2f0764928.slice/crio-4ed8cc223d3c20222fee173a5f5162052c554b055b1d4993dc6ae93021ae96bf WatchSource:0}: Error finding container 4ed8cc223d3c20222fee173a5f5162052c554b055b1d4993dc6ae93021ae96bf: Status 404 returned error can't find the container with id 4ed8cc223d3c20222fee173a5f5162052c554b055b1d4993dc6ae93021ae96bf Jan 21 07:16:16 crc kubenswrapper[4893]: I0121 07:16:16.162418 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"299c3f15-e0c0-4017-ac39-e3a2f0764928","Type":"ContainerStarted","Data":"4ed8cc223d3c20222fee173a5f5162052c554b055b1d4993dc6ae93021ae96bf"} Jan 21 07:16:19 crc kubenswrapper[4893]: I0121 07:16:19.014866 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 07:16:19 crc kubenswrapper[4893]: I0121 07:16:19.016363 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 21 07:16:19 crc kubenswrapper[4893]: I0121 07:16:19.020206 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 21 07:16:19 crc kubenswrapper[4893]: I0121 07:16:19.021144 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 21 07:16:19 crc kubenswrapper[4893]: I0121 07:16:19.021478 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 21 07:16:19 crc kubenswrapper[4893]: I0121 07:16:19.021688 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-hk82r" Jan 21 07:16:19 crc kubenswrapper[4893]: I0121 07:16:19.028795 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 21 07:16:19 crc kubenswrapper[4893]: I0121 07:16:19.037701 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 07:16:19 crc kubenswrapper[4893]: I0121 07:16:19.186062 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/68b3d1f1-4c78-4a98-afcb-a2db1753d676-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"68b3d1f1-4c78-4a98-afcb-a2db1753d676\") " pod="openstack/ovsdbserver-nb-0" Jan 21 07:16:19 crc kubenswrapper[4893]: I0121 07:16:19.186227 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68b3d1f1-4c78-4a98-afcb-a2db1753d676-config\") pod \"ovsdbserver-nb-0\" (UID: \"68b3d1f1-4c78-4a98-afcb-a2db1753d676\") " pod="openstack/ovsdbserver-nb-0" Jan 21 07:16:19 crc kubenswrapper[4893]: I0121 07:16:19.186293 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/68b3d1f1-4c78-4a98-afcb-a2db1753d676-ovsdbserver-nb-tls-certs\") pod 
\"ovsdbserver-nb-0\" (UID: \"68b3d1f1-4c78-4a98-afcb-a2db1753d676\") " pod="openstack/ovsdbserver-nb-0" Jan 21 07:16:19 crc kubenswrapper[4893]: I0121 07:16:19.186329 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68b3d1f1-4c78-4a98-afcb-a2db1753d676-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"68b3d1f1-4c78-4a98-afcb-a2db1753d676\") " pod="openstack/ovsdbserver-nb-0" Jan 21 07:16:19 crc kubenswrapper[4893]: I0121 07:16:19.186363 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/68b3d1f1-4c78-4a98-afcb-a2db1753d676-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"68b3d1f1-4c78-4a98-afcb-a2db1753d676\") " pod="openstack/ovsdbserver-nb-0" Jan 21 07:16:19 crc kubenswrapper[4893]: I0121 07:16:19.186388 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rr4r\" (UniqueName: \"kubernetes.io/projected/68b3d1f1-4c78-4a98-afcb-a2db1753d676-kube-api-access-2rr4r\") pod \"ovsdbserver-nb-0\" (UID: \"68b3d1f1-4c78-4a98-afcb-a2db1753d676\") " pod="openstack/ovsdbserver-nb-0" Jan 21 07:16:19 crc kubenswrapper[4893]: I0121 07:16:19.186513 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/68b3d1f1-4c78-4a98-afcb-a2db1753d676-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"68b3d1f1-4c78-4a98-afcb-a2db1753d676\") " pod="openstack/ovsdbserver-nb-0" Jan 21 07:16:19 crc kubenswrapper[4893]: I0121 07:16:19.186557 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"68b3d1f1-4c78-4a98-afcb-a2db1753d676\") " pod="openstack/ovsdbserver-nb-0" Jan 21 07:16:19 crc kubenswrapper[4893]: I0121 07:16:19.290208 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/68b3d1f1-4c78-4a98-afcb-a2db1753d676-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"68b3d1f1-4c78-4a98-afcb-a2db1753d676\") " pod="openstack/ovsdbserver-nb-0" Jan 21 07:16:19 crc kubenswrapper[4893]: I0121 07:16:19.290279 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"68b3d1f1-4c78-4a98-afcb-a2db1753d676\") " pod="openstack/ovsdbserver-nb-0" Jan 21 07:16:19 crc kubenswrapper[4893]: I0121 07:16:19.290347 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/68b3d1f1-4c78-4a98-afcb-a2db1753d676-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"68b3d1f1-4c78-4a98-afcb-a2db1753d676\") " pod="openstack/ovsdbserver-nb-0" Jan 21 07:16:19 crc kubenswrapper[4893]: I0121 07:16:19.290370 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68b3d1f1-4c78-4a98-afcb-a2db1753d676-config\") pod \"ovsdbserver-nb-0\" (UID: \"68b3d1f1-4c78-4a98-afcb-a2db1753d676\") " pod="openstack/ovsdbserver-nb-0" Jan 21 07:16:19 crc kubenswrapper[4893]: I0121 07:16:19.290411 4893 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/68b3d1f1-4c78-4a98-afcb-a2db1753d676-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"68b3d1f1-4c78-4a98-afcb-a2db1753d676\") " pod="openstack/ovsdbserver-nb-0" Jan 21 07:16:19 crc kubenswrapper[4893]: I0121 07:16:19.290446 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68b3d1f1-4c78-4a98-afcb-a2db1753d676-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"68b3d1f1-4c78-4a98-afcb-a2db1753d676\") " pod="openstack/ovsdbserver-nb-0" Jan 21 07:16:19 crc kubenswrapper[4893]: I0121 07:16:19.290473 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/68b3d1f1-4c78-4a98-afcb-a2db1753d676-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"68b3d1f1-4c78-4a98-afcb-a2db1753d676\") " pod="openstack/ovsdbserver-nb-0" Jan 21 07:16:19 crc kubenswrapper[4893]: I0121 07:16:19.290494 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rr4r\" (UniqueName: \"kubernetes.io/projected/68b3d1f1-4c78-4a98-afcb-a2db1753d676-kube-api-access-2rr4r\") pod \"ovsdbserver-nb-0\" (UID: \"68b3d1f1-4c78-4a98-afcb-a2db1753d676\") " pod="openstack/ovsdbserver-nb-0" Jan 21 07:16:19 crc kubenswrapper[4893]: I0121 07:16:19.292161 4893 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"68b3d1f1-4c78-4a98-afcb-a2db1753d676\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/ovsdbserver-nb-0" Jan 21 07:16:19 crc kubenswrapper[4893]: I0121 07:16:19.292533 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68b3d1f1-4c78-4a98-afcb-a2db1753d676-config\") pod \"ovsdbserver-nb-0\" (UID: \"68b3d1f1-4c78-4a98-afcb-a2db1753d676\") " pod="openstack/ovsdbserver-nb-0" Jan 21 07:16:19 crc kubenswrapper[4893]: I0121 07:16:19.292985 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/68b3d1f1-4c78-4a98-afcb-a2db1753d676-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"68b3d1f1-4c78-4a98-afcb-a2db1753d676\") " pod="openstack/ovsdbserver-nb-0" Jan 21 07:16:19 crc kubenswrapper[4893]: I0121 07:16:19.298261 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/68b3d1f1-4c78-4a98-afcb-a2db1753d676-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"68b3d1f1-4c78-4a98-afcb-a2db1753d676\") " pod="openstack/ovsdbserver-nb-0" Jan 21 07:16:19 crc kubenswrapper[4893]: I0121 07:16:19.304571 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/68b3d1f1-4c78-4a98-afcb-a2db1753d676-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"68b3d1f1-4c78-4a98-afcb-a2db1753d676\") " pod="openstack/ovsdbserver-nb-0" Jan 21 07:16:19 crc kubenswrapper[4893]: I0121 07:16:19.305213 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68b3d1f1-4c78-4a98-afcb-a2db1753d676-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"68b3d1f1-4c78-4a98-afcb-a2db1753d676\") " pod="openstack/ovsdbserver-nb-0" Jan 21 
07:16:19 crc kubenswrapper[4893]: I0121 07:16:19.310263 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rr4r\" (UniqueName: \"kubernetes.io/projected/68b3d1f1-4c78-4a98-afcb-a2db1753d676-kube-api-access-2rr4r\") pod \"ovsdbserver-nb-0\" (UID: \"68b3d1f1-4c78-4a98-afcb-a2db1753d676\") " pod="openstack/ovsdbserver-nb-0" Jan 21 07:16:19 crc kubenswrapper[4893]: I0121 07:16:19.323174 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"68b3d1f1-4c78-4a98-afcb-a2db1753d676\") " pod="openstack/ovsdbserver-nb-0" Jan 21 07:16:19 crc kubenswrapper[4893]: I0121 07:16:19.324202 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/68b3d1f1-4c78-4a98-afcb-a2db1753d676-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"68b3d1f1-4c78-4a98-afcb-a2db1753d676\") " pod="openstack/ovsdbserver-nb-0" Jan 21 07:16:19 crc kubenswrapper[4893]: I0121 07:16:19.339274 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 21 07:16:19 crc kubenswrapper[4893]: I0121 07:16:19.977756 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-dfvzw"] Jan 21 07:16:19 crc kubenswrapper[4893]: I0121 07:16:19.979348 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-dfvzw" Jan 21 07:16:19 crc kubenswrapper[4893]: I0121 07:16:19.987268 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 21 07:16:19 crc kubenswrapper[4893]: I0121 07:16:19.987533 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-b9pdw" Jan 21 07:16:19 crc kubenswrapper[4893]: I0121 07:16:19.994800 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-dfvzw"] Jan 21 07:16:19 crc kubenswrapper[4893]: I0121 07:16:19.997818 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 21 07:16:20 crc kubenswrapper[4893]: I0121 07:16:20.038048 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-zvt96"] Jan 21 07:16:20 crc kubenswrapper[4893]: I0121 07:16:20.047829 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-zvt96" Jan 21 07:16:20 crc kubenswrapper[4893]: I0121 07:16:20.048324 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-zvt96"] Jan 21 07:16:20 crc kubenswrapper[4893]: I0121 07:16:20.111102 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/80680178-a1d2-4135-8949-881dc7ac92ea-var-run-ovn\") pod \"ovn-controller-dfvzw\" (UID: \"80680178-a1d2-4135-8949-881dc7ac92ea\") " pod="openstack/ovn-controller-dfvzw" Jan 21 07:16:20 crc kubenswrapper[4893]: I0121 07:16:20.111187 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/80680178-a1d2-4135-8949-881dc7ac92ea-var-log-ovn\") pod \"ovn-controller-dfvzw\" (UID: \"80680178-a1d2-4135-8949-881dc7ac92ea\") " pod="openstack/ovn-controller-dfvzw" Jan 21 07:16:20 crc kubenswrapper[4893]: I0121 07:16:20.111215 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/80680178-a1d2-4135-8949-881dc7ac92ea-scripts\") pod \"ovn-controller-dfvzw\" (UID: \"80680178-a1d2-4135-8949-881dc7ac92ea\") " pod="openstack/ovn-controller-dfvzw" Jan 21 07:16:20 crc kubenswrapper[4893]: I0121 07:16:20.111275 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80680178-a1d2-4135-8949-881dc7ac92ea-combined-ca-bundle\") pod \"ovn-controller-dfvzw\" (UID: \"80680178-a1d2-4135-8949-881dc7ac92ea\") " pod="openstack/ovn-controller-dfvzw" Jan 21 07:16:20 crc kubenswrapper[4893]: I0121 07:16:20.111310 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/80680178-a1d2-4135-8949-881dc7ac92ea-var-run\") pod \"ovn-controller-dfvzw\" (UID: \"80680178-a1d2-4135-8949-881dc7ac92ea\") " pod="openstack/ovn-controller-dfvzw" Jan 21 07:16:20 crc kubenswrapper[4893]: I0121 07:16:20.111342 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/80680178-a1d2-4135-8949-881dc7ac92ea-ovn-controller-tls-certs\") pod \"ovn-controller-dfvzw\" (UID: \"80680178-a1d2-4135-8949-881dc7ac92ea\") " pod="openstack/ovn-controller-dfvzw" Jan 21 07:16:20 crc kubenswrapper[4893]: I0121 07:16:20.111372 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rrk5\" (UniqueName: \"kubernetes.io/projected/80680178-a1d2-4135-8949-881dc7ac92ea-kube-api-access-6rrk5\") pod \"ovn-controller-dfvzw\" (UID: \"80680178-a1d2-4135-8949-881dc7ac92ea\") " pod="openstack/ovn-controller-dfvzw" Jan 21 07:16:20 crc kubenswrapper[4893]: I0121 07:16:20.212804 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/78d5f974-5570-4407-8dbe-7471ae98fd50-var-lib\") pod \"ovn-controller-ovs-zvt96\" (UID: \"78d5f974-5570-4407-8dbe-7471ae98fd50\") " pod="openstack/ovn-controller-ovs-zvt96" Jan 21 07:16:20 crc kubenswrapper[4893]: I0121 07:16:20.213160 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rrk5\" (UniqueName: 
\"kubernetes.io/projected/80680178-a1d2-4135-8949-881dc7ac92ea-kube-api-access-6rrk5\") pod \"ovn-controller-dfvzw\" (UID: \"80680178-a1d2-4135-8949-881dc7ac92ea\") " pod="openstack/ovn-controller-dfvzw" Jan 21 07:16:20 crc kubenswrapper[4893]: I0121 07:16:20.213200 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/80680178-a1d2-4135-8949-881dc7ac92ea-var-run-ovn\") pod \"ovn-controller-dfvzw\" (UID: \"80680178-a1d2-4135-8949-881dc7ac92ea\") " pod="openstack/ovn-controller-dfvzw" Jan 21 07:16:20 crc kubenswrapper[4893]: I0121 07:16:20.213234 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/80680178-a1d2-4135-8949-881dc7ac92ea-var-log-ovn\") pod \"ovn-controller-dfvzw\" (UID: \"80680178-a1d2-4135-8949-881dc7ac92ea\") " pod="openstack/ovn-controller-dfvzw" Jan 21 07:16:20 crc kubenswrapper[4893]: I0121 07:16:20.213252 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/80680178-a1d2-4135-8949-881dc7ac92ea-scripts\") pod \"ovn-controller-dfvzw\" (UID: \"80680178-a1d2-4135-8949-881dc7ac92ea\") " pod="openstack/ovn-controller-dfvzw" Jan 21 07:16:20 crc kubenswrapper[4893]: I0121 07:16:20.213274 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/78d5f974-5570-4407-8dbe-7471ae98fd50-var-run\") pod \"ovn-controller-ovs-zvt96\" (UID: \"78d5f974-5570-4407-8dbe-7471ae98fd50\") " pod="openstack/ovn-controller-ovs-zvt96" Jan 21 07:16:20 crc kubenswrapper[4893]: I0121 07:16:20.213294 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl7zx\" (UniqueName: \"kubernetes.io/projected/78d5f974-5570-4407-8dbe-7471ae98fd50-kube-api-access-rl7zx\") pod \"ovn-controller-ovs-zvt96\" (UID: \"78d5f974-5570-4407-8dbe-7471ae98fd50\") " pod="openstack/ovn-controller-ovs-zvt96" Jan 21 07:16:20 crc kubenswrapper[4893]: I0121 07:16:20.213314 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/78d5f974-5570-4407-8dbe-7471ae98fd50-scripts\") pod \"ovn-controller-ovs-zvt96\" (UID: \"78d5f974-5570-4407-8dbe-7471ae98fd50\") " pod="openstack/ovn-controller-ovs-zvt96" Jan 21 07:16:20 crc kubenswrapper[4893]: I0121 07:16:20.213342 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80680178-a1d2-4135-8949-881dc7ac92ea-combined-ca-bundle\") pod \"ovn-controller-dfvzw\" (UID: \"80680178-a1d2-4135-8949-881dc7ac92ea\") " pod="openstack/ovn-controller-dfvzw" Jan 21 07:16:20 crc kubenswrapper[4893]: I0121 07:16:20.213363 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/80680178-a1d2-4135-8949-881dc7ac92ea-var-run\") pod \"ovn-controller-dfvzw\" (UID: \"80680178-a1d2-4135-8949-881dc7ac92ea\") " pod="openstack/ovn-controller-dfvzw" Jan 21 07:16:20 crc kubenswrapper[4893]: I0121 07:16:20.213406 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/80680178-a1d2-4135-8949-881dc7ac92ea-ovn-controller-tls-certs\") pod \"ovn-controller-dfvzw\" (UID: 
\"80680178-a1d2-4135-8949-881dc7ac92ea\") " pod="openstack/ovn-controller-dfvzw" Jan 21 07:16:20 crc kubenswrapper[4893]: I0121 07:16:20.213452 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/78d5f974-5570-4407-8dbe-7471ae98fd50-etc-ovs\") pod \"ovn-controller-ovs-zvt96\" (UID: \"78d5f974-5570-4407-8dbe-7471ae98fd50\") " pod="openstack/ovn-controller-ovs-zvt96" Jan 21 07:16:20 crc kubenswrapper[4893]: I0121 07:16:20.213478 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/78d5f974-5570-4407-8dbe-7471ae98fd50-var-log\") pod \"ovn-controller-ovs-zvt96\" (UID: \"78d5f974-5570-4407-8dbe-7471ae98fd50\") " pod="openstack/ovn-controller-ovs-zvt96" Jan 21 07:16:20 crc kubenswrapper[4893]: I0121 07:16:20.214638 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/80680178-a1d2-4135-8949-881dc7ac92ea-var-run-ovn\") pod \"ovn-controller-dfvzw\" (UID: \"80680178-a1d2-4135-8949-881dc7ac92ea\") " pod="openstack/ovn-controller-dfvzw" Jan 21 07:16:20 crc kubenswrapper[4893]: I0121 07:16:20.214801 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/80680178-a1d2-4135-8949-881dc7ac92ea-var-log-ovn\") pod \"ovn-controller-dfvzw\" (UID: \"80680178-a1d2-4135-8949-881dc7ac92ea\") " pod="openstack/ovn-controller-dfvzw" Jan 21 07:16:20 crc kubenswrapper[4893]: I0121 07:16:20.216810 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/80680178-a1d2-4135-8949-881dc7ac92ea-scripts\") pod \"ovn-controller-dfvzw\" (UID: \"80680178-a1d2-4135-8949-881dc7ac92ea\") " pod="openstack/ovn-controller-dfvzw" Jan 21 07:16:20 crc kubenswrapper[4893]: I0121 07:16:20.216977 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/80680178-a1d2-4135-8949-881dc7ac92ea-var-run\") pod \"ovn-controller-dfvzw\" (UID: \"80680178-a1d2-4135-8949-881dc7ac92ea\") " pod="openstack/ovn-controller-dfvzw" Jan 21 07:16:20 crc kubenswrapper[4893]: I0121 07:16:20.233597 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80680178-a1d2-4135-8949-881dc7ac92ea-combined-ca-bundle\") pod \"ovn-controller-dfvzw\" (UID: \"80680178-a1d2-4135-8949-881dc7ac92ea\") " pod="openstack/ovn-controller-dfvzw" Jan 21 07:16:20 crc kubenswrapper[4893]: I0121 07:16:20.238249 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rrk5\" (UniqueName: \"kubernetes.io/projected/80680178-a1d2-4135-8949-881dc7ac92ea-kube-api-access-6rrk5\") pod \"ovn-controller-dfvzw\" (UID: \"80680178-a1d2-4135-8949-881dc7ac92ea\") " pod="openstack/ovn-controller-dfvzw" Jan 21 07:16:20 crc kubenswrapper[4893]: I0121 07:16:20.249246 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/80680178-a1d2-4135-8949-881dc7ac92ea-ovn-controller-tls-certs\") pod \"ovn-controller-dfvzw\" (UID: \"80680178-a1d2-4135-8949-881dc7ac92ea\") " pod="openstack/ovn-controller-dfvzw" Jan 21 07:16:20 crc kubenswrapper[4893]: I0121 07:16:20.314553 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/78d5f974-5570-4407-8dbe-7471ae98fd50-scripts\") pod \"ovn-controller-ovs-zvt96\" (UID: \"78d5f974-5570-4407-8dbe-7471ae98fd50\") " pod="openstack/ovn-controller-ovs-zvt96" Jan 21 07:16:20 crc kubenswrapper[4893]: I0121 07:16:20.314662 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/78d5f974-5570-4407-8dbe-7471ae98fd50-etc-ovs\") pod \"ovn-controller-ovs-zvt96\" (UID: \"78d5f974-5570-4407-8dbe-7471ae98fd50\") " pod="openstack/ovn-controller-ovs-zvt96" Jan 21 07:16:20 crc kubenswrapper[4893]: I0121 07:16:20.314701 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/78d5f974-5570-4407-8dbe-7471ae98fd50-var-log\") pod \"ovn-controller-ovs-zvt96\" (UID: \"78d5f974-5570-4407-8dbe-7471ae98fd50\") " pod="openstack/ovn-controller-ovs-zvt96" Jan 21 07:16:20 crc kubenswrapper[4893]: I0121 07:16:20.314763 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/78d5f974-5570-4407-8dbe-7471ae98fd50-var-lib\") pod \"ovn-controller-ovs-zvt96\" (UID: \"78d5f974-5570-4407-8dbe-7471ae98fd50\") " pod="openstack/ovn-controller-ovs-zvt96" Jan 21 07:16:20 crc kubenswrapper[4893]: I0121 07:16:20.314823 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/78d5f974-5570-4407-8dbe-7471ae98fd50-var-run\") pod \"ovn-controller-ovs-zvt96\" (UID: \"78d5f974-5570-4407-8dbe-7471ae98fd50\") " pod="openstack/ovn-controller-ovs-zvt96" Jan 21 07:16:20 crc kubenswrapper[4893]: I0121 07:16:20.314845 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl7zx\" (UniqueName: \"kubernetes.io/projected/78d5f974-5570-4407-8dbe-7471ae98fd50-kube-api-access-rl7zx\") pod \"ovn-controller-ovs-zvt96\" (UID: \"78d5f974-5570-4407-8dbe-7471ae98fd50\") " pod="openstack/ovn-controller-ovs-zvt96" Jan 21 07:16:20 crc kubenswrapper[4893]: I0121 07:16:20.317317 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/78d5f974-5570-4407-8dbe-7471ae98fd50-scripts\") pod \"ovn-controller-ovs-zvt96\" (UID: \"78d5f974-5570-4407-8dbe-7471ae98fd50\") " pod="openstack/ovn-controller-ovs-zvt96" Jan 21 07:16:20 crc kubenswrapper[4893]: I0121 07:16:20.317927 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/78d5f974-5570-4407-8dbe-7471ae98fd50-etc-ovs\") pod \"ovn-controller-ovs-zvt96\" (UID: \"78d5f974-5570-4407-8dbe-7471ae98fd50\") " pod="openstack/ovn-controller-ovs-zvt96" Jan 21 07:16:20 crc kubenswrapper[4893]: I0121 07:16:20.318640 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/78d5f974-5570-4407-8dbe-7471ae98fd50-var-log\") pod \"ovn-controller-ovs-zvt96\" (UID: \"78d5f974-5570-4407-8dbe-7471ae98fd50\") " pod="openstack/ovn-controller-ovs-zvt96" Jan 21 07:16:20 crc kubenswrapper[4893]: I0121 07:16:20.318782 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/78d5f974-5570-4407-8dbe-7471ae98fd50-var-lib\") pod \"ovn-controller-ovs-zvt96\" (UID: \"78d5f974-5570-4407-8dbe-7471ae98fd50\") " pod="openstack/ovn-controller-ovs-zvt96" Jan 21 07:16:20 crc 
kubenswrapper[4893]: I0121 07:16:20.318827 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/78d5f974-5570-4407-8dbe-7471ae98fd50-var-run\") pod \"ovn-controller-ovs-zvt96\" (UID: \"78d5f974-5570-4407-8dbe-7471ae98fd50\") " pod="openstack/ovn-controller-ovs-zvt96" Jan 21 07:16:20 crc kubenswrapper[4893]: I0121 07:16:20.321782 4893 container_manager_linux.go:630] "Failed to ensure state" containerName="/system.slice" err="failed to move PID 41579 into the system container \"/system.slice\": " Jan 21 07:16:20 crc kubenswrapper[4893]: I0121 07:16:20.346610 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl7zx\" (UniqueName: \"kubernetes.io/projected/78d5f974-5570-4407-8dbe-7471ae98fd50-kube-api-access-rl7zx\") pod \"ovn-controller-ovs-zvt96\" (UID: \"78d5f974-5570-4407-8dbe-7471ae98fd50\") " pod="openstack/ovn-controller-ovs-zvt96" Jan 21 07:16:20 crc kubenswrapper[4893]: I0121 07:16:20.366962 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-b9pdw" Jan 21 07:16:20 crc kubenswrapper[4893]: I0121 07:16:20.371016 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-dfvzw" Jan 21 07:16:20 crc kubenswrapper[4893]: I0121 07:16:20.432511 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-zvt96" Jan 21 07:16:20 crc kubenswrapper[4893]: I0121 07:16:20.877497 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-dfvzw"] Jan 21 07:16:20 crc kubenswrapper[4893]: I0121 07:16:20.909085 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 07:16:21 crc kubenswrapper[4893]: I0121 07:16:21.159910 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-zvt96"] Jan 21 07:16:21 crc kubenswrapper[4893]: W0121 07:16:21.166398 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78d5f974_5570_4407_8dbe_7471ae98fd50.slice/crio-c8e7d2ddaea979663df553c4b5e9392d4ab7f1a7c28eb6b598fb8c5772fbb88f WatchSource:0}: Error finding container c8e7d2ddaea979663df553c4b5e9392d4ab7f1a7c28eb6b598fb8c5772fbb88f: Status 404 returned error can't find the container with id c8e7d2ddaea979663df553c4b5e9392d4ab7f1a7c28eb6b598fb8c5772fbb88f Jan 21 07:16:21 crc kubenswrapper[4893]: I0121 07:16:21.279906 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zvt96" event={"ID":"78d5f974-5570-4407-8dbe-7471ae98fd50","Type":"ContainerStarted","Data":"c8e7d2ddaea979663df553c4b5e9392d4ab7f1a7c28eb6b598fb8c5772fbb88f"} Jan 21 07:16:21 crc kubenswrapper[4893]: I0121 07:16:21.283074 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"68b3d1f1-4c78-4a98-afcb-a2db1753d676","Type":"ContainerStarted","Data":"d7811524884608c187772891582746d02abbb30dda996fa538e08956e33be2a8"} Jan 21 07:16:21 crc kubenswrapper[4893]: I0121 07:16:21.284845 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dfvzw" event={"ID":"80680178-a1d2-4135-8949-881dc7ac92ea","Type":"ContainerStarted","Data":"e2723075f52f2c3d1aca94260b3ed49da8105c5c53b5c3f888d2a7656cfe3ba0"} Jan 21 07:16:21 crc kubenswrapper[4893]: I0121 07:16:21.520317 4893 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/ovn-controller-metrics-7s4fm"] Jan 21 07:16:21 crc kubenswrapper[4893]: I0121 07:16:21.521749 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-7s4fm" Jan 21 07:16:21 crc kubenswrapper[4893]: I0121 07:16:21.524416 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 21 07:16:21 crc kubenswrapper[4893]: I0121 07:16:21.541923 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12c05c26-e0c2-4516-9fa6-8dc8779d1430-config\") pod \"ovn-controller-metrics-7s4fm\" (UID: \"12c05c26-e0c2-4516-9fa6-8dc8779d1430\") " pod="openstack/ovn-controller-metrics-7s4fm" Jan 21 07:16:21 crc kubenswrapper[4893]: I0121 07:16:21.541971 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/12c05c26-e0c2-4516-9fa6-8dc8779d1430-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-7s4fm\" (UID: \"12c05c26-e0c2-4516-9fa6-8dc8779d1430\") " pod="openstack/ovn-controller-metrics-7s4fm" Jan 21 07:16:21 crc kubenswrapper[4893]: I0121 07:16:21.542034 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/12c05c26-e0c2-4516-9fa6-8dc8779d1430-ovs-rundir\") pod \"ovn-controller-metrics-7s4fm\" (UID: \"12c05c26-e0c2-4516-9fa6-8dc8779d1430\") " pod="openstack/ovn-controller-metrics-7s4fm" Jan 21 07:16:21 crc kubenswrapper[4893]: I0121 07:16:21.542075 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4shq\" (UniqueName: \"kubernetes.io/projected/12c05c26-e0c2-4516-9fa6-8dc8779d1430-kube-api-access-k4shq\") pod \"ovn-controller-metrics-7s4fm\" (UID: \"12c05c26-e0c2-4516-9fa6-8dc8779d1430\") " pod="openstack/ovn-controller-metrics-7s4fm" Jan 21 07:16:21 crc kubenswrapper[4893]: I0121 07:16:21.542102 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12c05c26-e0c2-4516-9fa6-8dc8779d1430-combined-ca-bundle\") pod \"ovn-controller-metrics-7s4fm\" (UID: \"12c05c26-e0c2-4516-9fa6-8dc8779d1430\") " pod="openstack/ovn-controller-metrics-7s4fm" Jan 21 07:16:21 crc kubenswrapper[4893]: I0121 07:16:21.542123 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/12c05c26-e0c2-4516-9fa6-8dc8779d1430-ovn-rundir\") pod \"ovn-controller-metrics-7s4fm\" (UID: \"12c05c26-e0c2-4516-9fa6-8dc8779d1430\") " pod="openstack/ovn-controller-metrics-7s4fm" Jan 21 07:16:21 crc kubenswrapper[4893]: I0121 07:16:21.543196 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-7s4fm"] Jan 21 07:16:21 crc kubenswrapper[4893]: I0121 07:16:21.643967 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/12c05c26-e0c2-4516-9fa6-8dc8779d1430-ovs-rundir\") pod \"ovn-controller-metrics-7s4fm\" (UID: \"12c05c26-e0c2-4516-9fa6-8dc8779d1430\") " pod="openstack/ovn-controller-metrics-7s4fm" Jan 21 07:16:21 crc kubenswrapper[4893]: I0121 07:16:21.644043 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-k4shq\" (UniqueName: \"kubernetes.io/projected/12c05c26-e0c2-4516-9fa6-8dc8779d1430-kube-api-access-k4shq\") pod \"ovn-controller-metrics-7s4fm\" (UID: \"12c05c26-e0c2-4516-9fa6-8dc8779d1430\") " pod="openstack/ovn-controller-metrics-7s4fm" Jan 21 07:16:21 crc kubenswrapper[4893]: I0121 07:16:21.644081 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12c05c26-e0c2-4516-9fa6-8dc8779d1430-combined-ca-bundle\") pod \"ovn-controller-metrics-7s4fm\" (UID: \"12c05c26-e0c2-4516-9fa6-8dc8779d1430\") " pod="openstack/ovn-controller-metrics-7s4fm" Jan 21 07:16:21 crc kubenswrapper[4893]: I0121 07:16:21.644102 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/12c05c26-e0c2-4516-9fa6-8dc8779d1430-ovn-rundir\") pod \"ovn-controller-metrics-7s4fm\" (UID: \"12c05c26-e0c2-4516-9fa6-8dc8779d1430\") " pod="openstack/ovn-controller-metrics-7s4fm" Jan 21 07:16:21 crc kubenswrapper[4893]: I0121 07:16:21.644148 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12c05c26-e0c2-4516-9fa6-8dc8779d1430-config\") pod \"ovn-controller-metrics-7s4fm\" (UID: \"12c05c26-e0c2-4516-9fa6-8dc8779d1430\") " pod="openstack/ovn-controller-metrics-7s4fm" Jan 21 07:16:21 crc kubenswrapper[4893]: I0121 07:16:21.644173 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/12c05c26-e0c2-4516-9fa6-8dc8779d1430-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-7s4fm\" (UID: \"12c05c26-e0c2-4516-9fa6-8dc8779d1430\") " pod="openstack/ovn-controller-metrics-7s4fm" Jan 21 07:16:21 crc kubenswrapper[4893]: I0121 07:16:21.644343 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/12c05c26-e0c2-4516-9fa6-8dc8779d1430-ovs-rundir\") pod \"ovn-controller-metrics-7s4fm\" (UID: \"12c05c26-e0c2-4516-9fa6-8dc8779d1430\") " pod="openstack/ovn-controller-metrics-7s4fm" Jan 21 07:16:21 crc kubenswrapper[4893]: I0121 07:16:21.644420 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/12c05c26-e0c2-4516-9fa6-8dc8779d1430-ovn-rundir\") pod \"ovn-controller-metrics-7s4fm\" (UID: \"12c05c26-e0c2-4516-9fa6-8dc8779d1430\") " pod="openstack/ovn-controller-metrics-7s4fm" Jan 21 07:16:21 crc kubenswrapper[4893]: I0121 07:16:21.645009 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12c05c26-e0c2-4516-9fa6-8dc8779d1430-config\") pod \"ovn-controller-metrics-7s4fm\" (UID: \"12c05c26-e0c2-4516-9fa6-8dc8779d1430\") " pod="openstack/ovn-controller-metrics-7s4fm" Jan 21 07:16:21 crc kubenswrapper[4893]: I0121 07:16:21.650159 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/12c05c26-e0c2-4516-9fa6-8dc8779d1430-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-7s4fm\" (UID: \"12c05c26-e0c2-4516-9fa6-8dc8779d1430\") " pod="openstack/ovn-controller-metrics-7s4fm" Jan 21 07:16:21 crc kubenswrapper[4893]: I0121 07:16:21.662433 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/12c05c26-e0c2-4516-9fa6-8dc8779d1430-combined-ca-bundle\") pod \"ovn-controller-metrics-7s4fm\" (UID: \"12c05c26-e0c2-4516-9fa6-8dc8779d1430\") " pod="openstack/ovn-controller-metrics-7s4fm" Jan 21 07:16:21 crc kubenswrapper[4893]: I0121 07:16:21.665280 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4shq\" (UniqueName: \"kubernetes.io/projected/12c05c26-e0c2-4516-9fa6-8dc8779d1430-kube-api-access-k4shq\") pod \"ovn-controller-metrics-7s4fm\" (UID: \"12c05c26-e0c2-4516-9fa6-8dc8779d1430\") " pod="openstack/ovn-controller-metrics-7s4fm" Jan 21 07:16:21 crc kubenswrapper[4893]: I0121 07:16:21.849425 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-7s4fm" Jan 21 07:16:21 crc kubenswrapper[4893]: I0121 07:16:21.952713 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c7cbb8f79-z8g6b"] Jan 21 07:16:21 crc kubenswrapper[4893]: I0121 07:16:21.986310 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7878659675-cxgvk"] Jan 21 07:16:21 crc kubenswrapper[4893]: I0121 07:16:21.988044 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7878659675-cxgvk" Jan 21 07:16:21 crc kubenswrapper[4893]: I0121 07:16:21.990164 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 21 07:16:22 crc kubenswrapper[4893]: I0121 07:16:22.001826 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7878659675-cxgvk"] Jan 21 07:16:22 crc kubenswrapper[4893]: I0121 07:16:22.052616 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/07b9c0c4-505d-4af3-ac57-3a379550f85f-ovsdbserver-nb\") pod \"dnsmasq-dns-7878659675-cxgvk\" (UID: \"07b9c0c4-505d-4af3-ac57-3a379550f85f\") " pod="openstack/dnsmasq-dns-7878659675-cxgvk" Jan 21 07:16:22 crc kubenswrapper[4893]: I0121 07:16:22.052741 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07b9c0c4-505d-4af3-ac57-3a379550f85f-config\") pod \"dnsmasq-dns-7878659675-cxgvk\" (UID: \"07b9c0c4-505d-4af3-ac57-3a379550f85f\") " pod="openstack/dnsmasq-dns-7878659675-cxgvk" Jan 21 07:16:22 crc kubenswrapper[4893]: I0121 07:16:22.052824 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s8st\" (UniqueName: \"kubernetes.io/projected/07b9c0c4-505d-4af3-ac57-3a379550f85f-kube-api-access-2s8st\") pod \"dnsmasq-dns-7878659675-cxgvk\" (UID: \"07b9c0c4-505d-4af3-ac57-3a379550f85f\") " pod="openstack/dnsmasq-dns-7878659675-cxgvk" Jan 21 07:16:22 crc kubenswrapper[4893]: I0121 07:16:22.052879 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07b9c0c4-505d-4af3-ac57-3a379550f85f-dns-svc\") pod \"dnsmasq-dns-7878659675-cxgvk\" (UID: \"07b9c0c4-505d-4af3-ac57-3a379550f85f\") " pod="openstack/dnsmasq-dns-7878659675-cxgvk" Jan 21 07:16:22 crc kubenswrapper[4893]: I0121 07:16:22.196810 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2s8st\" (UniqueName: \"kubernetes.io/projected/07b9c0c4-505d-4af3-ac57-3a379550f85f-kube-api-access-2s8st\") pod \"dnsmasq-dns-7878659675-cxgvk\" 
(UID: \"07b9c0c4-505d-4af3-ac57-3a379550f85f\") " pod="openstack/dnsmasq-dns-7878659675-cxgvk" Jan 21 07:16:22 crc kubenswrapper[4893]: I0121 07:16:22.197652 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07b9c0c4-505d-4af3-ac57-3a379550f85f-dns-svc\") pod \"dnsmasq-dns-7878659675-cxgvk\" (UID: \"07b9c0c4-505d-4af3-ac57-3a379550f85f\") " pod="openstack/dnsmasq-dns-7878659675-cxgvk" Jan 21 07:16:22 crc kubenswrapper[4893]: I0121 07:16:22.198924 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07b9c0c4-505d-4af3-ac57-3a379550f85f-dns-svc\") pod \"dnsmasq-dns-7878659675-cxgvk\" (UID: \"07b9c0c4-505d-4af3-ac57-3a379550f85f\") " pod="openstack/dnsmasq-dns-7878659675-cxgvk" Jan 21 07:16:22 crc kubenswrapper[4893]: I0121 07:16:22.202110 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/07b9c0c4-505d-4af3-ac57-3a379550f85f-ovsdbserver-nb\") pod \"dnsmasq-dns-7878659675-cxgvk\" (UID: \"07b9c0c4-505d-4af3-ac57-3a379550f85f\") " pod="openstack/dnsmasq-dns-7878659675-cxgvk" Jan 21 07:16:22 crc kubenswrapper[4893]: I0121 07:16:22.202940 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/07b9c0c4-505d-4af3-ac57-3a379550f85f-ovsdbserver-nb\") pod \"dnsmasq-dns-7878659675-cxgvk\" (UID: \"07b9c0c4-505d-4af3-ac57-3a379550f85f\") " pod="openstack/dnsmasq-dns-7878659675-cxgvk" Jan 21 07:16:22 crc kubenswrapper[4893]: I0121 07:16:22.203135 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07b9c0c4-505d-4af3-ac57-3a379550f85f-config\") pod \"dnsmasq-dns-7878659675-cxgvk\" (UID: \"07b9c0c4-505d-4af3-ac57-3a379550f85f\") " pod="openstack/dnsmasq-dns-7878659675-cxgvk" Jan 21 07:16:22 crc kubenswrapper[4893]: I0121 07:16:22.203977 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07b9c0c4-505d-4af3-ac57-3a379550f85f-config\") pod \"dnsmasq-dns-7878659675-cxgvk\" (UID: \"07b9c0c4-505d-4af3-ac57-3a379550f85f\") " pod="openstack/dnsmasq-dns-7878659675-cxgvk" Jan 21 07:16:22 crc kubenswrapper[4893]: I0121 07:16:22.220132 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2s8st\" (UniqueName: \"kubernetes.io/projected/07b9c0c4-505d-4af3-ac57-3a379550f85f-kube-api-access-2s8st\") pod \"dnsmasq-dns-7878659675-cxgvk\" (UID: \"07b9c0c4-505d-4af3-ac57-3a379550f85f\") " pod="openstack/dnsmasq-dns-7878659675-cxgvk" Jan 21 07:16:22 crc kubenswrapper[4893]: I0121 07:16:22.309203 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7878659675-cxgvk" Jan 21 07:16:22 crc kubenswrapper[4893]: I0121 07:16:22.660618 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 07:16:22 crc kubenswrapper[4893]: I0121 07:16:22.666334 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 21 07:16:22 crc kubenswrapper[4893]: I0121 07:16:22.671157 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 21 07:16:22 crc kubenswrapper[4893]: I0121 07:16:22.671451 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 21 07:16:22 crc kubenswrapper[4893]: I0121 07:16:22.671591 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-qn86c" Jan 21 07:16:22 crc kubenswrapper[4893]: I0121 07:16:22.671789 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 21 07:16:22 crc kubenswrapper[4893]: I0121 07:16:22.679536 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 07:16:22 crc kubenswrapper[4893]: I0121 07:16:22.814634 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a81ba3d-1493-421c-b0f8-40a16ed8cec8-config\") pod \"ovsdbserver-sb-0\" (UID: \"3a81ba3d-1493-421c-b0f8-40a16ed8cec8\") " pod="openstack/ovsdbserver-sb-0" Jan 21 07:16:22 crc kubenswrapper[4893]: I0121 07:16:22.814721 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a81ba3d-1493-421c-b0f8-40a16ed8cec8-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"3a81ba3d-1493-421c-b0f8-40a16ed8cec8\") " pod="openstack/ovsdbserver-sb-0" Jan 21 07:16:22 crc kubenswrapper[4893]: I0121 07:16:22.814907 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a81ba3d-1493-421c-b0f8-40a16ed8cec8-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3a81ba3d-1493-421c-b0f8-40a16ed8cec8\") " pod="openstack/ovsdbserver-sb-0" Jan 21 07:16:22 crc kubenswrapper[4893]: I0121 07:16:22.814960 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"3a81ba3d-1493-421c-b0f8-40a16ed8cec8\") " pod="openstack/ovsdbserver-sb-0" Jan 21 07:16:22 crc kubenswrapper[4893]: I0121 07:16:22.815000 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jplt\" (UniqueName: \"kubernetes.io/projected/3a81ba3d-1493-421c-b0f8-40a16ed8cec8-kube-api-access-2jplt\") pod \"ovsdbserver-sb-0\" (UID: \"3a81ba3d-1493-421c-b0f8-40a16ed8cec8\") " pod="openstack/ovsdbserver-sb-0" Jan 21 07:16:22 crc kubenswrapper[4893]: I0121 07:16:22.815269 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3a81ba3d-1493-421c-b0f8-40a16ed8cec8-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"3a81ba3d-1493-421c-b0f8-40a16ed8cec8\") " pod="openstack/ovsdbserver-sb-0" Jan 21 07:16:22 crc kubenswrapper[4893]: I0121 07:16:22.815427 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3a81ba3d-1493-421c-b0f8-40a16ed8cec8-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"3a81ba3d-1493-421c-b0f8-40a16ed8cec8\") " 
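ovsdbserver-sb-0 repeats the same sequence as the nb pod, this time against local-storage03-crc. The entries just below separate the two phases of a local-volume mount: MountVolume.MountDevice (operation_generator.go:580) stages the PV at its device mount path /mnt/openstack/pv03, and MountVolume.SetUp (operation_generator.go:637) then exposes it inside the pod's volume directory. The result of each phase can be inspected directly, assuming shell access on the crc host (the per-pod path below follows the standard kubelet layout, keyed by the pod UID from the log):

    findmnt /mnt/openstack/pv03   # device mount path reported by MountDevice
    # per-pod view created by SetUp
    ls /var/lib/kubelet/pods/3a81ba3d-1493-421c-b0f8-40a16ed8cec8/volumes/kubernetes.io~local-volume/
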
pod="openstack/ovsdbserver-sb-0" Jan 21 07:16:22 crc kubenswrapper[4893]: I0121 07:16:22.815527 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a81ba3d-1493-421c-b0f8-40a16ed8cec8-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3a81ba3d-1493-421c-b0f8-40a16ed8cec8\") " pod="openstack/ovsdbserver-sb-0" Jan 21 07:16:23 crc kubenswrapper[4893]: I0121 07:16:23.226337 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a81ba3d-1493-421c-b0f8-40a16ed8cec8-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3a81ba3d-1493-421c-b0f8-40a16ed8cec8\") " pod="openstack/ovsdbserver-sb-0" Jan 21 07:16:23 crc kubenswrapper[4893]: I0121 07:16:23.226395 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a81ba3d-1493-421c-b0f8-40a16ed8cec8-config\") pod \"ovsdbserver-sb-0\" (UID: \"3a81ba3d-1493-421c-b0f8-40a16ed8cec8\") " pod="openstack/ovsdbserver-sb-0" Jan 21 07:16:23 crc kubenswrapper[4893]: I0121 07:16:23.226437 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a81ba3d-1493-421c-b0f8-40a16ed8cec8-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"3a81ba3d-1493-421c-b0f8-40a16ed8cec8\") " pod="openstack/ovsdbserver-sb-0" Jan 21 07:16:23 crc kubenswrapper[4893]: I0121 07:16:23.226468 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a81ba3d-1493-421c-b0f8-40a16ed8cec8-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3a81ba3d-1493-421c-b0f8-40a16ed8cec8\") " pod="openstack/ovsdbserver-sb-0" Jan 21 07:16:23 crc kubenswrapper[4893]: I0121 07:16:23.226503 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"3a81ba3d-1493-421c-b0f8-40a16ed8cec8\") " pod="openstack/ovsdbserver-sb-0" Jan 21 07:16:23 crc kubenswrapper[4893]: I0121 07:16:23.226527 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jplt\" (UniqueName: \"kubernetes.io/projected/3a81ba3d-1493-421c-b0f8-40a16ed8cec8-kube-api-access-2jplt\") pod \"ovsdbserver-sb-0\" (UID: \"3a81ba3d-1493-421c-b0f8-40a16ed8cec8\") " pod="openstack/ovsdbserver-sb-0" Jan 21 07:16:23 crc kubenswrapper[4893]: I0121 07:16:23.226571 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3a81ba3d-1493-421c-b0f8-40a16ed8cec8-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"3a81ba3d-1493-421c-b0f8-40a16ed8cec8\") " pod="openstack/ovsdbserver-sb-0" Jan 21 07:16:23 crc kubenswrapper[4893]: I0121 07:16:23.226606 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3a81ba3d-1493-421c-b0f8-40a16ed8cec8-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"3a81ba3d-1493-421c-b0f8-40a16ed8cec8\") " pod="openstack/ovsdbserver-sb-0" Jan 21 07:16:23 crc kubenswrapper[4893]: I0121 07:16:23.227649 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/3a81ba3d-1493-421c-b0f8-40a16ed8cec8-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"3a81ba3d-1493-421c-b0f8-40a16ed8cec8\") " pod="openstack/ovsdbserver-sb-0" Jan 21 07:16:23 crc kubenswrapper[4893]: I0121 07:16:23.231132 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a81ba3d-1493-421c-b0f8-40a16ed8cec8-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3a81ba3d-1493-421c-b0f8-40a16ed8cec8\") " pod="openstack/ovsdbserver-sb-0" Jan 21 07:16:23 crc kubenswrapper[4893]: I0121 07:16:23.231380 4893 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"3a81ba3d-1493-421c-b0f8-40a16ed8cec8\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/ovsdbserver-sb-0" Jan 21 07:16:23 crc kubenswrapper[4893]: I0121 07:16:23.238306 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a81ba3d-1493-421c-b0f8-40a16ed8cec8-config\") pod \"ovsdbserver-sb-0\" (UID: \"3a81ba3d-1493-421c-b0f8-40a16ed8cec8\") " pod="openstack/ovsdbserver-sb-0" Jan 21 07:16:23 crc kubenswrapper[4893]: I0121 07:16:23.239208 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3a81ba3d-1493-421c-b0f8-40a16ed8cec8-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"3a81ba3d-1493-421c-b0f8-40a16ed8cec8\") " pod="openstack/ovsdbserver-sb-0" Jan 21 07:16:23 crc kubenswrapper[4893]: I0121 07:16:23.242351 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a81ba3d-1493-421c-b0f8-40a16ed8cec8-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3a81ba3d-1493-421c-b0f8-40a16ed8cec8\") " pod="openstack/ovsdbserver-sb-0" Jan 21 07:16:23 crc kubenswrapper[4893]: I0121 07:16:23.257025 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a81ba3d-1493-421c-b0f8-40a16ed8cec8-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"3a81ba3d-1493-421c-b0f8-40a16ed8cec8\") " pod="openstack/ovsdbserver-sb-0" Jan 21 07:16:23 crc kubenswrapper[4893]: I0121 07:16:23.261853 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"3a81ba3d-1493-421c-b0f8-40a16ed8cec8\") " pod="openstack/ovsdbserver-sb-0" Jan 21 07:16:23 crc kubenswrapper[4893]: I0121 07:16:23.353189 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jplt\" (UniqueName: \"kubernetes.io/projected/3a81ba3d-1493-421c-b0f8-40a16ed8cec8-kube-api-access-2jplt\") pod \"ovsdbserver-sb-0\" (UID: \"3a81ba3d-1493-421c-b0f8-40a16ed8cec8\") " pod="openstack/ovsdbserver-sb-0" Jan 21 07:16:23 crc kubenswrapper[4893]: I0121 07:16:23.587500 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 21 07:16:33 crc kubenswrapper[4893]: E0121 07:16:33.991614 4893 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached@sha256:e47191ba776414b781b3e27b856ab45a03b9480c7dc2b1addb939608794882dc" Jan 21 07:16:33 crc kubenswrapper[4893]: E0121 07:16:33.992520 4893 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached@sha256:e47191ba776414b781b3e27b856ab45a03b9480c7dc2b1addb939608794882dc,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:nfdh5c6hfbh5b5h65ch59h668h7ch695h5bch5d6h78h5dhf8h577h5d8h54fh548h587h678h558h58bhdbhbfh5ffh659h58bh79h556h669h5f8h8dq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-68pbl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(520610a0-97e8-45ed-8020-952d9d4501b1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 07:16:33 crc kubenswrapper[4893]: E0121 07:16:33.993814 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="520610a0-97e8-45ed-8020-952d9d4501b1" Jan 21 07:16:34 crc kubenswrapper[4893]: E0121 07:16:34.451199 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached@sha256:e47191ba776414b781b3e27b856ab45a03b9480c7dc2b1addb939608794882dc\\\"\"" pod="openstack/memcached-0" podUID="520610a0-97e8-45ed-8020-952d9d4501b1" Jan 21 07:16:34 crc kubenswrapper[4893]: E0121 07:16:34.885163 4893 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d" Jan 21 07:16:34 crc kubenswrapper[4893]: E0121 07:16:34.885394 4893 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jsn4l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(89f70f50-3d66-4917-bfe2-1084a55e4eb9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 07:16:34 crc kubenswrapper[4893]: E0121 07:16:34.886705 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="89f70f50-3d66-4917-bfe2-1084a55e4eb9" Jan 21 07:16:35 crc kubenswrapper[4893]: E0121 07:16:35.460117 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d\\\"\"" pod="openstack/rabbitmq-server-0" podUID="89f70f50-3d66-4917-bfe2-1084a55e4eb9" Jan 21 07:16:37 crc kubenswrapper[4893]: E0121 07:16:37.016730 4893 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d" Jan 21 07:16:37 crc kubenswrapper[4893]: E0121 07:16:37.016923 4893 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp 
/tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t5fqr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(fdb40d40-7926-424a-810d-3b6f77e1022f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 07:16:37 crc kubenswrapper[4893]: E0121 07:16:37.019971 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="fdb40d40-7926-424a-810d-3b6f77e1022f" Jan 21 07:16:37 crc kubenswrapper[4893]: E0121 07:16:37.474949 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="fdb40d40-7926-424a-810d-3b6f77e1022f" Jan 21 07:16:44 crc kubenswrapper[4893]: E0121 07:16:44.207078 4893 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
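From 07:16:33 onward the section shifts from mount bookkeeping to image-pull failures: CRI-O returns rpc error: code = Canceled desc = copying config: context canceled for the memcached, rabbitmq and (next entry) mariadb images, kuberuntime_manager.go dumps each affected Container spec as an "Unhandled Error", and pod_workers.go parks the pods in ImagePullBackOff. "context canceled" means the copy was aborted on the client side (commonly a pull timeout or a kubelet-side cancellation), not rejected by the registry, so a reasonable triage is to check what CRI-O saw and retry, for example (a sketch assuming shell access on the node with crictl available):

    crictl images | grep openstack-mariadb                 # anything cached before the cancel?
    journalctl -u crio --since "07:16:00" | grep -i pull   # CRI-O's view of the canceled copies
    oc describe pod openstack-galera-0 -n openstack | sed -n '/Events:/,$p'   # backoff events
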
image="quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13" Jan 21 07:16:44 crc kubenswrapper[4893]: E0121 07:16:44.208059 4893 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zdnhx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(5b37865c-22cd-4288-b47b-ef9ef1f33646): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 07:16:44 crc kubenswrapper[4893]: E0121 07:16:44.209315 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="5b37865c-22cd-4288-b47b-ef9ef1f33646" Jan 21 07:16:44 crc kubenswrapper[4893]: E0121 07:16:44.231810 4893 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13" Jan 21 07:16:44 crc kubenswrapper[4893]: E0121 07:16:44.233125 4893 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q96td,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(5cc7c949-b993-484e-8e07-778a72743679): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 07:16:44 crc kubenswrapper[4893]: E0121 07:16:44.234327 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="5cc7c949-b993-484e-8e07-778a72743679" Jan 21 07:16:44 crc kubenswrapper[4893]: E0121 07:16:44.536857 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13\\\"\"" pod="openstack/openstack-galera-0" podUID="5cc7c949-b993-484e-8e07-778a72743679" Jan 21 07:16:44 crc kubenswrapper[4893]: E0121 07:16:44.537491 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="5b37865c-22cd-4288-b47b-ef9ef1f33646" Jan 21 07:16:45 crc kubenswrapper[4893]: E0121 07:16:45.563780 4893 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-ovn-base@sha256:df45459c449f64cc6471e98c0890ac00dcc77a940f85d4e7e9d9dd52990d65b3" Jan 21 07:16:45 crc kubenswrapper[4893]: E0121 07:16:45.564795 4893 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:ovsdb-server-init,Image:quay.io/podified-antelope-centos9/openstack-ovn-base@sha256:df45459c449f64cc6471e98c0890ac00dcc77a940f85d4e7e9d9dd52990d65b3,Command:[/usr/local/bin/container-scripts/init-ovsdb-server.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n6dh8h5b5h5c8h5bch7fh67ch656h578h594h675h549h57ch574h646h599h6dh5f7h5d7h96h577h55bh698h57fh598hd4hffh56fhbh66h559h56fq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-ovs,ReadOnly:false,MountPath:/etc/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log,ReadOnly:false,MountPath:/var/log/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib,ReadOnly:false,MountPath:/var/lib/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rl7zx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-ovs-zvt96_openstack(78d5f974-5570-4407-8dbe-7471ae98fd50): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 07:16:45 crc kubenswrapper[4893]: E0121 07:16:45.566602 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-ovs-zvt96" podUID="78d5f974-5570-4407-8dbe-7471ae98fd50" Jan 21 07:16:46 crc kubenswrapper[4893]: E0121 07:16:46.553007 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-base@sha256:df45459c449f64cc6471e98c0890ac00dcc77a940f85d4e7e9d9dd52990d65b3\\\"\"" pod="openstack/ovn-controller-ovs-zvt96" podUID="78d5f974-5570-4407-8dbe-7471ae98fd50" Jan 21 07:16:53 crc kubenswrapper[4893]: I0121 07:16:53.681555 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-7s4fm"] Jan 21 07:16:55 
crc kubenswrapper[4893]: E0121 07:16:55.256426 4893 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 21 07:16:55 crc kubenswrapper[4893]: E0121 07:16:55.257032 4893 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9zn5x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-95f5f6995-njqxl_openstack(1b2f2e99-98c7-4e63-9349-c10a839f3310): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 07:16:55 crc kubenswrapper[4893]: E0121 07:16:55.258266 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-95f5f6995-njqxl" podUID="1b2f2e99-98c7-4e63-9349-c10a839f3310" Jan 21 07:16:55 crc kubenswrapper[4893]: E0121 07:16:55.350651 4893 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 21 07:16:55 crc kubenswrapper[4893]: E0121 07:16:55.350990 4893 kuberuntime_manager.go:1274] 
"Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-klknl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-c7cbb8f79-z8g6b_openstack(9ef867f1-d57c-4b79-ba37-6b7714d23e60): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 07:16:55 crc kubenswrapper[4893]: E0121 07:16:55.352799 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-c7cbb8f79-z8g6b" podUID="9ef867f1-d57c-4b79-ba37-6b7714d23e60" Jan 21 07:16:55 crc kubenswrapper[4893]: E0121 07:16:55.519867 4893 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de" Jan 21 07:16:55 crc kubenswrapper[4893]: E0121 07:16:55.520123 4893 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-controller,Image:quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de,Command:[ovn-controller --pidfile unix:/run/openvswitch/db.sock --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key 
--ca-cert=/etc/pki/tls/certs/ovndbca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n6dh8h5b5h5c8h5bch7fh67ch656h578h594h675h549h57ch574h646h599h6dh5f7h5d7h96h577h55bh698h57fh598hd4hffh56fhbh66h559h56fq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-ovn,ReadOnly:false,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log-ovn,ReadOnly:false,MountPath:/var/log/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6rrk5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_liveness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_readiness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/share/ovn/scripts/ovn-ctl stop_controller],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-dfvzw_openstack(80680178-a1d2-4135-8949-881dc7ac92ea): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 07:16:55 crc kubenswrapper[4893]: E0121 07:16:55.521352 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"ovn-controller\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-dfvzw" podUID="80680178-a1d2-4135-8949-881dc7ac92ea" Jan 21 07:16:55 crc kubenswrapper[4893]: E0121 07:16:55.771633 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33\\\"\"" pod="openstack/dnsmasq-dns-95f5f6995-njqxl" podUID="1b2f2e99-98c7-4e63-9349-c10a839f3310" Jan 21 07:16:55 crc kubenswrapper[4893]: E0121 07:16:55.774108 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de\\\"\"" pod="openstack/ovn-controller-dfvzw" podUID="80680178-a1d2-4135-8949-881dc7ac92ea" Jan 21 07:16:55 crc kubenswrapper[4893]: E0121 07:16:55.953931 4893 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 21 07:16:55 crc kubenswrapper[4893]: E0121 07:16:55.954216 4893 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hxvwj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-84bb9d8bd9-hcpfl_openstack(576ba015-bb6a-4108-8e1b-4f9cbd0c4d9a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 07:16:55 crc kubenswrapper[4893]: E0121 07:16:55.955367 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-84bb9d8bd9-hcpfl" podUID="576ba015-bb6a-4108-8e1b-4f9cbd0c4d9a" Jan 21 07:16:55 crc kubenswrapper[4893]: E0121 07:16:55.969723 4893 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 21 07:16:55 crc kubenswrapper[4893]: E0121 07:16:55.969902 4893 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m2q76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5f854695bc-zmlnk_openstack(0d093843-0a0c-4545-b04f-c473795b0ccd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 07:16:55 crc kubenswrapper[4893]: E0121 07:16:55.972837 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5f854695bc-zmlnk" podUID="0d093843-0a0c-4545-b04f-c473795b0ccd" Jan 21 07:16:56 crc kubenswrapper[4893]: I0121 07:16:56.038742 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7878659675-cxgvk"] Jan 21 07:16:56 crc kubenswrapper[4893]: I0121 07:16:56.086937 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 07:16:56 crc kubenswrapper[4893]: I0121 07:16:56.168321 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c7cbb8f79-z8g6b" Jan 21 07:16:56 crc kubenswrapper[4893]: W0121 07:16:56.192043 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a81ba3d_1493_421c_b0f8_40a16ed8cec8.slice/crio-a2924178b99efbadd09b77faff24ee66e837b76bb0586668fa8de3acd0ebb6a4 WatchSource:0}: Error finding container a2924178b99efbadd09b77faff24ee66e837b76bb0586668fa8de3acd0ebb6a4: Status 404 returned error can't find the container with id a2924178b99efbadd09b77faff24ee66e837b76bb0586668fa8de3acd0ebb6a4 Jan 21 07:16:56 crc kubenswrapper[4893]: I0121 07:16:56.275912 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klknl\" (UniqueName: \"kubernetes.io/projected/9ef867f1-d57c-4b79-ba37-6b7714d23e60-kube-api-access-klknl\") pod \"9ef867f1-d57c-4b79-ba37-6b7714d23e60\" (UID: \"9ef867f1-d57c-4b79-ba37-6b7714d23e60\") " Jan 21 07:16:56 crc kubenswrapper[4893]: I0121 07:16:56.276046 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ef867f1-d57c-4b79-ba37-6b7714d23e60-config\") pod \"9ef867f1-d57c-4b79-ba37-6b7714d23e60\" (UID: \"9ef867f1-d57c-4b79-ba37-6b7714d23e60\") " Jan 21 07:16:56 crc kubenswrapper[4893]: I0121 07:16:56.276140 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ef867f1-d57c-4b79-ba37-6b7714d23e60-dns-svc\") pod \"9ef867f1-d57c-4b79-ba37-6b7714d23e60\" (UID: \"9ef867f1-d57c-4b79-ba37-6b7714d23e60\") " Jan 21 07:16:56 crc kubenswrapper[4893]: I0121 07:16:56.276698 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ef867f1-d57c-4b79-ba37-6b7714d23e60-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9ef867f1-d57c-4b79-ba37-6b7714d23e60" (UID: "9ef867f1-d57c-4b79-ba37-6b7714d23e60"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:16:56 crc kubenswrapper[4893]: I0121 07:16:56.276657 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ef867f1-d57c-4b79-ba37-6b7714d23e60-config" (OuterVolumeSpecName: "config") pod "9ef867f1-d57c-4b79-ba37-6b7714d23e60" (UID: "9ef867f1-d57c-4b79-ba37-6b7714d23e60"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:16:56 crc kubenswrapper[4893]: I0121 07:16:56.280465 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ef867f1-d57c-4b79-ba37-6b7714d23e60-kube-api-access-klknl" (OuterVolumeSpecName: "kube-api-access-klknl") pod "9ef867f1-d57c-4b79-ba37-6b7714d23e60" (UID: "9ef867f1-d57c-4b79-ba37-6b7714d23e60"). InnerVolumeSpecName "kube-api-access-klknl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:16:56 crc kubenswrapper[4893]: E0121 07:16:56.313343 4893 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb" Jan 21 07:16:56 crc kubenswrapper[4893]: E0121 07:16:56.313412 4893 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb" Jan 21 07:16:56 crc kubenswrapper[4893]: E0121 07:16:56.313590 4893 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zwhz4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(299c3f15-e0c0-4017-ac39-e3a2f0764928): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" logger="UnhandledError" Jan 21 07:16:56 crc kubenswrapper[4893]: E0121 07:16:56.314998 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="299c3f15-e0c0-4017-ac39-e3a2f0764928" 
Jan 21 07:16:56 crc kubenswrapper[4893]: I0121 07:16:56.377729 4893 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ef867f1-d57c-4b79-ba37-6b7714d23e60-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 07:16:56 crc kubenswrapper[4893]: I0121 07:16:56.377762 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-klknl\" (UniqueName: \"kubernetes.io/projected/9ef867f1-d57c-4b79-ba37-6b7714d23e60-kube-api-access-klknl\") on node \"crc\" DevicePath \"\"" Jan 21 07:16:56 crc kubenswrapper[4893]: I0121 07:16:56.377773 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ef867f1-d57c-4b79-ba37-6b7714d23e60-config\") on node \"crc\" DevicePath \"\"" Jan 21 07:16:56 crc kubenswrapper[4893]: I0121 07:16:56.783027 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-7s4fm" event={"ID":"12c05c26-e0c2-4516-9fa6-8dc8779d1430","Type":"ContainerStarted","Data":"a0c883ea147207c7acdcc368c0b922ec3e2d5ab7263c63e60558b14755d7b918"} Jan 21 07:16:56 crc kubenswrapper[4893]: I0121 07:16:56.785453 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7878659675-cxgvk" event={"ID":"07b9c0c4-505d-4af3-ac57-3a379550f85f","Type":"ContainerStarted","Data":"d1157acd868aeb7fb1e5e4c7271a65da4af57536caa5ec38d5b5f00dc5a8ffa8"} Jan 21 07:16:56 crc kubenswrapper[4893]: I0121 07:16:56.787415 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c7cbb8f79-z8g6b" event={"ID":"9ef867f1-d57c-4b79-ba37-6b7714d23e60","Type":"ContainerDied","Data":"6f1d071321a5ba71fa3ab8992c4de6b9c7d9e6ae93a5c39dc06012c89dc9d28a"} Jan 21 07:16:56 crc kubenswrapper[4893]: I0121 07:16:56.787520 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c7cbb8f79-z8g6b" Jan 21 07:16:56 crc kubenswrapper[4893]: I0121 07:16:56.790280 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"3a81ba3d-1493-421c-b0f8-40a16ed8cec8","Type":"ContainerStarted","Data":"a2924178b99efbadd09b77faff24ee66e837b76bb0586668fa8de3acd0ebb6a4"} Jan 21 07:16:56 crc kubenswrapper[4893]: E0121 07:16:56.792348 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb\\\"\"" pod="openstack/kube-state-metrics-0" podUID="299c3f15-e0c0-4017-ac39-e3a2f0764928" Jan 21 07:16:56 crc kubenswrapper[4893]: I0121 07:16:56.980094 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c7cbb8f79-z8g6b"] Jan 21 07:16:56 crc kubenswrapper[4893]: I0121 07:16:56.987046 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-c7cbb8f79-z8g6b"] Jan 21 07:16:57 crc kubenswrapper[4893]: I0121 07:16:57.174728 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-zmlnk" Jan 21 07:16:57 crc kubenswrapper[4893]: I0121 07:16:57.187494 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-hcpfl" Jan 21 07:16:57 crc kubenswrapper[4893]: I0121 07:16:57.231773 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2q76\" (UniqueName: \"kubernetes.io/projected/0d093843-0a0c-4545-b04f-c473795b0ccd-kube-api-access-m2q76\") pod \"0d093843-0a0c-4545-b04f-c473795b0ccd\" (UID: \"0d093843-0a0c-4545-b04f-c473795b0ccd\") " Jan 21 07:16:57 crc kubenswrapper[4893]: I0121 07:16:57.233550 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxvwj\" (UniqueName: \"kubernetes.io/projected/576ba015-bb6a-4108-8e1b-4f9cbd0c4d9a-kube-api-access-hxvwj\") pod \"576ba015-bb6a-4108-8e1b-4f9cbd0c4d9a\" (UID: \"576ba015-bb6a-4108-8e1b-4f9cbd0c4d9a\") " Jan 21 07:16:57 crc kubenswrapper[4893]: I0121 07:16:57.233591 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d093843-0a0c-4545-b04f-c473795b0ccd-dns-svc\") pod \"0d093843-0a0c-4545-b04f-c473795b0ccd\" (UID: \"0d093843-0a0c-4545-b04f-c473795b0ccd\") " Jan 21 07:16:57 crc kubenswrapper[4893]: I0121 07:16:57.234153 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d093843-0a0c-4545-b04f-c473795b0ccd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0d093843-0a0c-4545-b04f-c473795b0ccd" (UID: "0d093843-0a0c-4545-b04f-c473795b0ccd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:16:57 crc kubenswrapper[4893]: I0121 07:16:57.234295 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d093843-0a0c-4545-b04f-c473795b0ccd-config\") pod \"0d093843-0a0c-4545-b04f-c473795b0ccd\" (UID: \"0d093843-0a0c-4545-b04f-c473795b0ccd\") " Jan 21 07:16:57 crc kubenswrapper[4893]: I0121 07:16:57.234362 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/576ba015-bb6a-4108-8e1b-4f9cbd0c4d9a-config\") pod \"576ba015-bb6a-4108-8e1b-4f9cbd0c4d9a\" (UID: \"576ba015-bb6a-4108-8e1b-4f9cbd0c4d9a\") " Jan 21 07:16:57 crc kubenswrapper[4893]: I0121 07:16:57.235478 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d093843-0a0c-4545-b04f-c473795b0ccd-config" (OuterVolumeSpecName: "config") pod "0d093843-0a0c-4545-b04f-c473795b0ccd" (UID: "0d093843-0a0c-4545-b04f-c473795b0ccd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:16:57 crc kubenswrapper[4893]: I0121 07:16:57.235614 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/576ba015-bb6a-4108-8e1b-4f9cbd0c4d9a-config" (OuterVolumeSpecName: "config") pod "576ba015-bb6a-4108-8e1b-4f9cbd0c4d9a" (UID: "576ba015-bb6a-4108-8e1b-4f9cbd0c4d9a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:16:57 crc kubenswrapper[4893]: I0121 07:16:57.235800 4893 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d093843-0a0c-4545-b04f-c473795b0ccd-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 07:16:57 crc kubenswrapper[4893]: I0121 07:16:57.235820 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d093843-0a0c-4545-b04f-c473795b0ccd-config\") on node \"crc\" DevicePath \"\"" Jan 21 07:16:57 crc kubenswrapper[4893]: I0121 07:16:57.235833 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/576ba015-bb6a-4108-8e1b-4f9cbd0c4d9a-config\") on node \"crc\" DevicePath \"\"" Jan 21 07:16:57 crc kubenswrapper[4893]: I0121 07:16:57.238790 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d093843-0a0c-4545-b04f-c473795b0ccd-kube-api-access-m2q76" (OuterVolumeSpecName: "kube-api-access-m2q76") pod "0d093843-0a0c-4545-b04f-c473795b0ccd" (UID: "0d093843-0a0c-4545-b04f-c473795b0ccd"). InnerVolumeSpecName "kube-api-access-m2q76". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:16:57 crc kubenswrapper[4893]: I0121 07:16:57.239015 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/576ba015-bb6a-4108-8e1b-4f9cbd0c4d9a-kube-api-access-hxvwj" (OuterVolumeSpecName: "kube-api-access-hxvwj") pod "576ba015-bb6a-4108-8e1b-4f9cbd0c4d9a" (UID: "576ba015-bb6a-4108-8e1b-4f9cbd0c4d9a"). InnerVolumeSpecName "kube-api-access-hxvwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:16:57 crc kubenswrapper[4893]: I0121 07:16:57.337834 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2q76\" (UniqueName: \"kubernetes.io/projected/0d093843-0a0c-4545-b04f-c473795b0ccd-kube-api-access-m2q76\") on node \"crc\" DevicePath \"\"" Jan 21 07:16:57 crc kubenswrapper[4893]: I0121 07:16:57.337897 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxvwj\" (UniqueName: \"kubernetes.io/projected/576ba015-bb6a-4108-8e1b-4f9cbd0c4d9a-kube-api-access-hxvwj\") on node \"crc\" DevicePath \"\"" Jan 21 07:16:57 crc kubenswrapper[4893]: I0121 07:16:57.596614 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ef867f1-d57c-4b79-ba37-6b7714d23e60" path="/var/lib/kubelet/pods/9ef867f1-d57c-4b79-ba37-6b7714d23e60/volumes" Jan 21 07:16:57 crc kubenswrapper[4893]: I0121 07:16:57.798499 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-hcpfl" Jan 21 07:16:57 crc kubenswrapper[4893]: I0121 07:16:57.798488 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bb9d8bd9-hcpfl" event={"ID":"576ba015-bb6a-4108-8e1b-4f9cbd0c4d9a","Type":"ContainerDied","Data":"aa0290d9d01d99d2a170dbc14256f0e6b75695111499ec3af748b9085f7bd0dd"} Jan 21 07:16:57 crc kubenswrapper[4893]: I0121 07:16:57.811914 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"5cc7c949-b993-484e-8e07-778a72743679","Type":"ContainerStarted","Data":"3c9c4b7ec23de6d4db312920908e7cffbafd4003f59ab08b55326b661892a4bc"} Jan 21 07:16:57 crc kubenswrapper[4893]: I0121 07:16:57.814234 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-zmlnk" event={"ID":"0d093843-0a0c-4545-b04f-c473795b0ccd","Type":"ContainerDied","Data":"69d87d9907cd6b9871c4ced60ce830441fa257f28c4b3754f01f257371ad05eb"} Jan 21 07:16:57 crc kubenswrapper[4893]: I0121 07:16:57.814243 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-zmlnk" Jan 21 07:16:57 crc kubenswrapper[4893]: I0121 07:16:57.816691 4893 generic.go:334] "Generic (PLEG): container finished" podID="07b9c0c4-505d-4af3-ac57-3a379550f85f" containerID="cef581f00464d9b6d3724086b93e9a9f075803c47d55f26fd69c7baef820cafa" exitCode=0 Jan 21 07:16:57 crc kubenswrapper[4893]: I0121 07:16:57.816766 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7878659675-cxgvk" event={"ID":"07b9c0c4-505d-4af3-ac57-3a379550f85f","Type":"ContainerDied","Data":"cef581f00464d9b6d3724086b93e9a9f075803c47d55f26fd69c7baef820cafa"} Jan 21 07:16:57 crc kubenswrapper[4893]: I0121 07:16:57.819225 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fdb40d40-7926-424a-810d-3b6f77e1022f","Type":"ContainerStarted","Data":"afb234ccf5a3b8f9be3af40af000806304b977b9072b8c61b82cc2c703dc8d0b"} Jan 21 07:16:57 crc kubenswrapper[4893]: I0121 07:16:57.824198 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5b37865c-22cd-4288-b47b-ef9ef1f33646","Type":"ContainerStarted","Data":"e8699a5e2783d56129ce6db61a403cae9a45f49d20cbf1c4d665f290331a8241"} Jan 21 07:16:57 crc kubenswrapper[4893]: I0121 07:16:57.828224 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"68b3d1f1-4c78-4a98-afcb-a2db1753d676","Type":"ContainerStarted","Data":"af5bb0f25a6013996cf95b397b7fa8ce33547b30c013d3efe237da97c44f553d"} Jan 21 07:16:57 crc kubenswrapper[4893]: I0121 07:16:57.831298 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"89f70f50-3d66-4917-bfe2-1084a55e4eb9","Type":"ContainerStarted","Data":"8bc18f5c5b3a7199e36a32b48eb54e74da7f96ba56d7cedfcbbe95f361423f06"} Jan 21 07:16:57 crc kubenswrapper[4893]: I0121 07:16:57.848171 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"520610a0-97e8-45ed-8020-952d9d4501b1","Type":"ContainerStarted","Data":"903c82795e7a99adb1f118a16af4579fab0871c6da09160b60ba62ce90ba5b7e"} Jan 21 07:16:57 crc kubenswrapper[4893]: I0121 07:16:57.848514 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 21 07:16:57 crc kubenswrapper[4893]: I0121 07:16:57.904567 4893 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/dnsmasq-dns-5f854695bc-zmlnk"] Jan 21 07:16:57 crc kubenswrapper[4893]: I0121 07:16:57.913522 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-zmlnk"] Jan 21 07:16:57 crc kubenswrapper[4893]: I0121 07:16:57.955149 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-hcpfl"] Jan 21 07:16:57 crc kubenswrapper[4893]: I0121 07:16:57.972248 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-hcpfl"] Jan 21 07:16:58 crc kubenswrapper[4893]: I0121 07:16:58.048899 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=3.7088583010000002 podStartE2EDuration="45.048856827s" podCreationTimestamp="2026-01-21 07:16:13 +0000 UTC" firstStartedPulling="2026-01-21 07:16:14.745264506 +0000 UTC m=+1315.975610408" lastFinishedPulling="2026-01-21 07:16:56.085263032 +0000 UTC m=+1357.315608934" observedRunningTime="2026-01-21 07:16:58.030784486 +0000 UTC m=+1359.261130388" watchObservedRunningTime="2026-01-21 07:16:58.048856827 +0000 UTC m=+1359.279202729" Jan 21 07:16:58 crc kubenswrapper[4893]: I0121 07:16:58.859885 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"3a81ba3d-1493-421c-b0f8-40a16ed8cec8","Type":"ContainerStarted","Data":"be6daeff1c75c5fb82a0f9b3bd2408b907b68938d3de911977a48fe1748fdb71"} Jan 21 07:16:58 crc kubenswrapper[4893]: I0121 07:16:58.864106 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7878659675-cxgvk" event={"ID":"07b9c0c4-505d-4af3-ac57-3a379550f85f","Type":"ContainerStarted","Data":"fd51858a23e46a3f6ec2c98ea063491b155d092cb6edbea569d1d45b5b29c489"} Jan 21 07:16:58 crc kubenswrapper[4893]: I0121 07:16:58.889196 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7878659675-cxgvk" podStartSLOduration=37.345760388 podStartE2EDuration="37.889174111s" podCreationTimestamp="2026-01-21 07:16:21 +0000 UTC" firstStartedPulling="2026-01-21 07:16:56.091527443 +0000 UTC m=+1357.321873345" lastFinishedPulling="2026-01-21 07:16:56.634941166 +0000 UTC m=+1357.865287068" observedRunningTime="2026-01-21 07:16:58.881987143 +0000 UTC m=+1360.112333065" watchObservedRunningTime="2026-01-21 07:16:58.889174111 +0000 UTC m=+1360.119520013" Jan 21 07:16:59 crc kubenswrapper[4893]: I0121 07:16:59.600219 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d093843-0a0c-4545-b04f-c473795b0ccd" path="/var/lib/kubelet/pods/0d093843-0a0c-4545-b04f-c473795b0ccd/volumes" Jan 21 07:16:59 crc kubenswrapper[4893]: I0121 07:16:59.600630 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="576ba015-bb6a-4108-8e1b-4f9cbd0c4d9a" path="/var/lib/kubelet/pods/576ba015-bb6a-4108-8e1b-4f9cbd0c4d9a/volumes" Jan 21 07:16:59 crc kubenswrapper[4893]: I0121 07:16:59.870107 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7878659675-cxgvk" Jan 21 07:17:00 crc kubenswrapper[4893]: I0121 07:17:00.902533 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-7s4fm" event={"ID":"12c05c26-e0c2-4516-9fa6-8dc8779d1430","Type":"ContainerStarted","Data":"ec859e2f4b54ea1bd111ee0644b59235a785cef3079afdc21c159f2f62d9e0d3"} Jan 21 07:17:00 crc kubenswrapper[4893]: I0121 07:17:00.904450 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovsdbserver-nb-0" event={"ID":"68b3d1f1-4c78-4a98-afcb-a2db1753d676","Type":"ContainerStarted","Data":"48c749ca430629f3f11cef033f3e9982760ac3bbfd06d3297b7dfe8227939b80"} Jan 21 07:17:00 crc kubenswrapper[4893]: I0121 07:17:00.906348 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"3a81ba3d-1493-421c-b0f8-40a16ed8cec8","Type":"ContainerStarted","Data":"891656a23e552f4191c271c0656f4e5f186283f5a0a5cbf39b3a5d9a84777610"} Jan 21 07:17:00 crc kubenswrapper[4893]: I0121 07:17:00.944296 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-7s4fm" podStartSLOduration=35.95244793 podStartE2EDuration="39.944271635s" podCreationTimestamp="2026-01-21 07:16:21 +0000 UTC" firstStartedPulling="2026-01-21 07:16:55.909800741 +0000 UTC m=+1357.140146643" lastFinishedPulling="2026-01-21 07:16:59.901624446 +0000 UTC m=+1361.131970348" observedRunningTime="2026-01-21 07:17:00.928538159 +0000 UTC m=+1362.158884061" watchObservedRunningTime="2026-01-21 07:17:00.944271635 +0000 UTC m=+1362.174617537" Jan 21 07:17:00 crc kubenswrapper[4893]: I0121 07:17:00.977153 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=36.268138439 podStartE2EDuration="39.977134586s" podCreationTimestamp="2026-01-21 07:16:21 +0000 UTC" firstStartedPulling="2026-01-21 07:16:56.196249683 +0000 UTC m=+1357.426595585" lastFinishedPulling="2026-01-21 07:16:59.90524583 +0000 UTC m=+1361.135591732" observedRunningTime="2026-01-21 07:17:00.970066952 +0000 UTC m=+1362.200412854" watchObservedRunningTime="2026-01-21 07:17:00.977134586 +0000 UTC m=+1362.207480488" Jan 21 07:17:00 crc kubenswrapper[4893]: I0121 07:17:00.999506 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=5.01260345 podStartE2EDuration="43.999484074s" podCreationTimestamp="2026-01-21 07:16:17 +0000 UTC" firstStartedPulling="2026-01-21 07:16:20.91504474 +0000 UTC m=+1322.145390642" lastFinishedPulling="2026-01-21 07:16:59.901925364 +0000 UTC m=+1361.132271266" observedRunningTime="2026-01-21 07:17:00.993962084 +0000 UTC m=+1362.224307986" watchObservedRunningTime="2026-01-21 07:17:00.999484074 +0000 UTC m=+1362.229829976" Jan 21 07:17:01 crc kubenswrapper[4893]: E0121 07:17:01.112320 4893 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b37865c_22cd_4288_b47b_ef9ef1f33646.slice/crio-e8699a5e2783d56129ce6db61a403cae9a45f49d20cbf1c4d665f290331a8241.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5cc7c949_b993_484e_8e07_778a72743679.slice/crio-3c9c4b7ec23de6d4db312920908e7cffbafd4003f59ab08b55326b661892a4bc.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5cc7c949_b993_484e_8e07_778a72743679.slice/crio-conmon-3c9c4b7ec23de6d4db312920908e7cffbafd4003f59ab08b55326b661892a4bc.scope\": RecentStats: unable to find data in memory cache]" Jan 21 07:17:01 crc kubenswrapper[4893]: I0121 07:17:01.340793 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 21 07:17:01 crc kubenswrapper[4893]: I0121 07:17:01.408756 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openstack/ovsdbserver-nb-0" Jan 21 07:17:01 crc kubenswrapper[4893]: I0121 07:17:01.431008 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-njqxl"] Jan 21 07:17:01 crc kubenswrapper[4893]: I0121 07:17:01.531343 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-5x46j"] Jan 21 07:17:01 crc kubenswrapper[4893]: I0121 07:17:01.535436 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586b989cdc-5x46j" Jan 21 07:17:01 crc kubenswrapper[4893]: I0121 07:17:01.537733 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 21 07:17:01 crc kubenswrapper[4893]: I0121 07:17:01.549286 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-5x46j"] Jan 21 07:17:01 crc kubenswrapper[4893]: I0121 07:17:01.723417 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a744009-f5d7-4d59-bb5b-1668a715e9d0-ovsdbserver-sb\") pod \"dnsmasq-dns-586b989cdc-5x46j\" (UID: \"8a744009-f5d7-4d59-bb5b-1668a715e9d0\") " pod="openstack/dnsmasq-dns-586b989cdc-5x46j" Jan 21 07:17:01 crc kubenswrapper[4893]: I0121 07:17:01.723493 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqlxj\" (UniqueName: \"kubernetes.io/projected/8a744009-f5d7-4d59-bb5b-1668a715e9d0-kube-api-access-mqlxj\") pod \"dnsmasq-dns-586b989cdc-5x46j\" (UID: \"8a744009-f5d7-4d59-bb5b-1668a715e9d0\") " pod="openstack/dnsmasq-dns-586b989cdc-5x46j" Jan 21 07:17:01 crc kubenswrapper[4893]: I0121 07:17:01.723573 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a744009-f5d7-4d59-bb5b-1668a715e9d0-dns-svc\") pod \"dnsmasq-dns-586b989cdc-5x46j\" (UID: \"8a744009-f5d7-4d59-bb5b-1668a715e9d0\") " pod="openstack/dnsmasq-dns-586b989cdc-5x46j" Jan 21 07:17:01 crc kubenswrapper[4893]: I0121 07:17:01.723713 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a744009-f5d7-4d59-bb5b-1668a715e9d0-config\") pod \"dnsmasq-dns-586b989cdc-5x46j\" (UID: \"8a744009-f5d7-4d59-bb5b-1668a715e9d0\") " pod="openstack/dnsmasq-dns-586b989cdc-5x46j" Jan 21 07:17:01 crc kubenswrapper[4893]: I0121 07:17:01.723735 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a744009-f5d7-4d59-bb5b-1668a715e9d0-ovsdbserver-nb\") pod \"dnsmasq-dns-586b989cdc-5x46j\" (UID: \"8a744009-f5d7-4d59-bb5b-1668a715e9d0\") " pod="openstack/dnsmasq-dns-586b989cdc-5x46j" Jan 21 07:17:01 crc kubenswrapper[4893]: I0121 07:17:01.781364 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-njqxl" Jan 21 07:17:01 crc kubenswrapper[4893]: I0121 07:17:01.867810 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zn5x\" (UniqueName: \"kubernetes.io/projected/1b2f2e99-98c7-4e63-9349-c10a839f3310-kube-api-access-9zn5x\") pod \"1b2f2e99-98c7-4e63-9349-c10a839f3310\" (UID: \"1b2f2e99-98c7-4e63-9349-c10a839f3310\") " Jan 21 07:17:01 crc kubenswrapper[4893]: I0121 07:17:01.867989 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b2f2e99-98c7-4e63-9349-c10a839f3310-config\") pod \"1b2f2e99-98c7-4e63-9349-c10a839f3310\" (UID: \"1b2f2e99-98c7-4e63-9349-c10a839f3310\") " Jan 21 07:17:01 crc kubenswrapper[4893]: I0121 07:17:01.868074 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b2f2e99-98c7-4e63-9349-c10a839f3310-dns-svc\") pod \"1b2f2e99-98c7-4e63-9349-c10a839f3310\" (UID: \"1b2f2e99-98c7-4e63-9349-c10a839f3310\") " Jan 21 07:17:01 crc kubenswrapper[4893]: I0121 07:17:01.868334 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a744009-f5d7-4d59-bb5b-1668a715e9d0-ovsdbserver-sb\") pod \"dnsmasq-dns-586b989cdc-5x46j\" (UID: \"8a744009-f5d7-4d59-bb5b-1668a715e9d0\") " pod="openstack/dnsmasq-dns-586b989cdc-5x46j" Jan 21 07:17:01 crc kubenswrapper[4893]: I0121 07:17:01.868393 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqlxj\" (UniqueName: \"kubernetes.io/projected/8a744009-f5d7-4d59-bb5b-1668a715e9d0-kube-api-access-mqlxj\") pod \"dnsmasq-dns-586b989cdc-5x46j\" (UID: \"8a744009-f5d7-4d59-bb5b-1668a715e9d0\") " pod="openstack/dnsmasq-dns-586b989cdc-5x46j" Jan 21 07:17:01 crc kubenswrapper[4893]: I0121 07:17:01.868426 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a744009-f5d7-4d59-bb5b-1668a715e9d0-dns-svc\") pod \"dnsmasq-dns-586b989cdc-5x46j\" (UID: \"8a744009-f5d7-4d59-bb5b-1668a715e9d0\") " pod="openstack/dnsmasq-dns-586b989cdc-5x46j" Jan 21 07:17:01 crc kubenswrapper[4893]: I0121 07:17:01.868486 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a744009-f5d7-4d59-bb5b-1668a715e9d0-config\") pod \"dnsmasq-dns-586b989cdc-5x46j\" (UID: \"8a744009-f5d7-4d59-bb5b-1668a715e9d0\") " pod="openstack/dnsmasq-dns-586b989cdc-5x46j" Jan 21 07:17:01 crc kubenswrapper[4893]: I0121 07:17:01.868507 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a744009-f5d7-4d59-bb5b-1668a715e9d0-ovsdbserver-nb\") pod \"dnsmasq-dns-586b989cdc-5x46j\" (UID: \"8a744009-f5d7-4d59-bb5b-1668a715e9d0\") " pod="openstack/dnsmasq-dns-586b989cdc-5x46j" Jan 21 07:17:01 crc kubenswrapper[4893]: I0121 07:17:01.868514 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b2f2e99-98c7-4e63-9349-c10a839f3310-config" (OuterVolumeSpecName: "config") pod "1b2f2e99-98c7-4e63-9349-c10a839f3310" (UID: "1b2f2e99-98c7-4e63-9349-c10a839f3310"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:17:01 crc kubenswrapper[4893]: I0121 07:17:01.868646 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b2f2e99-98c7-4e63-9349-c10a839f3310-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1b2f2e99-98c7-4e63-9349-c10a839f3310" (UID: "1b2f2e99-98c7-4e63-9349-c10a839f3310"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:17:01 crc kubenswrapper[4893]: I0121 07:17:01.869389 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a744009-f5d7-4d59-bb5b-1668a715e9d0-dns-svc\") pod \"dnsmasq-dns-586b989cdc-5x46j\" (UID: \"8a744009-f5d7-4d59-bb5b-1668a715e9d0\") " pod="openstack/dnsmasq-dns-586b989cdc-5x46j" Jan 21 07:17:01 crc kubenswrapper[4893]: I0121 07:17:01.869442 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a744009-f5d7-4d59-bb5b-1668a715e9d0-ovsdbserver-sb\") pod \"dnsmasq-dns-586b989cdc-5x46j\" (UID: \"8a744009-f5d7-4d59-bb5b-1668a715e9d0\") " pod="openstack/dnsmasq-dns-586b989cdc-5x46j" Jan 21 07:17:01 crc kubenswrapper[4893]: I0121 07:17:01.869697 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a744009-f5d7-4d59-bb5b-1668a715e9d0-config\") pod \"dnsmasq-dns-586b989cdc-5x46j\" (UID: \"8a744009-f5d7-4d59-bb5b-1668a715e9d0\") " pod="openstack/dnsmasq-dns-586b989cdc-5x46j" Jan 21 07:17:01 crc kubenswrapper[4893]: I0121 07:17:01.869717 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a744009-f5d7-4d59-bb5b-1668a715e9d0-ovsdbserver-nb\") pod \"dnsmasq-dns-586b989cdc-5x46j\" (UID: \"8a744009-f5d7-4d59-bb5b-1668a715e9d0\") " pod="openstack/dnsmasq-dns-586b989cdc-5x46j" Jan 21 07:17:01 crc kubenswrapper[4893]: I0121 07:17:01.877916 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b2f2e99-98c7-4e63-9349-c10a839f3310-kube-api-access-9zn5x" (OuterVolumeSpecName: "kube-api-access-9zn5x") pod "1b2f2e99-98c7-4e63-9349-c10a839f3310" (UID: "1b2f2e99-98c7-4e63-9349-c10a839f3310"). InnerVolumeSpecName "kube-api-access-9zn5x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:17:01 crc kubenswrapper[4893]: I0121 07:17:01.889001 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqlxj\" (UniqueName: \"kubernetes.io/projected/8a744009-f5d7-4d59-bb5b-1668a715e9d0-kube-api-access-mqlxj\") pod \"dnsmasq-dns-586b989cdc-5x46j\" (UID: \"8a744009-f5d7-4d59-bb5b-1668a715e9d0\") " pod="openstack/dnsmasq-dns-586b989cdc-5x46j" Jan 21 07:17:01 crc kubenswrapper[4893]: I0121 07:17:01.915483 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zvt96" event={"ID":"78d5f974-5570-4407-8dbe-7471ae98fd50","Type":"ContainerStarted","Data":"0902eba70db7f26491e33d8c0ee5f46c52338fff31166adc692390c1065669c6"} Jan 21 07:17:01 crc kubenswrapper[4893]: I0121 07:17:01.917262 4893 generic.go:334] "Generic (PLEG): container finished" podID="5b37865c-22cd-4288-b47b-ef9ef1f33646" containerID="e8699a5e2783d56129ce6db61a403cae9a45f49d20cbf1c4d665f290331a8241" exitCode=0 Jan 21 07:17:01 crc kubenswrapper[4893]: I0121 07:17:01.917354 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5b37865c-22cd-4288-b47b-ef9ef1f33646","Type":"ContainerDied","Data":"e8699a5e2783d56129ce6db61a403cae9a45f49d20cbf1c4d665f290331a8241"} Jan 21 07:17:01 crc kubenswrapper[4893]: I0121 07:17:01.919467 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-njqxl" event={"ID":"1b2f2e99-98c7-4e63-9349-c10a839f3310","Type":"ContainerDied","Data":"3d1888b71a21f9cd19c5b0265ba26404124c11e89f3564f0db6a862cc9c3be68"} Jan 21 07:17:01 crc kubenswrapper[4893]: I0121 07:17:01.919486 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-njqxl" Jan 21 07:17:01 crc kubenswrapper[4893]: I0121 07:17:01.921169 4893 generic.go:334] "Generic (PLEG): container finished" podID="5cc7c949-b993-484e-8e07-778a72743679" containerID="3c9c4b7ec23de6d4db312920908e7cffbafd4003f59ab08b55326b661892a4bc" exitCode=0 Jan 21 07:17:01 crc kubenswrapper[4893]: I0121 07:17:01.921292 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"5cc7c949-b993-484e-8e07-778a72743679","Type":"ContainerDied","Data":"3c9c4b7ec23de6d4db312920908e7cffbafd4003f59ab08b55326b661892a4bc"} Jan 21 07:17:01 crc kubenswrapper[4893]: I0121 07:17:01.921424 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 21 07:17:01 crc kubenswrapper[4893]: I0121 07:17:01.971508 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b2f2e99-98c7-4e63-9349-c10a839f3310-config\") on node \"crc\" DevicePath \"\"" Jan 21 07:17:01 crc kubenswrapper[4893]: I0121 07:17:01.971544 4893 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b2f2e99-98c7-4e63-9349-c10a839f3310-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 07:17:01 crc kubenswrapper[4893]: I0121 07:17:01.971561 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zn5x\" (UniqueName: \"kubernetes.io/projected/1b2f2e99-98c7-4e63-9349-c10a839f3310-kube-api-access-9zn5x\") on node \"crc\" DevicePath \"\"" Jan 21 07:17:02 crc kubenswrapper[4893]: I0121 07:17:02.038882 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-njqxl"] Jan 21 07:17:02 crc kubenswrapper[4893]: I0121 
07:17:02.067112 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-njqxl"] Jan 21 07:17:02 crc kubenswrapper[4893]: I0121 07:17:02.160711 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586b989cdc-5x46j" Jan 21 07:17:02 crc kubenswrapper[4893]: I0121 07:17:02.588523 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 21 07:17:02 crc kubenswrapper[4893]: I0121 07:17:02.626407 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 21 07:17:02 crc kubenswrapper[4893]: I0121 07:17:02.649233 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-5x46j"] Jan 21 07:17:02 crc kubenswrapper[4893]: W0121 07:17:02.655760 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a744009_f5d7_4d59_bb5b_1668a715e9d0.slice/crio-5c2f62b9131b04c0e9d4b812617118c205c89e10e017edcc88dc7b7acc5a4a7d WatchSource:0}: Error finding container 5c2f62b9131b04c0e9d4b812617118c205c89e10e017edcc88dc7b7acc5a4a7d: Status 404 returned error can't find the container with id 5c2f62b9131b04c0e9d4b812617118c205c89e10e017edcc88dc7b7acc5a4a7d Jan 21 07:17:02 crc kubenswrapper[4893]: I0121 07:17:02.939943 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"5cc7c949-b993-484e-8e07-778a72743679","Type":"ContainerStarted","Data":"aee8a6ea9a77f904909aaaa7e5b406eb695daf2df6664ab2f71b0577e981db2c"} Jan 21 07:17:02 crc kubenswrapper[4893]: I0121 07:17:02.943079 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-5x46j" event={"ID":"8a744009-f5d7-4d59-bb5b-1668a715e9d0","Type":"ContainerStarted","Data":"5c2f62b9131b04c0e9d4b812617118c205c89e10e017edcc88dc7b7acc5a4a7d"} Jan 21 07:17:02 crc kubenswrapper[4893]: I0121 07:17:02.946215 4893 generic.go:334] "Generic (PLEG): container finished" podID="78d5f974-5570-4407-8dbe-7471ae98fd50" containerID="0902eba70db7f26491e33d8c0ee5f46c52338fff31166adc692390c1065669c6" exitCode=0 Jan 21 07:17:02 crc kubenswrapper[4893]: I0121 07:17:02.946369 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zvt96" event={"ID":"78d5f974-5570-4407-8dbe-7471ae98fd50","Type":"ContainerDied","Data":"0902eba70db7f26491e33d8c0ee5f46c52338fff31166adc692390c1065669c6"} Jan 21 07:17:02 crc kubenswrapper[4893]: I0121 07:17:02.951880 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5b37865c-22cd-4288-b47b-ef9ef1f33646","Type":"ContainerStarted","Data":"7c6d4673c3549715ec53ab38c378a4c139ad12463137e1030d564c833b09d3f2"} Jan 21 07:17:02 crc kubenswrapper[4893]: I0121 07:17:02.953117 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 21 07:17:02 crc kubenswrapper[4893]: I0121 07:17:02.976213 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371983.878586 podStartE2EDuration="52.976190646s" podCreationTimestamp="2026-01-21 07:16:10 +0000 UTC" firstStartedPulling="2026-01-21 07:16:12.84196567 +0000 UTC m=+1314.072311572" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:17:02.973466577 +0000 UTC m=+1364.203812519" watchObservedRunningTime="2026-01-21 
07:17:02.976190646 +0000 UTC m=+1364.206536548" Jan 21 07:17:03 crc kubenswrapper[4893]: I0121 07:17:03.018625 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 21 07:17:03 crc kubenswrapper[4893]: I0121 07:17:03.021198 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 21 07:17:03 crc kubenswrapper[4893]: I0121 07:17:03.042851 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=10.037587374 podStartE2EDuration="52.042823636s" podCreationTimestamp="2026-01-21 07:16:11 +0000 UTC" firstStartedPulling="2026-01-21 07:16:14.350151841 +0000 UTC m=+1315.580497743" lastFinishedPulling="2026-01-21 07:16:56.355388093 +0000 UTC m=+1357.585734005" observedRunningTime="2026-01-21 07:17:03.027378169 +0000 UTC m=+1364.257724081" watchObservedRunningTime="2026-01-21 07:17:03.042823636 +0000 UTC m=+1364.273169548" Jan 21 07:17:03 crc kubenswrapper[4893]: I0121 07:17:03.435152 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 21 07:17:03 crc kubenswrapper[4893]: I0121 07:17:03.435202 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 21 07:17:03 crc kubenswrapper[4893]: I0121 07:17:03.567186 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 21 07:17:03 crc kubenswrapper[4893]: I0121 07:17:03.604229 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b2f2e99-98c7-4e63-9349-c10a839f3310" path="/var/lib/kubelet/pods/1b2f2e99-98c7-4e63-9349-c10a839f3310/volumes" Jan 21 07:17:03 crc kubenswrapper[4893]: I0121 07:17:03.717348 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 21 07:17:03 crc kubenswrapper[4893]: I0121 07:17:03.719820 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 21 07:17:03 crc kubenswrapper[4893]: I0121 07:17:03.734764 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 21 07:17:03 crc kubenswrapper[4893]: I0121 07:17:03.734979 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 21 07:17:03 crc kubenswrapper[4893]: I0121 07:17:03.735202 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-n4l49" Jan 21 07:17:03 crc kubenswrapper[4893]: I0121 07:17:03.735907 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 21 07:17:03 crc kubenswrapper[4893]: I0121 07:17:03.744373 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac0b6d79-4e8e-499d-afef-53b42511af46-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"ac0b6d79-4e8e-499d-afef-53b42511af46\") " pod="openstack/ovn-northd-0" Jan 21 07:17:03 crc kubenswrapper[4893]: I0121 07:17:03.744758 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ac0b6d79-4e8e-499d-afef-53b42511af46-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"ac0b6d79-4e8e-499d-afef-53b42511af46\") " pod="openstack/ovn-northd-0" Jan 21 07:17:03 crc kubenswrapper[4893]: I0121 07:17:03.744810 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac0b6d79-4e8e-499d-afef-53b42511af46-config\") pod \"ovn-northd-0\" (UID: \"ac0b6d79-4e8e-499d-afef-53b42511af46\") " pod="openstack/ovn-northd-0" Jan 21 07:17:03 crc kubenswrapper[4893]: I0121 07:17:03.744844 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac0b6d79-4e8e-499d-afef-53b42511af46-scripts\") pod \"ovn-northd-0\" (UID: \"ac0b6d79-4e8e-499d-afef-53b42511af46\") " pod="openstack/ovn-northd-0" Jan 21 07:17:03 crc kubenswrapper[4893]: I0121 07:17:03.744896 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac0b6d79-4e8e-499d-afef-53b42511af46-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"ac0b6d79-4e8e-499d-afef-53b42511af46\") " pod="openstack/ovn-northd-0" Jan 21 07:17:03 crc kubenswrapper[4893]: I0121 07:17:03.744938 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llfxj\" (UniqueName: \"kubernetes.io/projected/ac0b6d79-4e8e-499d-afef-53b42511af46-kube-api-access-llfxj\") pod \"ovn-northd-0\" (UID: \"ac0b6d79-4e8e-499d-afef-53b42511af46\") " pod="openstack/ovn-northd-0" Jan 21 07:17:03 crc kubenswrapper[4893]: I0121 07:17:03.744987 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac0b6d79-4e8e-499d-afef-53b42511af46-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"ac0b6d79-4e8e-499d-afef-53b42511af46\") " pod="openstack/ovn-northd-0" Jan 21 07:17:03 crc kubenswrapper[4893]: I0121 07:17:03.757483 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 21 07:17:03 crc kubenswrapper[4893]: 
I0121 07:17:03.846751 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac0b6d79-4e8e-499d-afef-53b42511af46-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"ac0b6d79-4e8e-499d-afef-53b42511af46\") " pod="openstack/ovn-northd-0" Jan 21 07:17:03 crc kubenswrapper[4893]: I0121 07:17:03.846810 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llfxj\" (UniqueName: \"kubernetes.io/projected/ac0b6d79-4e8e-499d-afef-53b42511af46-kube-api-access-llfxj\") pod \"ovn-northd-0\" (UID: \"ac0b6d79-4e8e-499d-afef-53b42511af46\") " pod="openstack/ovn-northd-0" Jan 21 07:17:03 crc kubenswrapper[4893]: I0121 07:17:03.846852 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac0b6d79-4e8e-499d-afef-53b42511af46-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"ac0b6d79-4e8e-499d-afef-53b42511af46\") " pod="openstack/ovn-northd-0" Jan 21 07:17:03 crc kubenswrapper[4893]: I0121 07:17:03.846897 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac0b6d79-4e8e-499d-afef-53b42511af46-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"ac0b6d79-4e8e-499d-afef-53b42511af46\") " pod="openstack/ovn-northd-0" Jan 21 07:17:03 crc kubenswrapper[4893]: I0121 07:17:03.846926 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ac0b6d79-4e8e-499d-afef-53b42511af46-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"ac0b6d79-4e8e-499d-afef-53b42511af46\") " pod="openstack/ovn-northd-0" Jan 21 07:17:03 crc kubenswrapper[4893]: I0121 07:17:03.846957 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac0b6d79-4e8e-499d-afef-53b42511af46-config\") pod \"ovn-northd-0\" (UID: \"ac0b6d79-4e8e-499d-afef-53b42511af46\") " pod="openstack/ovn-northd-0" Jan 21 07:17:03 crc kubenswrapper[4893]: I0121 07:17:03.846989 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac0b6d79-4e8e-499d-afef-53b42511af46-scripts\") pod \"ovn-northd-0\" (UID: \"ac0b6d79-4e8e-499d-afef-53b42511af46\") " pod="openstack/ovn-northd-0" Jan 21 07:17:03 crc kubenswrapper[4893]: I0121 07:17:03.847875 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ac0b6d79-4e8e-499d-afef-53b42511af46-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"ac0b6d79-4e8e-499d-afef-53b42511af46\") " pod="openstack/ovn-northd-0" Jan 21 07:17:03 crc kubenswrapper[4893]: I0121 07:17:03.847883 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac0b6d79-4e8e-499d-afef-53b42511af46-scripts\") pod \"ovn-northd-0\" (UID: \"ac0b6d79-4e8e-499d-afef-53b42511af46\") " pod="openstack/ovn-northd-0" Jan 21 07:17:03 crc kubenswrapper[4893]: I0121 07:17:03.848482 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac0b6d79-4e8e-499d-afef-53b42511af46-config\") pod \"ovn-northd-0\" (UID: \"ac0b6d79-4e8e-499d-afef-53b42511af46\") " pod="openstack/ovn-northd-0" Jan 21 07:17:03 crc kubenswrapper[4893]: I0121 07:17:03.865874 4893 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac0b6d79-4e8e-499d-afef-53b42511af46-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"ac0b6d79-4e8e-499d-afef-53b42511af46\") " pod="openstack/ovn-northd-0" Jan 21 07:17:03 crc kubenswrapper[4893]: I0121 07:17:03.865994 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac0b6d79-4e8e-499d-afef-53b42511af46-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"ac0b6d79-4e8e-499d-afef-53b42511af46\") " pod="openstack/ovn-northd-0" Jan 21 07:17:03 crc kubenswrapper[4893]: I0121 07:17:03.866928 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac0b6d79-4e8e-499d-afef-53b42511af46-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"ac0b6d79-4e8e-499d-afef-53b42511af46\") " pod="openstack/ovn-northd-0" Jan 21 07:17:03 crc kubenswrapper[4893]: I0121 07:17:03.869889 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llfxj\" (UniqueName: \"kubernetes.io/projected/ac0b6d79-4e8e-499d-afef-53b42511af46-kube-api-access-llfxj\") pod \"ovn-northd-0\" (UID: \"ac0b6d79-4e8e-499d-afef-53b42511af46\") " pod="openstack/ovn-northd-0" Jan 21 07:17:03 crc kubenswrapper[4893]: I0121 07:17:03.960408 4893 generic.go:334] "Generic (PLEG): container finished" podID="8a744009-f5d7-4d59-bb5b-1668a715e9d0" containerID="af11d584f906af1277ca2b44d46a2809ddd4d0cdb7af9c129067ab76ba53cae3" exitCode=0 Jan 21 07:17:03 crc kubenswrapper[4893]: I0121 07:17:03.960530 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-5x46j" event={"ID":"8a744009-f5d7-4d59-bb5b-1668a715e9d0","Type":"ContainerDied","Data":"af11d584f906af1277ca2b44d46a2809ddd4d0cdb7af9c129067ab76ba53cae3"} Jan 21 07:17:03 crc kubenswrapper[4893]: I0121 07:17:03.964154 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zvt96" event={"ID":"78d5f974-5570-4407-8dbe-7471ae98fd50","Type":"ContainerStarted","Data":"641f45d881156d21fd8815cd5b5efbac82f8de33d1a526e07cb2065a85cb4351"} Jan 21 07:17:03 crc kubenswrapper[4893]: I0121 07:17:03.964202 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zvt96" event={"ID":"78d5f974-5570-4407-8dbe-7471ae98fd50","Type":"ContainerStarted","Data":"ee2ac52ca03e9ba8604209edd0e24ede0af7849c83ac6195ee87a7943fa359b3"} Jan 21 07:17:03 crc kubenswrapper[4893]: I0121 07:17:03.964828 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-zvt96" Jan 21 07:17:03 crc kubenswrapper[4893]: I0121 07:17:03.964933 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-zvt96" Jan 21 07:17:04 crc kubenswrapper[4893]: I0121 07:17:04.009978 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-zvt96" podStartSLOduration=4.902342343 podStartE2EDuration="45.009957318s" podCreationTimestamp="2026-01-21 07:16:19 +0000 UTC" firstStartedPulling="2026-01-21 07:16:21.171821256 +0000 UTC m=+1322.402167158" lastFinishedPulling="2026-01-21 07:17:01.279436231 +0000 UTC m=+1362.509782133" observedRunningTime="2026-01-21 07:17:04.008815675 +0000 UTC m=+1365.239161607" watchObservedRunningTime="2026-01-21 07:17:04.009957318 +0000 UTC m=+1365.240303220" Jan 21 07:17:04 crc 
kubenswrapper[4893]: I0121 07:17:04.054839 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 21 07:17:04 crc kubenswrapper[4893]: I0121 07:17:04.283590 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 21 07:17:05 crc kubenswrapper[4893]: I0121 07:17:05.118578 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"ac0b6d79-4e8e-499d-afef-53b42511af46","Type":"ContainerStarted","Data":"7fb47672401c812d658bd89f89fc38d3f266874a82b2c77f5e931c9c3efb910b"} Jan 21 07:17:05 crc kubenswrapper[4893]: I0121 07:17:05.120812 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-5x46j" event={"ID":"8a744009-f5d7-4d59-bb5b-1668a715e9d0","Type":"ContainerStarted","Data":"65324b6e03f7d7166555441ac9620268b38765894af89d5e281b6859ee3047ca"} Jan 21 07:17:05 crc kubenswrapper[4893]: I0121 07:17:05.137381 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-586b989cdc-5x46j" podStartSLOduration=4.137344591 podStartE2EDuration="4.137344591s" podCreationTimestamp="2026-01-21 07:17:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:17:05.137141715 +0000 UTC m=+1366.367487617" watchObservedRunningTime="2026-01-21 07:17:05.137344591 +0000 UTC m=+1366.367690493" Jan 21 07:17:05 crc kubenswrapper[4893]: I0121 07:17:05.453012 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7878659675-cxgvk"] Jan 21 07:17:05 crc kubenswrapper[4893]: I0121 07:17:05.453457 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7878659675-cxgvk" podUID="07b9c0c4-505d-4af3-ac57-3a379550f85f" containerName="dnsmasq-dns" containerID="cri-o://fd51858a23e46a3f6ec2c98ea063491b155d092cb6edbea569d1d45b5b29c489" gracePeriod=10 Jan 21 07:17:05 crc kubenswrapper[4893]: I0121 07:17:05.455342 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7878659675-cxgvk" Jan 21 07:17:05 crc kubenswrapper[4893]: I0121 07:17:05.511995 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-6rcdb"] Jan 21 07:17:05 crc kubenswrapper[4893]: I0121 07:17:05.518037 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67fdf7998c-6rcdb" Jan 21 07:17:05 crc kubenswrapper[4893]: I0121 07:17:05.553845 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-6rcdb"] Jan 21 07:17:05 crc kubenswrapper[4893]: I0121 07:17:05.619435 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9-ovsdbserver-sb\") pod \"dnsmasq-dns-67fdf7998c-6rcdb\" (UID: \"0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9\") " pod="openstack/dnsmasq-dns-67fdf7998c-6rcdb" Jan 21 07:17:05 crc kubenswrapper[4893]: I0121 07:17:05.619492 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9-ovsdbserver-nb\") pod \"dnsmasq-dns-67fdf7998c-6rcdb\" (UID: \"0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9\") " pod="openstack/dnsmasq-dns-67fdf7998c-6rcdb" Jan 21 07:17:05 crc kubenswrapper[4893]: I0121 07:17:05.619517 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc446\" (UniqueName: \"kubernetes.io/projected/0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9-kube-api-access-xc446\") pod \"dnsmasq-dns-67fdf7998c-6rcdb\" (UID: \"0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9\") " pod="openstack/dnsmasq-dns-67fdf7998c-6rcdb" Jan 21 07:17:05 crc kubenswrapper[4893]: I0121 07:17:05.619600 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9-dns-svc\") pod \"dnsmasq-dns-67fdf7998c-6rcdb\" (UID: \"0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9\") " pod="openstack/dnsmasq-dns-67fdf7998c-6rcdb" Jan 21 07:17:05 crc kubenswrapper[4893]: I0121 07:17:05.619667 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9-config\") pod \"dnsmasq-dns-67fdf7998c-6rcdb\" (UID: \"0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9\") " pod="openstack/dnsmasq-dns-67fdf7998c-6rcdb" Jan 21 07:17:05 crc kubenswrapper[4893]: I0121 07:17:05.721714 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9-config\") pod \"dnsmasq-dns-67fdf7998c-6rcdb\" (UID: \"0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9\") " pod="openstack/dnsmasq-dns-67fdf7998c-6rcdb" Jan 21 07:17:05 crc kubenswrapper[4893]: I0121 07:17:05.721808 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9-ovsdbserver-sb\") pod \"dnsmasq-dns-67fdf7998c-6rcdb\" (UID: \"0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9\") " pod="openstack/dnsmasq-dns-67fdf7998c-6rcdb" Jan 21 07:17:05 crc kubenswrapper[4893]: I0121 07:17:05.721838 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9-ovsdbserver-nb\") pod \"dnsmasq-dns-67fdf7998c-6rcdb\" (UID: \"0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9\") " pod="openstack/dnsmasq-dns-67fdf7998c-6rcdb" Jan 21 07:17:05 crc kubenswrapper[4893]: I0121 07:17:05.721868 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-xc446\" (UniqueName: \"kubernetes.io/projected/0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9-kube-api-access-xc446\") pod \"dnsmasq-dns-67fdf7998c-6rcdb\" (UID: \"0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9\") " pod="openstack/dnsmasq-dns-67fdf7998c-6rcdb" Jan 21 07:17:05 crc kubenswrapper[4893]: I0121 07:17:05.721942 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9-dns-svc\") pod \"dnsmasq-dns-67fdf7998c-6rcdb\" (UID: \"0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9\") " pod="openstack/dnsmasq-dns-67fdf7998c-6rcdb" Jan 21 07:17:05 crc kubenswrapper[4893]: I0121 07:17:05.723162 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9-config\") pod \"dnsmasq-dns-67fdf7998c-6rcdb\" (UID: \"0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9\") " pod="openstack/dnsmasq-dns-67fdf7998c-6rcdb" Jan 21 07:17:05 crc kubenswrapper[4893]: I0121 07:17:05.723254 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9-ovsdbserver-nb\") pod \"dnsmasq-dns-67fdf7998c-6rcdb\" (UID: \"0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9\") " pod="openstack/dnsmasq-dns-67fdf7998c-6rcdb" Jan 21 07:17:05 crc kubenswrapper[4893]: I0121 07:17:05.723382 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9-dns-svc\") pod \"dnsmasq-dns-67fdf7998c-6rcdb\" (UID: \"0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9\") " pod="openstack/dnsmasq-dns-67fdf7998c-6rcdb" Jan 21 07:17:05 crc kubenswrapper[4893]: I0121 07:17:05.723456 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9-ovsdbserver-sb\") pod \"dnsmasq-dns-67fdf7998c-6rcdb\" (UID: \"0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9\") " pod="openstack/dnsmasq-dns-67fdf7998c-6rcdb" Jan 21 07:17:05 crc kubenswrapper[4893]: I0121 07:17:05.747757 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xc446\" (UniqueName: \"kubernetes.io/projected/0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9-kube-api-access-xc446\") pod \"dnsmasq-dns-67fdf7998c-6rcdb\" (UID: \"0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9\") " pod="openstack/dnsmasq-dns-67fdf7998c-6rcdb" Jan 21 07:17:05 crc kubenswrapper[4893]: I0121 07:17:05.851339 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67fdf7998c-6rcdb" Jan 21 07:17:06 crc kubenswrapper[4893]: I0121 07:17:06.134872 4893 generic.go:334] "Generic (PLEG): container finished" podID="07b9c0c4-505d-4af3-ac57-3a379550f85f" containerID="fd51858a23e46a3f6ec2c98ea063491b155d092cb6edbea569d1d45b5b29c489" exitCode=0 Jan 21 07:17:06 crc kubenswrapper[4893]: I0121 07:17:06.134949 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7878659675-cxgvk" event={"ID":"07b9c0c4-505d-4af3-ac57-3a379550f85f","Type":"ContainerDied","Data":"fd51858a23e46a3f6ec2c98ea063491b155d092cb6edbea569d1d45b5b29c489"} Jan 21 07:17:06 crc kubenswrapper[4893]: I0121 07:17:06.136454 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-586b989cdc-5x46j" Jan 21 07:17:06 crc kubenswrapper[4893]: I0121 07:17:06.384074 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7878659675-cxgvk" Jan 21 07:17:06 crc kubenswrapper[4893]: I0121 07:17:06.520197 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07b9c0c4-505d-4af3-ac57-3a379550f85f-config\") pod \"07b9c0c4-505d-4af3-ac57-3a379550f85f\" (UID: \"07b9c0c4-505d-4af3-ac57-3a379550f85f\") " Jan 21 07:17:06 crc kubenswrapper[4893]: I0121 07:17:06.520248 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/07b9c0c4-505d-4af3-ac57-3a379550f85f-ovsdbserver-nb\") pod \"07b9c0c4-505d-4af3-ac57-3a379550f85f\" (UID: \"07b9c0c4-505d-4af3-ac57-3a379550f85f\") " Jan 21 07:17:06 crc kubenswrapper[4893]: I0121 07:17:06.520311 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07b9c0c4-505d-4af3-ac57-3a379550f85f-dns-svc\") pod \"07b9c0c4-505d-4af3-ac57-3a379550f85f\" (UID: \"07b9c0c4-505d-4af3-ac57-3a379550f85f\") " Jan 21 07:17:06 crc kubenswrapper[4893]: I0121 07:17:06.520488 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2s8st\" (UniqueName: \"kubernetes.io/projected/07b9c0c4-505d-4af3-ac57-3a379550f85f-kube-api-access-2s8st\") pod \"07b9c0c4-505d-4af3-ac57-3a379550f85f\" (UID: \"07b9c0c4-505d-4af3-ac57-3a379550f85f\") " Jan 21 07:17:06 crc kubenswrapper[4893]: I0121 07:17:06.524295 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07b9c0c4-505d-4af3-ac57-3a379550f85f-kube-api-access-2s8st" (OuterVolumeSpecName: "kube-api-access-2s8st") pod "07b9c0c4-505d-4af3-ac57-3a379550f85f" (UID: "07b9c0c4-505d-4af3-ac57-3a379550f85f"). InnerVolumeSpecName "kube-api-access-2s8st". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:17:06 crc kubenswrapper[4893]: W0121 07:17:06.613883 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0dc0b4f1_9681_4ced_8f2d_c67fbbeca8b9.slice/crio-6a6afe3eb65091176eb9d36a8db9bc2c14ad5335b5982e1d983351e2024cfd0f WatchSource:0}: Error finding container 6a6afe3eb65091176eb9d36a8db9bc2c14ad5335b5982e1d983351e2024cfd0f: Status 404 returned error can't find the container with id 6a6afe3eb65091176eb9d36a8db9bc2c14ad5335b5982e1d983351e2024cfd0f Jan 21 07:17:06 crc kubenswrapper[4893]: I0121 07:17:06.615131 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07b9c0c4-505d-4af3-ac57-3a379550f85f-config" (OuterVolumeSpecName: "config") pod "07b9c0c4-505d-4af3-ac57-3a379550f85f" (UID: "07b9c0c4-505d-4af3-ac57-3a379550f85f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:17:06 crc kubenswrapper[4893]: I0121 07:17:06.615866 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07b9c0c4-505d-4af3-ac57-3a379550f85f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "07b9c0c4-505d-4af3-ac57-3a379550f85f" (UID: "07b9c0c4-505d-4af3-ac57-3a379550f85f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:17:06 crc kubenswrapper[4893]: I0121 07:17:06.621705 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2s8st\" (UniqueName: \"kubernetes.io/projected/07b9c0c4-505d-4af3-ac57-3a379550f85f-kube-api-access-2s8st\") on node \"crc\" DevicePath \"\"" Jan 21 07:17:06 crc kubenswrapper[4893]: I0121 07:17:06.621733 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07b9c0c4-505d-4af3-ac57-3a379550f85f-config\") on node \"crc\" DevicePath \"\"" Jan 21 07:17:06 crc kubenswrapper[4893]: I0121 07:17:06.621745 4893 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07b9c0c4-505d-4af3-ac57-3a379550f85f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 07:17:06 crc kubenswrapper[4893]: I0121 07:17:06.633420 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-6rcdb"] Jan 21 07:17:06 crc kubenswrapper[4893]: I0121 07:17:06.637197 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07b9c0c4-505d-4af3-ac57-3a379550f85f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "07b9c0c4-505d-4af3-ac57-3a379550f85f" (UID: "07b9c0c4-505d-4af3-ac57-3a379550f85f"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:17:06 crc kubenswrapper[4893]: I0121 07:17:06.664067 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 21 07:17:06 crc kubenswrapper[4893]: E0121 07:17:06.664501 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07b9c0c4-505d-4af3-ac57-3a379550f85f" containerName="dnsmasq-dns" Jan 21 07:17:06 crc kubenswrapper[4893]: I0121 07:17:06.664523 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="07b9c0c4-505d-4af3-ac57-3a379550f85f" containerName="dnsmasq-dns" Jan 21 07:17:06 crc kubenswrapper[4893]: E0121 07:17:06.664582 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07b9c0c4-505d-4af3-ac57-3a379550f85f" containerName="init" Jan 21 07:17:06 crc kubenswrapper[4893]: I0121 07:17:06.664589 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="07b9c0c4-505d-4af3-ac57-3a379550f85f" containerName="init" Jan 21 07:17:06 crc kubenswrapper[4893]: I0121 07:17:06.664779 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="07b9c0c4-505d-4af3-ac57-3a379550f85f" containerName="dnsmasq-dns" Jan 21 07:17:06 crc kubenswrapper[4893]: I0121 07:17:06.670649 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 21 07:17:06 crc kubenswrapper[4893]: I0121 07:17:06.672846 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 21 07:17:06 crc kubenswrapper[4893]: I0121 07:17:06.673204 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 21 07:17:06 crc kubenswrapper[4893]: I0121 07:17:06.673412 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 21 07:17:06 crc kubenswrapper[4893]: I0121 07:17:06.673563 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-mthsx" Jan 21 07:17:06 crc kubenswrapper[4893]: I0121 07:17:06.699305 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 21 07:17:06 crc kubenswrapper[4893]: I0121 07:17:06.723772 4893 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/07b9c0c4-505d-4af3-ac57-3a379550f85f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 07:17:06 crc kubenswrapper[4893]: I0121 07:17:06.825299 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b-lock\") pod \"swift-storage-0\" (UID: \"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b\") " pod="openstack/swift-storage-0" Jan 21 07:17:06 crc kubenswrapper[4893]: I0121 07:17:06.825751 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fh72\" (UniqueName: \"kubernetes.io/projected/1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b-kube-api-access-9fh72\") pod \"swift-storage-0\" (UID: \"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b\") " pod="openstack/swift-storage-0" Jan 21 07:17:06 crc kubenswrapper[4893]: I0121 07:17:06.825832 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b\") " pod="openstack/swift-storage-0" Jan 21 07:17:06 
crc kubenswrapper[4893]: I0121 07:17:06.825866 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b-etc-swift\") pod \"swift-storage-0\" (UID: \"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b\") " pod="openstack/swift-storage-0" Jan 21 07:17:06 crc kubenswrapper[4893]: I0121 07:17:06.825914 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b-cache\") pod \"swift-storage-0\" (UID: \"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b\") " pod="openstack/swift-storage-0" Jan 21 07:17:06 crc kubenswrapper[4893]: I0121 07:17:06.927269 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b\") " pod="openstack/swift-storage-0" Jan 21 07:17:06 crc kubenswrapper[4893]: I0121 07:17:06.927326 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b-etc-swift\") pod \"swift-storage-0\" (UID: \"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b\") " pod="openstack/swift-storage-0" Jan 21 07:17:06 crc kubenswrapper[4893]: I0121 07:17:06.927364 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b-cache\") pod \"swift-storage-0\" (UID: \"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b\") " pod="openstack/swift-storage-0" Jan 21 07:17:06 crc kubenswrapper[4893]: I0121 07:17:06.927453 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b-lock\") pod \"swift-storage-0\" (UID: \"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b\") " pod="openstack/swift-storage-0" Jan 21 07:17:06 crc kubenswrapper[4893]: I0121 07:17:06.927486 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fh72\" (UniqueName: \"kubernetes.io/projected/1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b-kube-api-access-9fh72\") pod \"swift-storage-0\" (UID: \"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b\") " pod="openstack/swift-storage-0" Jan 21 07:17:06 crc kubenswrapper[4893]: I0121 07:17:06.927832 4893 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/swift-storage-0" Jan 21 07:17:06 crc kubenswrapper[4893]: E0121 07:17:06.927935 4893 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 07:17:06 crc kubenswrapper[4893]: E0121 07:17:06.927955 4893 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 21 07:17:06 crc kubenswrapper[4893]: E0121 07:17:06.928009 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b-etc-swift podName:1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b nodeName:}" failed. 
No retries permitted until 2026-01-21 07:17:07.427988854 +0000 UTC m=+1368.658334756 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b-etc-swift") pod "swift-storage-0" (UID: "1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b") : configmap "swift-ring-files" not found Jan 21 07:17:06 crc kubenswrapper[4893]: I0121 07:17:06.928191 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b-cache\") pod \"swift-storage-0\" (UID: \"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b\") " pod="openstack/swift-storage-0" Jan 21 07:17:06 crc kubenswrapper[4893]: I0121 07:17:06.928356 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b-lock\") pod \"swift-storage-0\" (UID: \"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b\") " pod="openstack/swift-storage-0" Jan 21 07:17:06 crc kubenswrapper[4893]: I0121 07:17:06.948816 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b\") " pod="openstack/swift-storage-0" Jan 21 07:17:06 crc kubenswrapper[4893]: I0121 07:17:06.953754 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fh72\" (UniqueName: \"kubernetes.io/projected/1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b-kube-api-access-9fh72\") pod \"swift-storage-0\" (UID: \"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b\") " pod="openstack/swift-storage-0" Jan 21 07:17:07 crc kubenswrapper[4893]: I0121 07:17:07.143526 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"ac0b6d79-4e8e-499d-afef-53b42511af46","Type":"ContainerStarted","Data":"85db962aefff38c722556849ce7c8f650d56593c154442c19394da5686adb8c1"} Jan 21 07:17:07 crc kubenswrapper[4893]: I0121 07:17:07.143574 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"ac0b6d79-4e8e-499d-afef-53b42511af46","Type":"ContainerStarted","Data":"d39ecdf969ee720a73564c221653cc20ea5438c4130dd242b39b2895ba9d5477"} Jan 21 07:17:07 crc kubenswrapper[4893]: I0121 07:17:07.143704 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 21 07:17:07 crc kubenswrapper[4893]: I0121 07:17:07.145067 4893 generic.go:334] "Generic (PLEG): container finished" podID="0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9" containerID="8f8af5b9407014ae9636590d0a04ce6566a90295eb96e378aabe705dfc7f2f6d" exitCode=0 Jan 21 07:17:07 crc kubenswrapper[4893]: I0121 07:17:07.145155 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-6rcdb" event={"ID":"0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9","Type":"ContainerDied","Data":"8f8af5b9407014ae9636590d0a04ce6566a90295eb96e378aabe705dfc7f2f6d"} Jan 21 07:17:07 crc kubenswrapper[4893]: I0121 07:17:07.145225 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-6rcdb" event={"ID":"0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9","Type":"ContainerStarted","Data":"6a6afe3eb65091176eb9d36a8db9bc2c14ad5335b5982e1d983351e2024cfd0f"} Jan 21 07:17:07 crc kubenswrapper[4893]: I0121 07:17:07.147188 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7878659675-cxgvk" 
event={"ID":"07b9c0c4-505d-4af3-ac57-3a379550f85f","Type":"ContainerDied","Data":"d1157acd868aeb7fb1e5e4c7271a65da4af57536caa5ec38d5b5f00dc5a8ffa8"} Jan 21 07:17:07 crc kubenswrapper[4893]: I0121 07:17:07.147221 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7878659675-cxgvk" Jan 21 07:17:07 crc kubenswrapper[4893]: I0121 07:17:07.147262 4893 scope.go:117] "RemoveContainer" containerID="fd51858a23e46a3f6ec2c98ea063491b155d092cb6edbea569d1d45b5b29c489" Jan 21 07:17:07 crc kubenswrapper[4893]: I0121 07:17:07.171385 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.532462256 podStartE2EDuration="4.171363684s" podCreationTimestamp="2026-01-21 07:17:03 +0000 UTC" firstStartedPulling="2026-01-21 07:17:04.288823815 +0000 UTC m=+1365.519169717" lastFinishedPulling="2026-01-21 07:17:05.927725243 +0000 UTC m=+1367.158071145" observedRunningTime="2026-01-21 07:17:07.16674085 +0000 UTC m=+1368.397086762" watchObservedRunningTime="2026-01-21 07:17:07.171363684 +0000 UTC m=+1368.401709586" Jan 21 07:17:07 crc kubenswrapper[4893]: I0121 07:17:07.253251 4893 scope.go:117] "RemoveContainer" containerID="cef581f00464d9b6d3724086b93e9a9f075803c47d55f26fd69c7baef820cafa" Jan 21 07:17:07 crc kubenswrapper[4893]: I0121 07:17:07.253419 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7878659675-cxgvk"] Jan 21 07:17:07 crc kubenswrapper[4893]: I0121 07:17:07.259877 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7878659675-cxgvk"] Jan 21 07:17:07 crc kubenswrapper[4893]: I0121 07:17:07.443318 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b-etc-swift\") pod \"swift-storage-0\" (UID: \"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b\") " pod="openstack/swift-storage-0" Jan 21 07:17:07 crc kubenswrapper[4893]: E0121 07:17:07.443535 4893 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 07:17:07 crc kubenswrapper[4893]: E0121 07:17:07.443770 4893 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 21 07:17:07 crc kubenswrapper[4893]: E0121 07:17:07.443830 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b-etc-swift podName:1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b nodeName:}" failed. No retries permitted until 2026-01-21 07:17:08.443812215 +0000 UTC m=+1369.674158127 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b-etc-swift") pod "swift-storage-0" (UID: "1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b") : configmap "swift-ring-files" not found Jan 21 07:17:07 crc kubenswrapper[4893]: I0121 07:17:07.559044 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 21 07:17:07 crc kubenswrapper[4893]: I0121 07:17:07.597894 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07b9c0c4-505d-4af3-ac57-3a379550f85f" path="/var/lib/kubelet/pods/07b9c0c4-505d-4af3-ac57-3a379550f85f/volumes" Jan 21 07:17:07 crc kubenswrapper[4893]: I0121 07:17:07.646158 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 21 07:17:08 crc kubenswrapper[4893]: I0121 07:17:08.159804 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-6rcdb" event={"ID":"0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9","Type":"ContainerStarted","Data":"61b59bbe45a6257a84294e3931fa1c34e15d8872b46ddbb3cdf0c4060438c2d7"} Jan 21 07:17:08 crc kubenswrapper[4893]: I0121 07:17:08.189767 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67fdf7998c-6rcdb" podStartSLOduration=3.18974321 podStartE2EDuration="3.18974321s" podCreationTimestamp="2026-01-21 07:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:17:08.182253723 +0000 UTC m=+1369.412599625" watchObservedRunningTime="2026-01-21 07:17:08.18974321 +0000 UTC m=+1369.420089112" Jan 21 07:17:08 crc kubenswrapper[4893]: I0121 07:17:08.465584 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b-etc-swift\") pod \"swift-storage-0\" (UID: \"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b\") " pod="openstack/swift-storage-0" Jan 21 07:17:08 crc kubenswrapper[4893]: E0121 07:17:08.465896 4893 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 07:17:08 crc kubenswrapper[4893]: E0121 07:17:08.465918 4893 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 21 07:17:08 crc kubenswrapper[4893]: E0121 07:17:08.465986 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b-etc-swift podName:1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b nodeName:}" failed. No retries permitted until 2026-01-21 07:17:10.46596579 +0000 UTC m=+1371.696311692 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b-etc-swift") pod "swift-storage-0" (UID: "1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b") : configmap "swift-ring-files" not found Jan 21 07:17:09 crc kubenswrapper[4893]: I0121 07:17:09.166472 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67fdf7998c-6rcdb" Jan 21 07:17:10 crc kubenswrapper[4893]: I0121 07:17:10.542979 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b-etc-swift\") pod \"swift-storage-0\" (UID: \"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b\") " pod="openstack/swift-storage-0" Jan 21 07:17:10 crc kubenswrapper[4893]: E0121 07:17:10.543234 4893 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 07:17:10 crc kubenswrapper[4893]: E0121 07:17:10.544090 4893 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 21 07:17:10 crc kubenswrapper[4893]: E0121 07:17:10.544168 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b-etc-swift podName:1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b nodeName:}" failed. No retries permitted until 2026-01-21 07:17:14.544144922 +0000 UTC m=+1375.774490824 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b-etc-swift") pod "swift-storage-0" (UID: "1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b") : configmap "swift-ring-files" not found Jan 21 07:17:10 crc kubenswrapper[4893]: I0121 07:17:10.560798 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-25ctr"] Jan 21 07:17:10 crc kubenswrapper[4893]: I0121 07:17:10.564454 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-25ctr" Jan 21 07:17:10 crc kubenswrapper[4893]: I0121 07:17:10.566769 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 21 07:17:10 crc kubenswrapper[4893]: I0121 07:17:10.566802 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 21 07:17:10 crc kubenswrapper[4893]: I0121 07:17:10.569306 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 21 07:17:10 crc kubenswrapper[4893]: I0121 07:17:10.569820 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-25ctr"] Jan 21 07:17:10 crc kubenswrapper[4893]: I0121 07:17:10.646036 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/caef5d13-59d3-4ca2-b6c9-77b9616a91c8-etc-swift\") pod \"swift-ring-rebalance-25ctr\" (UID: \"caef5d13-59d3-4ca2-b6c9-77b9616a91c8\") " pod="openstack/swift-ring-rebalance-25ctr" Jan 21 07:17:10 crc kubenswrapper[4893]: I0121 07:17:10.646211 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/caef5d13-59d3-4ca2-b6c9-77b9616a91c8-swiftconf\") pod \"swift-ring-rebalance-25ctr\" (UID: \"caef5d13-59d3-4ca2-b6c9-77b9616a91c8\") " pod="openstack/swift-ring-rebalance-25ctr" Jan 21 07:17:10 crc kubenswrapper[4893]: I0121 07:17:10.646329 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/caef5d13-59d3-4ca2-b6c9-77b9616a91c8-dispersionconf\") pod \"swift-ring-rebalance-25ctr\" (UID: \"caef5d13-59d3-4ca2-b6c9-77b9616a91c8\") " pod="openstack/swift-ring-rebalance-25ctr" Jan 21 07:17:10 crc kubenswrapper[4893]: I0121 07:17:10.646368 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/caef5d13-59d3-4ca2-b6c9-77b9616a91c8-ring-data-devices\") pod \"swift-ring-rebalance-25ctr\" (UID: \"caef5d13-59d3-4ca2-b6c9-77b9616a91c8\") " pod="openstack/swift-ring-rebalance-25ctr" Jan 21 07:17:10 crc kubenswrapper[4893]: I0121 07:17:10.646449 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t525s\" (UniqueName: \"kubernetes.io/projected/caef5d13-59d3-4ca2-b6c9-77b9616a91c8-kube-api-access-t525s\") pod \"swift-ring-rebalance-25ctr\" (UID: \"caef5d13-59d3-4ca2-b6c9-77b9616a91c8\") " pod="openstack/swift-ring-rebalance-25ctr" Jan 21 07:17:10 crc kubenswrapper[4893]: I0121 07:17:10.646474 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/caef5d13-59d3-4ca2-b6c9-77b9616a91c8-scripts\") pod \"swift-ring-rebalance-25ctr\" (UID: \"caef5d13-59d3-4ca2-b6c9-77b9616a91c8\") " pod="openstack/swift-ring-rebalance-25ctr" Jan 21 07:17:10 crc kubenswrapper[4893]: I0121 07:17:10.646570 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caef5d13-59d3-4ca2-b6c9-77b9616a91c8-combined-ca-bundle\") pod \"swift-ring-rebalance-25ctr\" (UID: \"caef5d13-59d3-4ca2-b6c9-77b9616a91c8\") " pod="openstack/swift-ring-rebalance-25ctr" Jan 21 
07:17:10 crc kubenswrapper[4893]: I0121 07:17:10.755937 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/caef5d13-59d3-4ca2-b6c9-77b9616a91c8-etc-swift\") pod \"swift-ring-rebalance-25ctr\" (UID: \"caef5d13-59d3-4ca2-b6c9-77b9616a91c8\") " pod="openstack/swift-ring-rebalance-25ctr" Jan 21 07:17:10 crc kubenswrapper[4893]: I0121 07:17:10.756116 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/caef5d13-59d3-4ca2-b6c9-77b9616a91c8-swiftconf\") pod \"swift-ring-rebalance-25ctr\" (UID: \"caef5d13-59d3-4ca2-b6c9-77b9616a91c8\") " pod="openstack/swift-ring-rebalance-25ctr" Jan 21 07:17:10 crc kubenswrapper[4893]: I0121 07:17:10.756232 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/caef5d13-59d3-4ca2-b6c9-77b9616a91c8-dispersionconf\") pod \"swift-ring-rebalance-25ctr\" (UID: \"caef5d13-59d3-4ca2-b6c9-77b9616a91c8\") " pod="openstack/swift-ring-rebalance-25ctr" Jan 21 07:17:10 crc kubenswrapper[4893]: I0121 07:17:10.756262 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/caef5d13-59d3-4ca2-b6c9-77b9616a91c8-ring-data-devices\") pod \"swift-ring-rebalance-25ctr\" (UID: \"caef5d13-59d3-4ca2-b6c9-77b9616a91c8\") " pod="openstack/swift-ring-rebalance-25ctr" Jan 21 07:17:10 crc kubenswrapper[4893]: I0121 07:17:10.756332 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/caef5d13-59d3-4ca2-b6c9-77b9616a91c8-scripts\") pod \"swift-ring-rebalance-25ctr\" (UID: \"caef5d13-59d3-4ca2-b6c9-77b9616a91c8\") " pod="openstack/swift-ring-rebalance-25ctr" Jan 21 07:17:10 crc kubenswrapper[4893]: I0121 07:17:10.756349 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t525s\" (UniqueName: \"kubernetes.io/projected/caef5d13-59d3-4ca2-b6c9-77b9616a91c8-kube-api-access-t525s\") pod \"swift-ring-rebalance-25ctr\" (UID: \"caef5d13-59d3-4ca2-b6c9-77b9616a91c8\") " pod="openstack/swift-ring-rebalance-25ctr" Jan 21 07:17:10 crc kubenswrapper[4893]: I0121 07:17:10.756422 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caef5d13-59d3-4ca2-b6c9-77b9616a91c8-combined-ca-bundle\") pod \"swift-ring-rebalance-25ctr\" (UID: \"caef5d13-59d3-4ca2-b6c9-77b9616a91c8\") " pod="openstack/swift-ring-rebalance-25ctr" Jan 21 07:17:10 crc kubenswrapper[4893]: I0121 07:17:10.756808 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/caef5d13-59d3-4ca2-b6c9-77b9616a91c8-etc-swift\") pod \"swift-ring-rebalance-25ctr\" (UID: \"caef5d13-59d3-4ca2-b6c9-77b9616a91c8\") " pod="openstack/swift-ring-rebalance-25ctr" Jan 21 07:17:10 crc kubenswrapper[4893]: I0121 07:17:10.757573 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/caef5d13-59d3-4ca2-b6c9-77b9616a91c8-ring-data-devices\") pod \"swift-ring-rebalance-25ctr\" (UID: \"caef5d13-59d3-4ca2-b6c9-77b9616a91c8\") " pod="openstack/swift-ring-rebalance-25ctr" Jan 21 07:17:10 crc kubenswrapper[4893]: I0121 07:17:10.757746 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/caef5d13-59d3-4ca2-b6c9-77b9616a91c8-scripts\") pod \"swift-ring-rebalance-25ctr\" (UID: \"caef5d13-59d3-4ca2-b6c9-77b9616a91c8\") " pod="openstack/swift-ring-rebalance-25ctr" Jan 21 07:17:10 crc kubenswrapper[4893]: I0121 07:17:10.768274 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/caef5d13-59d3-4ca2-b6c9-77b9616a91c8-swiftconf\") pod \"swift-ring-rebalance-25ctr\" (UID: \"caef5d13-59d3-4ca2-b6c9-77b9616a91c8\") " pod="openstack/swift-ring-rebalance-25ctr" Jan 21 07:17:10 crc kubenswrapper[4893]: I0121 07:17:10.770500 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/caef5d13-59d3-4ca2-b6c9-77b9616a91c8-dispersionconf\") pod \"swift-ring-rebalance-25ctr\" (UID: \"caef5d13-59d3-4ca2-b6c9-77b9616a91c8\") " pod="openstack/swift-ring-rebalance-25ctr" Jan 21 07:17:10 crc kubenswrapper[4893]: I0121 07:17:10.777475 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caef5d13-59d3-4ca2-b6c9-77b9616a91c8-combined-ca-bundle\") pod \"swift-ring-rebalance-25ctr\" (UID: \"caef5d13-59d3-4ca2-b6c9-77b9616a91c8\") " pod="openstack/swift-ring-rebalance-25ctr" Jan 21 07:17:10 crc kubenswrapper[4893]: I0121 07:17:10.778750 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t525s\" (UniqueName: \"kubernetes.io/projected/caef5d13-59d3-4ca2-b6c9-77b9616a91c8-kube-api-access-t525s\") pod \"swift-ring-rebalance-25ctr\" (UID: \"caef5d13-59d3-4ca2-b6c9-77b9616a91c8\") " pod="openstack/swift-ring-rebalance-25ctr" Jan 21 07:17:10 crc kubenswrapper[4893]: I0121 07:17:10.889426 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-25ctr" Jan 21 07:17:11 crc kubenswrapper[4893]: I0121 07:17:11.183200 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dfvzw" event={"ID":"80680178-a1d2-4135-8949-881dc7ac92ea","Type":"ContainerStarted","Data":"1e8fba93ba68503252ec9f557809dd4ae5415e79adbfc9b32997fb9b75ac0b79"} Jan 21 07:17:11 crc kubenswrapper[4893]: I0121 07:17:11.184634 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-dfvzw" Jan 21 07:17:11 crc kubenswrapper[4893]: I0121 07:17:11.207344 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-dfvzw" podStartSLOduration=2.723635793 podStartE2EDuration="52.20732309s" podCreationTimestamp="2026-01-21 07:16:19 +0000 UTC" firstStartedPulling="2026-01-21 07:16:20.890743199 +0000 UTC m=+1322.121089101" lastFinishedPulling="2026-01-21 07:17:10.374430496 +0000 UTC m=+1371.604776398" observedRunningTime="2026-01-21 07:17:11.202250383 +0000 UTC m=+1372.432596305" watchObservedRunningTime="2026-01-21 07:17:11.20732309 +0000 UTC m=+1372.437668992" Jan 21 07:17:11 crc kubenswrapper[4893]: I0121 07:17:11.820014 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-98v6n"] Jan 21 07:17:11 crc kubenswrapper[4893]: I0121 07:17:11.822098 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-98v6n" Jan 21 07:17:11 crc kubenswrapper[4893]: I0121 07:17:11.829607 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 21 07:17:11 crc kubenswrapper[4893]: I0121 07:17:11.831253 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-98v6n"] Jan 21 07:17:11 crc kubenswrapper[4893]: I0121 07:17:11.854354 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-25ctr"] Jan 21 07:17:11 crc kubenswrapper[4893]: W0121 07:17:11.856095 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcaef5d13_59d3_4ca2_b6c9_77b9616a91c8.slice/crio-2ebd37e33a5fa9148e066fe24ab6c7ed296b436aa52cbdb8354700e0fc1da7a2 WatchSource:0}: Error finding container 2ebd37e33a5fa9148e066fe24ab6c7ed296b436aa52cbdb8354700e0fc1da7a2: Status 404 returned error can't find the container with id 2ebd37e33a5fa9148e066fe24ab6c7ed296b436aa52cbdb8354700e0fc1da7a2 Jan 21 07:17:11 crc kubenswrapper[4893]: I0121 07:17:11.978521 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4vb2\" (UniqueName: \"kubernetes.io/projected/aad4df2c-372c-4aa7-9676-9e3cb7bff64d-kube-api-access-g4vb2\") pod \"root-account-create-update-98v6n\" (UID: \"aad4df2c-372c-4aa7-9676-9e3cb7bff64d\") " pod="openstack/root-account-create-update-98v6n" Jan 21 07:17:11 crc kubenswrapper[4893]: I0121 07:17:11.978936 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aad4df2c-372c-4aa7-9676-9e3cb7bff64d-operator-scripts\") pod \"root-account-create-update-98v6n\" (UID: \"aad4df2c-372c-4aa7-9676-9e3cb7bff64d\") " pod="openstack/root-account-create-update-98v6n" Jan 21 07:17:12 crc kubenswrapper[4893]: I0121 07:17:12.081407 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4vb2\" (UniqueName: \"kubernetes.io/projected/aad4df2c-372c-4aa7-9676-9e3cb7bff64d-kube-api-access-g4vb2\") pod \"root-account-create-update-98v6n\" (UID: \"aad4df2c-372c-4aa7-9676-9e3cb7bff64d\") " pod="openstack/root-account-create-update-98v6n" Jan 21 07:17:12 crc kubenswrapper[4893]: I0121 07:17:12.081602 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aad4df2c-372c-4aa7-9676-9e3cb7bff64d-operator-scripts\") pod \"root-account-create-update-98v6n\" (UID: \"aad4df2c-372c-4aa7-9676-9e3cb7bff64d\") " pod="openstack/root-account-create-update-98v6n" Jan 21 07:17:12 crc kubenswrapper[4893]: I0121 07:17:12.082502 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aad4df2c-372c-4aa7-9676-9e3cb7bff64d-operator-scripts\") pod \"root-account-create-update-98v6n\" (UID: \"aad4df2c-372c-4aa7-9676-9e3cb7bff64d\") " pod="openstack/root-account-create-update-98v6n" Jan 21 07:17:12 crc kubenswrapper[4893]: I0121 07:17:12.124050 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4vb2\" (UniqueName: \"kubernetes.io/projected/aad4df2c-372c-4aa7-9676-9e3cb7bff64d-kube-api-access-g4vb2\") pod \"root-account-create-update-98v6n\" (UID: \"aad4df2c-372c-4aa7-9676-9e3cb7bff64d\") " 
pod="openstack/root-account-create-update-98v6n" Jan 21 07:17:12 crc kubenswrapper[4893]: I0121 07:17:12.140714 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-98v6n" Jan 21 07:17:12 crc kubenswrapper[4893]: I0121 07:17:12.163036 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-586b989cdc-5x46j" Jan 21 07:17:12 crc kubenswrapper[4893]: I0121 07:17:12.199461 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 21 07:17:12 crc kubenswrapper[4893]: I0121 07:17:12.200126 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 21 07:17:12 crc kubenswrapper[4893]: I0121 07:17:12.214534 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"299c3f15-e0c0-4017-ac39-e3a2f0764928","Type":"ContainerStarted","Data":"6579c89c3316b965374b9cae89561485e46eec1ac63f6fbfc11457548e85a927"} Jan 21 07:17:12 crc kubenswrapper[4893]: I0121 07:17:12.215060 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 21 07:17:12 crc kubenswrapper[4893]: I0121 07:17:12.217389 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-25ctr" event={"ID":"caef5d13-59d3-4ca2-b6c9-77b9616a91c8","Type":"ContainerStarted","Data":"2ebd37e33a5fa9148e066fe24ab6c7ed296b436aa52cbdb8354700e0fc1da7a2"} Jan 21 07:17:12 crc kubenswrapper[4893]: I0121 07:17:12.234776 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.8920586959999999 podStartE2EDuration="57.234736747s" podCreationTimestamp="2026-01-21 07:16:15 +0000 UTC" firstStartedPulling="2026-01-21 07:16:16.052540152 +0000 UTC m=+1317.282886054" lastFinishedPulling="2026-01-21 07:17:11.395218203 +0000 UTC m=+1372.625564105" observedRunningTime="2026-01-21 07:17:12.230410522 +0000 UTC m=+1373.460756454" watchObservedRunningTime="2026-01-21 07:17:12.234736747 +0000 UTC m=+1373.465082649" Jan 21 07:17:12 crc kubenswrapper[4893]: I0121 07:17:12.326835 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 21 07:17:12 crc kubenswrapper[4893]: I0121 07:17:12.921281 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-98v6n"] Jan 21 07:17:13 crc kubenswrapper[4893]: I0121 07:17:13.234972 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-98v6n" event={"ID":"aad4df2c-372c-4aa7-9676-9e3cb7bff64d","Type":"ContainerStarted","Data":"f4d654e94f24ab9bf45d470f0983ee4b3b63d8394960bf3c3b6ac0999fa9107d"} Jan 21 07:17:13 crc kubenswrapper[4893]: I0121 07:17:13.235053 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-98v6n" event={"ID":"aad4df2c-372c-4aa7-9676-9e3cb7bff64d","Type":"ContainerStarted","Data":"dce1e6fcb2234da325ef638592daa9fed1809a3e7eba272f0101dac5bb3b0f90"} Jan 21 07:17:13 crc kubenswrapper[4893]: I0121 07:17:13.255315 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-98v6n" podStartSLOduration=2.255290377 podStartE2EDuration="2.255290377s" podCreationTimestamp="2026-01-21 07:17:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-01-21 07:17:13.247426309 +0000 UTC m=+1374.477772211" watchObservedRunningTime="2026-01-21 07:17:13.255290377 +0000 UTC m=+1374.485636279" Jan 21 07:17:13 crc kubenswrapper[4893]: I0121 07:17:13.320095 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 21 07:17:13 crc kubenswrapper[4893]: I0121 07:17:13.471380 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-md8wl"] Jan 21 07:17:13 crc kubenswrapper[4893]: I0121 07:17:13.472688 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-md8wl" Jan 21 07:17:13 crc kubenswrapper[4893]: I0121 07:17:13.475220 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ebb91341-a678-4bd6-96a9-8bad10274b2c-operator-scripts\") pod \"placement-db-create-md8wl\" (UID: \"ebb91341-a678-4bd6-96a9-8bad10274b2c\") " pod="openstack/placement-db-create-md8wl" Jan 21 07:17:13 crc kubenswrapper[4893]: I0121 07:17:13.475276 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sqlt\" (UniqueName: \"kubernetes.io/projected/ebb91341-a678-4bd6-96a9-8bad10274b2c-kube-api-access-5sqlt\") pod \"placement-db-create-md8wl\" (UID: \"ebb91341-a678-4bd6-96a9-8bad10274b2c\") " pod="openstack/placement-db-create-md8wl" Jan 21 07:17:13 crc kubenswrapper[4893]: I0121 07:17:13.481789 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-md8wl"] Jan 21 07:17:13 crc kubenswrapper[4893]: I0121 07:17:13.577170 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ebb91341-a678-4bd6-96a9-8bad10274b2c-operator-scripts\") pod \"placement-db-create-md8wl\" (UID: \"ebb91341-a678-4bd6-96a9-8bad10274b2c\") " pod="openstack/placement-db-create-md8wl" Jan 21 07:17:13 crc kubenswrapper[4893]: I0121 07:17:13.577212 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sqlt\" (UniqueName: \"kubernetes.io/projected/ebb91341-a678-4bd6-96a9-8bad10274b2c-kube-api-access-5sqlt\") pod \"placement-db-create-md8wl\" (UID: \"ebb91341-a678-4bd6-96a9-8bad10274b2c\") " pod="openstack/placement-db-create-md8wl" Jan 21 07:17:13 crc kubenswrapper[4893]: I0121 07:17:13.578120 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ebb91341-a678-4bd6-96a9-8bad10274b2c-operator-scripts\") pod \"placement-db-create-md8wl\" (UID: \"ebb91341-a678-4bd6-96a9-8bad10274b2c\") " pod="openstack/placement-db-create-md8wl" Jan 21 07:17:13 crc kubenswrapper[4893]: I0121 07:17:13.606239 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-d3cd-account-create-update-xzcsc"] Jan 21 07:17:13 crc kubenswrapper[4893]: I0121 07:17:13.607706 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-d3cd-account-create-update-xzcsc" Jan 21 07:17:13 crc kubenswrapper[4893]: I0121 07:17:13.609275 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sqlt\" (UniqueName: \"kubernetes.io/projected/ebb91341-a678-4bd6-96a9-8bad10274b2c-kube-api-access-5sqlt\") pod \"placement-db-create-md8wl\" (UID: \"ebb91341-a678-4bd6-96a9-8bad10274b2c\") " pod="openstack/placement-db-create-md8wl" Jan 21 07:17:13 crc kubenswrapper[4893]: I0121 07:17:13.611715 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 21 07:17:13 crc kubenswrapper[4893]: I0121 07:17:13.617916 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d3cd-account-create-update-xzcsc"] Jan 21 07:17:13 crc kubenswrapper[4893]: I0121 07:17:13.679204 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zljfh\" (UniqueName: \"kubernetes.io/projected/17944241-8ed8-4c71-a537-969b68cd694c-kube-api-access-zljfh\") pod \"placement-d3cd-account-create-update-xzcsc\" (UID: \"17944241-8ed8-4c71-a537-969b68cd694c\") " pod="openstack/placement-d3cd-account-create-update-xzcsc" Jan 21 07:17:13 crc kubenswrapper[4893]: I0121 07:17:13.679294 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17944241-8ed8-4c71-a537-969b68cd694c-operator-scripts\") pod \"placement-d3cd-account-create-update-xzcsc\" (UID: \"17944241-8ed8-4c71-a537-969b68cd694c\") " pod="openstack/placement-d3cd-account-create-update-xzcsc" Jan 21 07:17:13 crc kubenswrapper[4893]: I0121 07:17:13.782165 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17944241-8ed8-4c71-a537-969b68cd694c-operator-scripts\") pod \"placement-d3cd-account-create-update-xzcsc\" (UID: \"17944241-8ed8-4c71-a537-969b68cd694c\") " pod="openstack/placement-d3cd-account-create-update-xzcsc" Jan 21 07:17:13 crc kubenswrapper[4893]: I0121 07:17:13.782318 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zljfh\" (UniqueName: \"kubernetes.io/projected/17944241-8ed8-4c71-a537-969b68cd694c-kube-api-access-zljfh\") pod \"placement-d3cd-account-create-update-xzcsc\" (UID: \"17944241-8ed8-4c71-a537-969b68cd694c\") " pod="openstack/placement-d3cd-account-create-update-xzcsc" Jan 21 07:17:13 crc kubenswrapper[4893]: I0121 07:17:13.783035 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17944241-8ed8-4c71-a537-969b68cd694c-operator-scripts\") pod \"placement-d3cd-account-create-update-xzcsc\" (UID: \"17944241-8ed8-4c71-a537-969b68cd694c\") " pod="openstack/placement-d3cd-account-create-update-xzcsc" Jan 21 07:17:13 crc kubenswrapper[4893]: I0121 07:17:13.783456 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-24ht6"] Jan 21 07:17:13 crc kubenswrapper[4893]: I0121 07:17:13.801269 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-md8wl" Jan 21 07:17:13 crc kubenswrapper[4893]: I0121 07:17:13.801451 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-24ht6" Jan 21 07:17:13 crc kubenswrapper[4893]: I0121 07:17:13.803639 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zljfh\" (UniqueName: \"kubernetes.io/projected/17944241-8ed8-4c71-a537-969b68cd694c-kube-api-access-zljfh\") pod \"placement-d3cd-account-create-update-xzcsc\" (UID: \"17944241-8ed8-4c71-a537-969b68cd694c\") " pod="openstack/placement-d3cd-account-create-update-xzcsc" Jan 21 07:17:13 crc kubenswrapper[4893]: I0121 07:17:13.814799 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-24ht6"] Jan 21 07:17:13 crc kubenswrapper[4893]: I0121 07:17:13.884069 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ede1b41d-0fd3-4c19-ba5d-bcfee1482f94-operator-scripts\") pod \"glance-db-create-24ht6\" (UID: \"ede1b41d-0fd3-4c19-ba5d-bcfee1482f94\") " pod="openstack/glance-db-create-24ht6" Jan 21 07:17:13 crc kubenswrapper[4893]: I0121 07:17:13.884163 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpkl5\" (UniqueName: \"kubernetes.io/projected/ede1b41d-0fd3-4c19-ba5d-bcfee1482f94-kube-api-access-qpkl5\") pod \"glance-db-create-24ht6\" (UID: \"ede1b41d-0fd3-4c19-ba5d-bcfee1482f94\") " pod="openstack/glance-db-create-24ht6" Jan 21 07:17:13 crc kubenswrapper[4893]: I0121 07:17:13.897633 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-eb0d-account-create-update-twv2l"] Jan 21 07:17:13 crc kubenswrapper[4893]: I0121 07:17:13.899158 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-eb0d-account-create-update-twv2l" Jan 21 07:17:13 crc kubenswrapper[4893]: I0121 07:17:13.906177 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 21 07:17:13 crc kubenswrapper[4893]: I0121 07:17:13.906328 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-eb0d-account-create-update-twv2l"] Jan 21 07:17:13 crc kubenswrapper[4893]: I0121 07:17:13.967801 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-d3cd-account-create-update-xzcsc" Jan 21 07:17:13 crc kubenswrapper[4893]: I0121 07:17:13.986626 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ede1b41d-0fd3-4c19-ba5d-bcfee1482f94-operator-scripts\") pod \"glance-db-create-24ht6\" (UID: \"ede1b41d-0fd3-4c19-ba5d-bcfee1482f94\") " pod="openstack/glance-db-create-24ht6" Jan 21 07:17:13 crc kubenswrapper[4893]: I0121 07:17:13.987219 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpkl5\" (UniqueName: \"kubernetes.io/projected/ede1b41d-0fd3-4c19-ba5d-bcfee1482f94-kube-api-access-qpkl5\") pod \"glance-db-create-24ht6\" (UID: \"ede1b41d-0fd3-4c19-ba5d-bcfee1482f94\") " pod="openstack/glance-db-create-24ht6" Jan 21 07:17:13 crc kubenswrapper[4893]: I0121 07:17:13.987963 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ede1b41d-0fd3-4c19-ba5d-bcfee1482f94-operator-scripts\") pod \"glance-db-create-24ht6\" (UID: \"ede1b41d-0fd3-4c19-ba5d-bcfee1482f94\") " pod="openstack/glance-db-create-24ht6" Jan 21 07:17:14 crc kubenswrapper[4893]: I0121 07:17:14.005597 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpkl5\" (UniqueName: \"kubernetes.io/projected/ede1b41d-0fd3-4c19-ba5d-bcfee1482f94-kube-api-access-qpkl5\") pod \"glance-db-create-24ht6\" (UID: \"ede1b41d-0fd3-4c19-ba5d-bcfee1482f94\") " pod="openstack/glance-db-create-24ht6" Jan 21 07:17:14 crc kubenswrapper[4893]: I0121 07:17:14.089329 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ef43a6a-4efc-4bcd-820b-1eade6c9b094-operator-scripts\") pod \"glance-eb0d-account-create-update-twv2l\" (UID: \"1ef43a6a-4efc-4bcd-820b-1eade6c9b094\") " pod="openstack/glance-eb0d-account-create-update-twv2l" Jan 21 07:17:14 crc kubenswrapper[4893]: I0121 07:17:14.089556 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-475tc\" (UniqueName: \"kubernetes.io/projected/1ef43a6a-4efc-4bcd-820b-1eade6c9b094-kube-api-access-475tc\") pod \"glance-eb0d-account-create-update-twv2l\" (UID: \"1ef43a6a-4efc-4bcd-820b-1eade6c9b094\") " pod="openstack/glance-eb0d-account-create-update-twv2l" Jan 21 07:17:14 crc kubenswrapper[4893]: I0121 07:17:14.155336 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-24ht6" Jan 21 07:17:14 crc kubenswrapper[4893]: I0121 07:17:14.190314 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-475tc\" (UniqueName: \"kubernetes.io/projected/1ef43a6a-4efc-4bcd-820b-1eade6c9b094-kube-api-access-475tc\") pod \"glance-eb0d-account-create-update-twv2l\" (UID: \"1ef43a6a-4efc-4bcd-820b-1eade6c9b094\") " pod="openstack/glance-eb0d-account-create-update-twv2l" Jan 21 07:17:14 crc kubenswrapper[4893]: I0121 07:17:14.190391 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ef43a6a-4efc-4bcd-820b-1eade6c9b094-operator-scripts\") pod \"glance-eb0d-account-create-update-twv2l\" (UID: \"1ef43a6a-4efc-4bcd-820b-1eade6c9b094\") " pod="openstack/glance-eb0d-account-create-update-twv2l" Jan 21 07:17:14 crc kubenswrapper[4893]: I0121 07:17:14.191115 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ef43a6a-4efc-4bcd-820b-1eade6c9b094-operator-scripts\") pod \"glance-eb0d-account-create-update-twv2l\" (UID: \"1ef43a6a-4efc-4bcd-820b-1eade6c9b094\") " pod="openstack/glance-eb0d-account-create-update-twv2l" Jan 21 07:17:14 crc kubenswrapper[4893]: I0121 07:17:14.220005 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-475tc\" (UniqueName: \"kubernetes.io/projected/1ef43a6a-4efc-4bcd-820b-1eade6c9b094-kube-api-access-475tc\") pod \"glance-eb0d-account-create-update-twv2l\" (UID: \"1ef43a6a-4efc-4bcd-820b-1eade6c9b094\") " pod="openstack/glance-eb0d-account-create-update-twv2l" Jan 21 07:17:14 crc kubenswrapper[4893]: I0121 07:17:14.220456 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-eb0d-account-create-update-twv2l" Jan 21 07:17:14 crc kubenswrapper[4893]: I0121 07:17:14.248147 4893 generic.go:334] "Generic (PLEG): container finished" podID="aad4df2c-372c-4aa7-9676-9e3cb7bff64d" containerID="f4d654e94f24ab9bf45d470f0983ee4b3b63d8394960bf3c3b6ac0999fa9107d" exitCode=0 Jan 21 07:17:14 crc kubenswrapper[4893]: I0121 07:17:14.248226 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-98v6n" event={"ID":"aad4df2c-372c-4aa7-9676-9e3cb7bff64d","Type":"ContainerDied","Data":"f4d654e94f24ab9bf45d470f0983ee4b3b63d8394960bf3c3b6ac0999fa9107d"} Jan 21 07:17:14 crc kubenswrapper[4893]: I0121 07:17:14.597472 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b-etc-swift\") pod \"swift-storage-0\" (UID: \"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b\") " pod="openstack/swift-storage-0" Jan 21 07:17:14 crc kubenswrapper[4893]: E0121 07:17:14.597736 4893 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 07:17:14 crc kubenswrapper[4893]: E0121 07:17:14.597770 4893 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 21 07:17:14 crc kubenswrapper[4893]: E0121 07:17:14.597840 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b-etc-swift podName:1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b nodeName:}" failed. 
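The operation entry above finishes just below with durationBeforeRetry now at 8s. Together with the earlier attempts this gives a cadence of 2s, 4s, 8s: the kubelet doubles the delay after each failed MountVolume.SetUp of the same volume. A small sketch of that doubling; the cap is an assumption for illustration, not a value visible in this log:

```go
package main

import (
	"fmt"
	"time"
)

// Sketch of the retry cadence visible in this log: the kubelet doubles
// durationBeforeRetry after each failed MountVolume.SetUp (2s, 4s, 8s).
// The maxDelay cap is an assumed value for illustration only.
func main() {
	delay := 2 * time.Second
	const maxDelay = 2 * time.Minute // assumption, not from the log
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("attempt %d failed; no retries permitted for %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```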
No retries permitted until 2026-01-21 07:17:22.597822382 +0000 UTC m=+1383.828168284 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b-etc-swift") pod "swift-storage-0" (UID: "1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b") : configmap "swift-ring-files" not found Jan 21 07:17:15 crc kubenswrapper[4893]: I0121 07:17:15.852894 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67fdf7998c-6rcdb" Jan 21 07:17:15 crc kubenswrapper[4893]: I0121 07:17:15.911701 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-5x46j"] Jan 21 07:17:15 crc kubenswrapper[4893]: I0121 07:17:15.911991 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-586b989cdc-5x46j" podUID="8a744009-f5d7-4d59-bb5b-1668a715e9d0" containerName="dnsmasq-dns" containerID="cri-o://65324b6e03f7d7166555441ac9620268b38765894af89d5e281b6859ee3047ca" gracePeriod=10 Jan 21 07:17:17 crc kubenswrapper[4893]: I0121 07:17:17.382343 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-586b989cdc-5x46j" podUID="8a744009-f5d7-4d59-bb5b-1668a715e9d0" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.114:5353: connect: connection refused" Jan 21 07:17:18 crc kubenswrapper[4893]: I0121 07:17:18.479597 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-98v6n" event={"ID":"aad4df2c-372c-4aa7-9676-9e3cb7bff64d","Type":"ContainerDied","Data":"dce1e6fcb2234da325ef638592daa9fed1809a3e7eba272f0101dac5bb3b0f90"} Jan 21 07:17:18 crc kubenswrapper[4893]: I0121 07:17:18.480148 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dce1e6fcb2234da325ef638592daa9fed1809a3e7eba272f0101dac5bb3b0f90" Jan 21 07:17:18 crc kubenswrapper[4893]: I0121 07:17:18.483940 4893 generic.go:334] "Generic (PLEG): container finished" podID="8a744009-f5d7-4d59-bb5b-1668a715e9d0" containerID="65324b6e03f7d7166555441ac9620268b38765894af89d5e281b6859ee3047ca" exitCode=0 Jan 21 07:17:18 crc kubenswrapper[4893]: I0121 07:17:18.483987 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-5x46j" event={"ID":"8a744009-f5d7-4d59-bb5b-1668a715e9d0","Type":"ContainerDied","Data":"65324b6e03f7d7166555441ac9620268b38765894af89d5e281b6859ee3047ca"} Jan 21 07:17:18 crc kubenswrapper[4893]: I0121 07:17:18.668295 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-98v6n" Jan 21 07:17:18 crc kubenswrapper[4893]: I0121 07:17:18.749832 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-586b989cdc-5x46j" Jan 21 07:17:18 crc kubenswrapper[4893]: I0121 07:17:18.759912 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqlxj\" (UniqueName: \"kubernetes.io/projected/8a744009-f5d7-4d59-bb5b-1668a715e9d0-kube-api-access-mqlxj\") pod \"8a744009-f5d7-4d59-bb5b-1668a715e9d0\" (UID: \"8a744009-f5d7-4d59-bb5b-1668a715e9d0\") " Jan 21 07:17:18 crc kubenswrapper[4893]: I0121 07:17:18.760008 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4vb2\" (UniqueName: \"kubernetes.io/projected/aad4df2c-372c-4aa7-9676-9e3cb7bff64d-kube-api-access-g4vb2\") pod \"aad4df2c-372c-4aa7-9676-9e3cb7bff64d\" (UID: \"aad4df2c-372c-4aa7-9676-9e3cb7bff64d\") " Jan 21 07:17:18 crc kubenswrapper[4893]: I0121 07:17:18.760054 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a744009-f5d7-4d59-bb5b-1668a715e9d0-config\") pod \"8a744009-f5d7-4d59-bb5b-1668a715e9d0\" (UID: \"8a744009-f5d7-4d59-bb5b-1668a715e9d0\") " Jan 21 07:17:18 crc kubenswrapper[4893]: I0121 07:17:18.760089 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a744009-f5d7-4d59-bb5b-1668a715e9d0-dns-svc\") pod \"8a744009-f5d7-4d59-bb5b-1668a715e9d0\" (UID: \"8a744009-f5d7-4d59-bb5b-1668a715e9d0\") " Jan 21 07:17:18 crc kubenswrapper[4893]: I0121 07:17:18.760157 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aad4df2c-372c-4aa7-9676-9e3cb7bff64d-operator-scripts\") pod \"aad4df2c-372c-4aa7-9676-9e3cb7bff64d\" (UID: \"aad4df2c-372c-4aa7-9676-9e3cb7bff64d\") " Jan 21 07:17:18 crc kubenswrapper[4893]: I0121 07:17:18.760183 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a744009-f5d7-4d59-bb5b-1668a715e9d0-ovsdbserver-sb\") pod \"8a744009-f5d7-4d59-bb5b-1668a715e9d0\" (UID: \"8a744009-f5d7-4d59-bb5b-1668a715e9d0\") " Jan 21 07:17:18 crc kubenswrapper[4893]: I0121 07:17:18.760205 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a744009-f5d7-4d59-bb5b-1668a715e9d0-ovsdbserver-nb\") pod \"8a744009-f5d7-4d59-bb5b-1668a715e9d0\" (UID: \"8a744009-f5d7-4d59-bb5b-1668a715e9d0\") " Jan 21 07:17:18 crc kubenswrapper[4893]: I0121 07:17:18.762213 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aad4df2c-372c-4aa7-9676-9e3cb7bff64d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "aad4df2c-372c-4aa7-9676-9e3cb7bff64d" (UID: "aad4df2c-372c-4aa7-9676-9e3cb7bff64d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:17:18 crc kubenswrapper[4893]: I0121 07:17:18.765931 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aad4df2c-372c-4aa7-9676-9e3cb7bff64d-kube-api-access-g4vb2" (OuterVolumeSpecName: "kube-api-access-g4vb2") pod "aad4df2c-372c-4aa7-9676-9e3cb7bff64d" (UID: "aad4df2c-372c-4aa7-9676-9e3cb7bff64d"). InnerVolumeSpecName "kube-api-access-g4vb2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:17:18 crc kubenswrapper[4893]: I0121 07:17:18.773447 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a744009-f5d7-4d59-bb5b-1668a715e9d0-kube-api-access-mqlxj" (OuterVolumeSpecName: "kube-api-access-mqlxj") pod "8a744009-f5d7-4d59-bb5b-1668a715e9d0" (UID: "8a744009-f5d7-4d59-bb5b-1668a715e9d0"). InnerVolumeSpecName "kube-api-access-mqlxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:17:18 crc kubenswrapper[4893]: I0121 07:17:18.834851 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a744009-f5d7-4d59-bb5b-1668a715e9d0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8a744009-f5d7-4d59-bb5b-1668a715e9d0" (UID: "8a744009-f5d7-4d59-bb5b-1668a715e9d0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:17:18 crc kubenswrapper[4893]: I0121 07:17:18.835877 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a744009-f5d7-4d59-bb5b-1668a715e9d0-config" (OuterVolumeSpecName: "config") pod "8a744009-f5d7-4d59-bb5b-1668a715e9d0" (UID: "8a744009-f5d7-4d59-bb5b-1668a715e9d0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:17:18 crc kubenswrapper[4893]: I0121 07:17:18.836611 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a744009-f5d7-4d59-bb5b-1668a715e9d0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8a744009-f5d7-4d59-bb5b-1668a715e9d0" (UID: "8a744009-f5d7-4d59-bb5b-1668a715e9d0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:17:18 crc kubenswrapper[4893]: I0121 07:17:18.837746 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a744009-f5d7-4d59-bb5b-1668a715e9d0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8a744009-f5d7-4d59-bb5b-1668a715e9d0" (UID: "8a744009-f5d7-4d59-bb5b-1668a715e9d0"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:17:18 crc kubenswrapper[4893]: I0121 07:17:18.862836 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4vb2\" (UniqueName: \"kubernetes.io/projected/aad4df2c-372c-4aa7-9676-9e3cb7bff64d-kube-api-access-g4vb2\") on node \"crc\" DevicePath \"\"" Jan 21 07:17:18 crc kubenswrapper[4893]: I0121 07:17:18.862879 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a744009-f5d7-4d59-bb5b-1668a715e9d0-config\") on node \"crc\" DevicePath \"\"" Jan 21 07:17:18 crc kubenswrapper[4893]: I0121 07:17:18.862893 4893 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a744009-f5d7-4d59-bb5b-1668a715e9d0-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 07:17:18 crc kubenswrapper[4893]: I0121 07:17:18.862904 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aad4df2c-372c-4aa7-9676-9e3cb7bff64d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:17:18 crc kubenswrapper[4893]: I0121 07:17:18.862915 4893 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a744009-f5d7-4d59-bb5b-1668a715e9d0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 07:17:18 crc kubenswrapper[4893]: I0121 07:17:18.862926 4893 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a744009-f5d7-4d59-bb5b-1668a715e9d0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 07:17:18 crc kubenswrapper[4893]: I0121 07:17:18.862937 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqlxj\" (UniqueName: \"kubernetes.io/projected/8a744009-f5d7-4d59-bb5b-1668a715e9d0-kube-api-access-mqlxj\") on node \"crc\" DevicePath \"\"" Jan 21 07:17:19 crc kubenswrapper[4893]: I0121 07:17:19.097852 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-md8wl"] Jan 21 07:17:19 crc kubenswrapper[4893]: I0121 07:17:19.105392 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-eb0d-account-create-update-twv2l"] Jan 21 07:17:19 crc kubenswrapper[4893]: W0121 07:17:19.107974 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ef43a6a_4efc_4bcd_820b_1eade6c9b094.slice/crio-1eb304f9a488cb2895fb15f5a0f375ed32b7a0bdfb171c721e05d162d7626e64 WatchSource:0}: Error finding container 1eb304f9a488cb2895fb15f5a0f375ed32b7a0bdfb171c721e05d162d7626e64: Status 404 returned error can't find the container with id 1eb304f9a488cb2895fb15f5a0f375ed32b7a0bdfb171c721e05d162d7626e64 Jan 21 07:17:19 crc kubenswrapper[4893]: I0121 07:17:19.131604 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 21 07:17:19 crc kubenswrapper[4893]: I0121 07:17:19.233484 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d3cd-account-create-update-xzcsc"] Jan 21 07:17:19 crc kubenswrapper[4893]: W0121 07:17:19.235111 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17944241_8ed8_4c71_a537_969b68cd694c.slice/crio-8d024d486b12f84f40731a32746fb5d88f2d6b2ad20582b33d7a751950494ba5 WatchSource:0}: Error finding container 
8d024d486b12f84f40731a32746fb5d88f2d6b2ad20582b33d7a751950494ba5: Status 404 returned error can't find the container with id 8d024d486b12f84f40731a32746fb5d88f2d6b2ad20582b33d7a751950494ba5 Jan 21 07:17:19 crc kubenswrapper[4893]: I0121 07:17:19.247813 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-24ht6"] Jan 21 07:17:19 crc kubenswrapper[4893]: W0121 07:17:19.248313 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podede1b41d_0fd3_4c19_ba5d_bcfee1482f94.slice/crio-d878c133ae5b920998a14f6aae009b58270b476095179fc1a01b371cb4ab3ebb WatchSource:0}: Error finding container d878c133ae5b920998a14f6aae009b58270b476095179fc1a01b371cb4ab3ebb: Status 404 returned error can't find the container with id d878c133ae5b920998a14f6aae009b58270b476095179fc1a01b371cb4ab3ebb Jan 21 07:17:19 crc kubenswrapper[4893]: I0121 07:17:19.504183 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-24ht6" event={"ID":"ede1b41d-0fd3-4c19-ba5d-bcfee1482f94","Type":"ContainerStarted","Data":"134c50e743d45e05c1508cfc12adfa144d43dde145e51fe7555d549c1ecc51ac"} Jan 21 07:17:19 crc kubenswrapper[4893]: I0121 07:17:19.504524 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-24ht6" event={"ID":"ede1b41d-0fd3-4c19-ba5d-bcfee1482f94","Type":"ContainerStarted","Data":"d878c133ae5b920998a14f6aae009b58270b476095179fc1a01b371cb4ab3ebb"} Jan 21 07:17:19 crc kubenswrapper[4893]: I0121 07:17:19.510198 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-eb0d-account-create-update-twv2l" event={"ID":"1ef43a6a-4efc-4bcd-820b-1eade6c9b094","Type":"ContainerStarted","Data":"c4fe38b68118c34de33d63ec432bfb97337a100a6921fafdf5f9636ca503213d"} Jan 21 07:17:19 crc kubenswrapper[4893]: I0121 07:17:19.510257 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-eb0d-account-create-update-twv2l" event={"ID":"1ef43a6a-4efc-4bcd-820b-1eade6c9b094","Type":"ContainerStarted","Data":"1eb304f9a488cb2895fb15f5a0f375ed32b7a0bdfb171c721e05d162d7626e64"} Jan 21 07:17:19 crc kubenswrapper[4893]: I0121 07:17:19.511794 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-md8wl" event={"ID":"ebb91341-a678-4bd6-96a9-8bad10274b2c","Type":"ContainerStarted","Data":"df0b7f029214d1da2427ff39dae72339bfb4533b9b1d5f03b3c2403dfaf05ae1"} Jan 21 07:17:19 crc kubenswrapper[4893]: I0121 07:17:19.511852 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-md8wl" event={"ID":"ebb91341-a678-4bd6-96a9-8bad10274b2c","Type":"ContainerStarted","Data":"87898e40b401a3596e66b7f9027d67036723351bb6708d360af08852c8ce96db"} Jan 21 07:17:19 crc kubenswrapper[4893]: I0121 07:17:19.515790 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d3cd-account-create-update-xzcsc" event={"ID":"17944241-8ed8-4c71-a537-969b68cd694c","Type":"ContainerStarted","Data":"e3cbe8a444dd54c0fcc9c3ee21ff05b32fc594b99ce96416c8c8c3e7e7c82a61"} Jan 21 07:17:19 crc kubenswrapper[4893]: I0121 07:17:19.515864 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d3cd-account-create-update-xzcsc" event={"ID":"17944241-8ed8-4c71-a537-969b68cd694c","Type":"ContainerStarted","Data":"8d024d486b12f84f40731a32746fb5d88f2d6b2ad20582b33d7a751950494ba5"} Jan 21 07:17:19 crc kubenswrapper[4893]: I0121 07:17:19.517420 4893 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/swift-ring-rebalance-25ctr" event={"ID":"caef5d13-59d3-4ca2-b6c9-77b9616a91c8","Type":"ContainerStarted","Data":"6e632adf6d95b5e1192d3f90676fdfd289e0608633172e8fb3a756deba8c6a46"} Jan 21 07:17:19 crc kubenswrapper[4893]: I0121 07:17:19.525944 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-98v6n" Jan 21 07:17:19 crc kubenswrapper[4893]: I0121 07:17:19.526581 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-5x46j" event={"ID":"8a744009-f5d7-4d59-bb5b-1668a715e9d0","Type":"ContainerDied","Data":"5c2f62b9131b04c0e9d4b812617118c205c89e10e017edcc88dc7b7acc5a4a7d"} Jan 21 07:17:19 crc kubenswrapper[4893]: I0121 07:17:19.526624 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586b989cdc-5x46j" Jan 21 07:17:19 crc kubenswrapper[4893]: I0121 07:17:19.526647 4893 scope.go:117] "RemoveContainer" containerID="65324b6e03f7d7166555441ac9620268b38765894af89d5e281b6859ee3047ca" Jan 21 07:17:19 crc kubenswrapper[4893]: I0121 07:17:19.529475 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-24ht6" podStartSLOduration=6.529449339 podStartE2EDuration="6.529449339s" podCreationTimestamp="2026-01-21 07:17:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:17:19.523028113 +0000 UTC m=+1380.753374025" watchObservedRunningTime="2026-01-21 07:17:19.529449339 +0000 UTC m=+1380.759795241" Jan 21 07:17:19 crc kubenswrapper[4893]: I0121 07:17:19.544951 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-25ctr" podStartSLOduration=2.858253587 podStartE2EDuration="9.544926607s" podCreationTimestamp="2026-01-21 07:17:10 +0000 UTC" firstStartedPulling="2026-01-21 07:17:11.858644825 +0000 UTC m=+1373.088990717" lastFinishedPulling="2026-01-21 07:17:18.545317825 +0000 UTC m=+1379.775663737" observedRunningTime="2026-01-21 07:17:19.541287062 +0000 UTC m=+1380.771632974" watchObservedRunningTime="2026-01-21 07:17:19.544926607 +0000 UTC m=+1380.775272509" Jan 21 07:17:19 crc kubenswrapper[4893]: I0121 07:17:19.565350 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-eb0d-account-create-update-twv2l" podStartSLOduration=6.565320638 podStartE2EDuration="6.565320638s" podCreationTimestamp="2026-01-21 07:17:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:17:19.55399746 +0000 UTC m=+1380.784343372" watchObservedRunningTime="2026-01-21 07:17:19.565320638 +0000 UTC m=+1380.795666540" Jan 21 07:17:19 crc kubenswrapper[4893]: I0121 07:17:19.586936 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-d3cd-account-create-update-xzcsc" podStartSLOduration=6.586905143 podStartE2EDuration="6.586905143s" podCreationTimestamp="2026-01-21 07:17:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:17:19.571386133 +0000 UTC m=+1380.801732035" watchObservedRunningTime="2026-01-21 07:17:19.586905143 +0000 UTC m=+1380.817251045" Jan 21 07:17:19 crc kubenswrapper[4893]: I0121 07:17:19.596093 4893 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/placement-db-create-md8wl" podStartSLOduration=6.596072048 podStartE2EDuration="6.596072048s" podCreationTimestamp="2026-01-21 07:17:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:17:19.586231213 +0000 UTC m=+1380.816577135" watchObservedRunningTime="2026-01-21 07:17:19.596072048 +0000 UTC m=+1380.826417950" Jan 21 07:17:19 crc kubenswrapper[4893]: I0121 07:17:19.787634 4893 scope.go:117] "RemoveContainer" containerID="af11d584f906af1277ca2b44d46a2809ddd4d0cdb7af9c129067ab76ba53cae3" Jan 21 07:17:19 crc kubenswrapper[4893]: I0121 07:17:19.800128 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-5x46j"] Jan 21 07:17:19 crc kubenswrapper[4893]: I0121 07:17:19.808920 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-5x46j"] Jan 21 07:17:20 crc kubenswrapper[4893]: I0121 07:17:20.535802 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-98v6n"] Jan 21 07:17:20 crc kubenswrapper[4893]: I0121 07:17:20.536378 4893 generic.go:334] "Generic (PLEG): container finished" podID="17944241-8ed8-4c71-a537-969b68cd694c" containerID="e3cbe8a444dd54c0fcc9c3ee21ff05b32fc594b99ce96416c8c8c3e7e7c82a61" exitCode=0 Jan 21 07:17:20 crc kubenswrapper[4893]: I0121 07:17:20.536418 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d3cd-account-create-update-xzcsc" event={"ID":"17944241-8ed8-4c71-a537-969b68cd694c","Type":"ContainerDied","Data":"e3cbe8a444dd54c0fcc9c3ee21ff05b32fc594b99ce96416c8c8c3e7e7c82a61"} Jan 21 07:17:20 crc kubenswrapper[4893]: I0121 07:17:20.539780 4893 generic.go:334] "Generic (PLEG): container finished" podID="ede1b41d-0fd3-4c19-ba5d-bcfee1482f94" containerID="134c50e743d45e05c1508cfc12adfa144d43dde145e51fe7555d549c1ecc51ac" exitCode=0 Jan 21 07:17:20 crc kubenswrapper[4893]: I0121 07:17:20.539856 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-24ht6" event={"ID":"ede1b41d-0fd3-4c19-ba5d-bcfee1482f94","Type":"ContainerDied","Data":"134c50e743d45e05c1508cfc12adfa144d43dde145e51fe7555d549c1ecc51ac"} Jan 21 07:17:20 crc kubenswrapper[4893]: I0121 07:17:20.546156 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-98v6n"] Jan 21 07:17:20 crc kubenswrapper[4893]: I0121 07:17:20.548502 4893 generic.go:334] "Generic (PLEG): container finished" podID="1ef43a6a-4efc-4bcd-820b-1eade6c9b094" containerID="c4fe38b68118c34de33d63ec432bfb97337a100a6921fafdf5f9636ca503213d" exitCode=0 Jan 21 07:17:20 crc kubenswrapper[4893]: I0121 07:17:20.548606 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-eb0d-account-create-update-twv2l" event={"ID":"1ef43a6a-4efc-4bcd-820b-1eade6c9b094","Type":"ContainerDied","Data":"c4fe38b68118c34de33d63ec432bfb97337a100a6921fafdf5f9636ca503213d"} Jan 21 07:17:20 crc kubenswrapper[4893]: I0121 07:17:20.559584 4893 generic.go:334] "Generic (PLEG): container finished" podID="ebb91341-a678-4bd6-96a9-8bad10274b2c" containerID="df0b7f029214d1da2427ff39dae72339bfb4533b9b1d5f03b3c2403dfaf05ae1" exitCode=0 Jan 21 07:17:20 crc kubenswrapper[4893]: I0121 07:17:20.560512 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-md8wl" 
event={"ID":"ebb91341-a678-4bd6-96a9-8bad10274b2c","Type":"ContainerDied","Data":"df0b7f029214d1da2427ff39dae72339bfb4533b9b1d5f03b3c2403dfaf05ae1"} Jan 21 07:17:20 crc kubenswrapper[4893]: I0121 07:17:20.561459 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-nrkrm"] Jan 21 07:17:20 crc kubenswrapper[4893]: E0121 07:17:20.561920 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aad4df2c-372c-4aa7-9676-9e3cb7bff64d" containerName="mariadb-account-create-update" Jan 21 07:17:20 crc kubenswrapper[4893]: I0121 07:17:20.561942 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="aad4df2c-372c-4aa7-9676-9e3cb7bff64d" containerName="mariadb-account-create-update" Jan 21 07:17:20 crc kubenswrapper[4893]: E0121 07:17:20.561976 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a744009-f5d7-4d59-bb5b-1668a715e9d0" containerName="init" Jan 21 07:17:20 crc kubenswrapper[4893]: I0121 07:17:20.561986 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a744009-f5d7-4d59-bb5b-1668a715e9d0" containerName="init" Jan 21 07:17:20 crc kubenswrapper[4893]: E0121 07:17:20.562006 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a744009-f5d7-4d59-bb5b-1668a715e9d0" containerName="dnsmasq-dns" Jan 21 07:17:20 crc kubenswrapper[4893]: I0121 07:17:20.562014 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a744009-f5d7-4d59-bb5b-1668a715e9d0" containerName="dnsmasq-dns" Jan 21 07:17:20 crc kubenswrapper[4893]: I0121 07:17:20.562248 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="aad4df2c-372c-4aa7-9676-9e3cb7bff64d" containerName="mariadb-account-create-update" Jan 21 07:17:20 crc kubenswrapper[4893]: I0121 07:17:20.562282 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a744009-f5d7-4d59-bb5b-1668a715e9d0" containerName="dnsmasq-dns" Jan 21 07:17:20 crc kubenswrapper[4893]: I0121 07:17:20.563078 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-nrkrm" Jan 21 07:17:20 crc kubenswrapper[4893]: I0121 07:17:20.564900 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 21 07:17:20 crc kubenswrapper[4893]: I0121 07:17:20.588115 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-nrkrm"] Jan 21 07:17:20 crc kubenswrapper[4893]: I0121 07:17:20.630155 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/467b78de-39ce-4f70-950d-7fcc3bd70e06-operator-scripts\") pod \"root-account-create-update-nrkrm\" (UID: \"467b78de-39ce-4f70-950d-7fcc3bd70e06\") " pod="openstack/root-account-create-update-nrkrm" Jan 21 07:17:20 crc kubenswrapper[4893]: I0121 07:17:20.630228 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wshhp\" (UniqueName: \"kubernetes.io/projected/467b78de-39ce-4f70-950d-7fcc3bd70e06-kube-api-access-wshhp\") pod \"root-account-create-update-nrkrm\" (UID: \"467b78de-39ce-4f70-950d-7fcc3bd70e06\") " pod="openstack/root-account-create-update-nrkrm" Jan 21 07:17:20 crc kubenswrapper[4893]: I0121 07:17:20.731614 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/467b78de-39ce-4f70-950d-7fcc3bd70e06-operator-scripts\") pod \"root-account-create-update-nrkrm\" (UID: \"467b78de-39ce-4f70-950d-7fcc3bd70e06\") " pod="openstack/root-account-create-update-nrkrm" Jan 21 07:17:20 crc kubenswrapper[4893]: I0121 07:17:20.731663 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wshhp\" (UniqueName: \"kubernetes.io/projected/467b78de-39ce-4f70-950d-7fcc3bd70e06-kube-api-access-wshhp\") pod \"root-account-create-update-nrkrm\" (UID: \"467b78de-39ce-4f70-950d-7fcc3bd70e06\") " pod="openstack/root-account-create-update-nrkrm" Jan 21 07:17:20 crc kubenswrapper[4893]: I0121 07:17:20.732497 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/467b78de-39ce-4f70-950d-7fcc3bd70e06-operator-scripts\") pod \"root-account-create-update-nrkrm\" (UID: \"467b78de-39ce-4f70-950d-7fcc3bd70e06\") " pod="openstack/root-account-create-update-nrkrm" Jan 21 07:17:20 crc kubenswrapper[4893]: I0121 07:17:20.749561 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wshhp\" (UniqueName: \"kubernetes.io/projected/467b78de-39ce-4f70-950d-7fcc3bd70e06-kube-api-access-wshhp\") pod \"root-account-create-update-nrkrm\" (UID: \"467b78de-39ce-4f70-950d-7fcc3bd70e06\") " pod="openstack/root-account-create-update-nrkrm" Jan 21 07:17:20 crc kubenswrapper[4893]: I0121 07:17:20.912316 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-nrkrm" Jan 21 07:17:21 crc kubenswrapper[4893]: I0121 07:17:21.365709 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-nrkrm"] Jan 21 07:17:21 crc kubenswrapper[4893]: W0121 07:17:21.370695 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod467b78de_39ce_4f70_950d_7fcc3bd70e06.slice/crio-a2ac12def4292279d0ec6c11036dd463ef61418a107f72d678c580bf27f69347 WatchSource:0}: Error finding container a2ac12def4292279d0ec6c11036dd463ef61418a107f72d678c580bf27f69347: Status 404 returned error can't find the container with id a2ac12def4292279d0ec6c11036dd463ef61418a107f72d678c580bf27f69347 Jan 21 07:17:21 crc kubenswrapper[4893]: I0121 07:17:21.568903 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-nrkrm" event={"ID":"467b78de-39ce-4f70-950d-7fcc3bd70e06","Type":"ContainerStarted","Data":"a2ac12def4292279d0ec6c11036dd463ef61418a107f72d678c580bf27f69347"} Jan 21 07:17:21 crc kubenswrapper[4893]: I0121 07:17:21.594234 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a744009-f5d7-4d59-bb5b-1668a715e9d0" path="/var/lib/kubelet/pods/8a744009-f5d7-4d59-bb5b-1668a715e9d0/volumes" Jan 21 07:17:21 crc kubenswrapper[4893]: I0121 07:17:21.595952 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aad4df2c-372c-4aa7-9676-9e3cb7bff64d" path="/var/lib/kubelet/pods/aad4df2c-372c-4aa7-9676-9e3cb7bff64d/volumes" Jan 21 07:17:22 crc kubenswrapper[4893]: I0121 07:17:22.097440 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-24ht6" Jan 21 07:17:22 crc kubenswrapper[4893]: I0121 07:17:22.179494 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ede1b41d-0fd3-4c19-ba5d-bcfee1482f94-operator-scripts\") pod \"ede1b41d-0fd3-4c19-ba5d-bcfee1482f94\" (UID: \"ede1b41d-0fd3-4c19-ba5d-bcfee1482f94\") " Jan 21 07:17:22 crc kubenswrapper[4893]: I0121 07:17:22.179778 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpkl5\" (UniqueName: \"kubernetes.io/projected/ede1b41d-0fd3-4c19-ba5d-bcfee1482f94-kube-api-access-qpkl5\") pod \"ede1b41d-0fd3-4c19-ba5d-bcfee1482f94\" (UID: \"ede1b41d-0fd3-4c19-ba5d-bcfee1482f94\") " Jan 21 07:17:22 crc kubenswrapper[4893]: I0121 07:17:22.180512 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ede1b41d-0fd3-4c19-ba5d-bcfee1482f94-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ede1b41d-0fd3-4c19-ba5d-bcfee1482f94" (UID: "ede1b41d-0fd3-4c19-ba5d-bcfee1482f94"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:17:22 crc kubenswrapper[4893]: I0121 07:17:22.186426 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d3cd-account-create-update-xzcsc" Jan 21 07:17:22 crc kubenswrapper[4893]: I0121 07:17:22.188306 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ede1b41d-0fd3-4c19-ba5d-bcfee1482f94-kube-api-access-qpkl5" (OuterVolumeSpecName: "kube-api-access-qpkl5") pod "ede1b41d-0fd3-4c19-ba5d-bcfee1482f94" (UID: "ede1b41d-0fd3-4c19-ba5d-bcfee1482f94"). 
InnerVolumeSpecName "kube-api-access-qpkl5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:17:22 crc kubenswrapper[4893]: I0121 07:17:22.192925 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-eb0d-account-create-update-twv2l" Jan 21 07:17:22 crc kubenswrapper[4893]: I0121 07:17:22.232194 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-md8wl" Jan 21 07:17:22 crc kubenswrapper[4893]: I0121 07:17:22.280895 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17944241-8ed8-4c71-a537-969b68cd694c-operator-scripts\") pod \"17944241-8ed8-4c71-a537-969b68cd694c\" (UID: \"17944241-8ed8-4c71-a537-969b68cd694c\") " Jan 21 07:17:22 crc kubenswrapper[4893]: I0121 07:17:22.280967 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zljfh\" (UniqueName: \"kubernetes.io/projected/17944241-8ed8-4c71-a537-969b68cd694c-kube-api-access-zljfh\") pod \"17944241-8ed8-4c71-a537-969b68cd694c\" (UID: \"17944241-8ed8-4c71-a537-969b68cd694c\") " Jan 21 07:17:22 crc kubenswrapper[4893]: I0121 07:17:22.281043 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5sqlt\" (UniqueName: \"kubernetes.io/projected/ebb91341-a678-4bd6-96a9-8bad10274b2c-kube-api-access-5sqlt\") pod \"ebb91341-a678-4bd6-96a9-8bad10274b2c\" (UID: \"ebb91341-a678-4bd6-96a9-8bad10274b2c\") " Jan 21 07:17:22 crc kubenswrapper[4893]: I0121 07:17:22.281654 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17944241-8ed8-4c71-a537-969b68cd694c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "17944241-8ed8-4c71-a537-969b68cd694c" (UID: "17944241-8ed8-4c71-a537-969b68cd694c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:17:22 crc kubenswrapper[4893]: I0121 07:17:22.282054 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ef43a6a-4efc-4bcd-820b-1eade6c9b094-operator-scripts\") pod \"1ef43a6a-4efc-4bcd-820b-1eade6c9b094\" (UID: \"1ef43a6a-4efc-4bcd-820b-1eade6c9b094\") " Jan 21 07:17:22 crc kubenswrapper[4893]: I0121 07:17:22.282085 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-475tc\" (UniqueName: \"kubernetes.io/projected/1ef43a6a-4efc-4bcd-820b-1eade6c9b094-kube-api-access-475tc\") pod \"1ef43a6a-4efc-4bcd-820b-1eade6c9b094\" (UID: \"1ef43a6a-4efc-4bcd-820b-1eade6c9b094\") " Jan 21 07:17:22 crc kubenswrapper[4893]: I0121 07:17:22.282123 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ebb91341-a678-4bd6-96a9-8bad10274b2c-operator-scripts\") pod \"ebb91341-a678-4bd6-96a9-8bad10274b2c\" (UID: \"ebb91341-a678-4bd6-96a9-8bad10274b2c\") " Jan 21 07:17:22 crc kubenswrapper[4893]: I0121 07:17:22.282735 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpkl5\" (UniqueName: \"kubernetes.io/projected/ede1b41d-0fd3-4c19-ba5d-bcfee1482f94-kube-api-access-qpkl5\") on node \"crc\" DevicePath \"\"" Jan 21 07:17:22 crc kubenswrapper[4893]: I0121 07:17:22.282751 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ede1b41d-0fd3-4c19-ba5d-bcfee1482f94-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:17:22 crc kubenswrapper[4893]: I0121 07:17:22.282761 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17944241-8ed8-4c71-a537-969b68cd694c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:17:22 crc kubenswrapper[4893]: I0121 07:17:22.283389 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebb91341-a678-4bd6-96a9-8bad10274b2c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ebb91341-a678-4bd6-96a9-8bad10274b2c" (UID: "ebb91341-a678-4bd6-96a9-8bad10274b2c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:17:22 crc kubenswrapper[4893]: I0121 07:17:22.283471 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ef43a6a-4efc-4bcd-820b-1eade6c9b094-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1ef43a6a-4efc-4bcd-820b-1eade6c9b094" (UID: "1ef43a6a-4efc-4bcd-820b-1eade6c9b094"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:17:22 crc kubenswrapper[4893]: I0121 07:17:22.285500 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17944241-8ed8-4c71-a537-969b68cd694c-kube-api-access-zljfh" (OuterVolumeSpecName: "kube-api-access-zljfh") pod "17944241-8ed8-4c71-a537-969b68cd694c" (UID: "17944241-8ed8-4c71-a537-969b68cd694c"). InnerVolumeSpecName "kube-api-access-zljfh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:17:22 crc kubenswrapper[4893]: I0121 07:17:22.286135 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ef43a6a-4efc-4bcd-820b-1eade6c9b094-kube-api-access-475tc" (OuterVolumeSpecName: "kube-api-access-475tc") pod "1ef43a6a-4efc-4bcd-820b-1eade6c9b094" (UID: "1ef43a6a-4efc-4bcd-820b-1eade6c9b094"). InnerVolumeSpecName "kube-api-access-475tc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:17:22 crc kubenswrapper[4893]: I0121 07:17:22.286212 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebb91341-a678-4bd6-96a9-8bad10274b2c-kube-api-access-5sqlt" (OuterVolumeSpecName: "kube-api-access-5sqlt") pod "ebb91341-a678-4bd6-96a9-8bad10274b2c" (UID: "ebb91341-a678-4bd6-96a9-8bad10274b2c"). InnerVolumeSpecName "kube-api-access-5sqlt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:17:22 crc kubenswrapper[4893]: I0121 07:17:22.384515 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ef43a6a-4efc-4bcd-820b-1eade6c9b094-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:17:22 crc kubenswrapper[4893]: I0121 07:17:22.384829 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-475tc\" (UniqueName: \"kubernetes.io/projected/1ef43a6a-4efc-4bcd-820b-1eade6c9b094-kube-api-access-475tc\") on node \"crc\" DevicePath \"\"" Jan 21 07:17:22 crc kubenswrapper[4893]: I0121 07:17:22.384842 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ebb91341-a678-4bd6-96a9-8bad10274b2c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:17:22 crc kubenswrapper[4893]: I0121 07:17:22.384851 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zljfh\" (UniqueName: \"kubernetes.io/projected/17944241-8ed8-4c71-a537-969b68cd694c-kube-api-access-zljfh\") on node \"crc\" DevicePath \"\"" Jan 21 07:17:22 crc kubenswrapper[4893]: I0121 07:17:22.384861 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5sqlt\" (UniqueName: \"kubernetes.io/projected/ebb91341-a678-4bd6-96a9-8bad10274b2c-kube-api-access-5sqlt\") on node \"crc\" DevicePath \"\"" Jan 21 07:17:22 crc kubenswrapper[4893]: I0121 07:17:22.580762 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-md8wl" event={"ID":"ebb91341-a678-4bd6-96a9-8bad10274b2c","Type":"ContainerDied","Data":"87898e40b401a3596e66b7f9027d67036723351bb6708d360af08852c8ce96db"} Jan 21 07:17:22 crc kubenswrapper[4893]: I0121 07:17:22.580789 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-md8wl" Jan 21 07:17:22 crc kubenswrapper[4893]: I0121 07:17:22.580798 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87898e40b401a3596e66b7f9027d67036723351bb6708d360af08852c8ce96db" Jan 21 07:17:22 crc kubenswrapper[4893]: I0121 07:17:22.585263 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d3cd-account-create-update-xzcsc" event={"ID":"17944241-8ed8-4c71-a537-969b68cd694c","Type":"ContainerDied","Data":"8d024d486b12f84f40731a32746fb5d88f2d6b2ad20582b33d7a751950494ba5"} Jan 21 07:17:22 crc kubenswrapper[4893]: I0121 07:17:22.585317 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d024d486b12f84f40731a32746fb5d88f2d6b2ad20582b33d7a751950494ba5" Jan 21 07:17:22 crc kubenswrapper[4893]: I0121 07:17:22.585274 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d3cd-account-create-update-xzcsc" Jan 21 07:17:22 crc kubenswrapper[4893]: I0121 07:17:22.588270 4893 generic.go:334] "Generic (PLEG): container finished" podID="467b78de-39ce-4f70-950d-7fcc3bd70e06" containerID="5128b041afc1911bb176e952146e5a9ed2f71e3f76e31ea29d8460a37e12eae5" exitCode=0 Jan 21 07:17:22 crc kubenswrapper[4893]: I0121 07:17:22.588354 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-nrkrm" event={"ID":"467b78de-39ce-4f70-950d-7fcc3bd70e06","Type":"ContainerDied","Data":"5128b041afc1911bb176e952146e5a9ed2f71e3f76e31ea29d8460a37e12eae5"} Jan 21 07:17:22 crc kubenswrapper[4893]: I0121 07:17:22.591507 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-24ht6" event={"ID":"ede1b41d-0fd3-4c19-ba5d-bcfee1482f94","Type":"ContainerDied","Data":"d878c133ae5b920998a14f6aae009b58270b476095179fc1a01b371cb4ab3ebb"} Jan 21 07:17:22 crc kubenswrapper[4893]: I0121 07:17:22.591531 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d878c133ae5b920998a14f6aae009b58270b476095179fc1a01b371cb4ab3ebb" Jan 21 07:17:22 crc kubenswrapper[4893]: I0121 07:17:22.591533 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-24ht6" Jan 21 07:17:22 crc kubenswrapper[4893]: I0121 07:17:22.594060 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-eb0d-account-create-update-twv2l" event={"ID":"1ef43a6a-4efc-4bcd-820b-1eade6c9b094","Type":"ContainerDied","Data":"1eb304f9a488cb2895fb15f5a0f375ed32b7a0bdfb171c721e05d162d7626e64"} Jan 21 07:17:22 crc kubenswrapper[4893]: I0121 07:17:22.594105 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1eb304f9a488cb2895fb15f5a0f375ed32b7a0bdfb171c721e05d162d7626e64" Jan 21 07:17:22 crc kubenswrapper[4893]: I0121 07:17:22.594202 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-eb0d-account-create-update-twv2l" Jan 21 07:17:22 crc kubenswrapper[4893]: I0121 07:17:22.689234 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b-etc-swift\") pod \"swift-storage-0\" (UID: \"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b\") " pod="openstack/swift-storage-0" Jan 21 07:17:22 crc kubenswrapper[4893]: E0121 07:17:22.689443 4893 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 07:17:22 crc kubenswrapper[4893]: E0121 07:17:22.689460 4893 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 21 07:17:22 crc kubenswrapper[4893]: E0121 07:17:22.689518 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b-etc-swift podName:1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b nodeName:}" failed. No retries permitted until 2026-01-21 07:17:38.689500176 +0000 UTC m=+1399.919846078 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b-etc-swift") pod "swift-storage-0" (UID: "1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b") : configmap "swift-ring-files" not found Jan 21 07:17:23 crc kubenswrapper[4893]: I0121 07:17:23.319135 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-d2ca-account-create-update-2f7t4"] Jan 21 07:17:23 crc kubenswrapper[4893]: E0121 07:17:23.319693 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ede1b41d-0fd3-4c19-ba5d-bcfee1482f94" containerName="mariadb-database-create" Jan 21 07:17:23 crc kubenswrapper[4893]: I0121 07:17:23.319719 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="ede1b41d-0fd3-4c19-ba5d-bcfee1482f94" containerName="mariadb-database-create" Jan 21 07:17:23 crc kubenswrapper[4893]: E0121 07:17:23.319739 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebb91341-a678-4bd6-96a9-8bad10274b2c" containerName="mariadb-database-create" Jan 21 07:17:23 crc kubenswrapper[4893]: I0121 07:17:23.319749 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebb91341-a678-4bd6-96a9-8bad10274b2c" containerName="mariadb-database-create" Jan 21 07:17:23 crc kubenswrapper[4893]: E0121 07:17:23.319780 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ef43a6a-4efc-4bcd-820b-1eade6c9b094" containerName="mariadb-account-create-update" Jan 21 07:17:23 crc kubenswrapper[4893]: I0121 07:17:23.319793 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ef43a6a-4efc-4bcd-820b-1eade6c9b094" containerName="mariadb-account-create-update" Jan 21 07:17:23 crc kubenswrapper[4893]: E0121 07:17:23.319813 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17944241-8ed8-4c71-a537-969b68cd694c" containerName="mariadb-account-create-update" Jan 21 07:17:23 crc kubenswrapper[4893]: I0121 07:17:23.319823 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="17944241-8ed8-4c71-a537-969b68cd694c" containerName="mariadb-account-create-update" Jan 21 07:17:23 crc kubenswrapper[4893]: I0121 07:17:23.320069 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebb91341-a678-4bd6-96a9-8bad10274b2c" containerName="mariadb-database-create" Jan 21 07:17:23 crc kubenswrapper[4893]: 
I0121 07:17:23.320102 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="17944241-8ed8-4c71-a537-969b68cd694c" containerName="mariadb-account-create-update" Jan 21 07:17:23 crc kubenswrapper[4893]: I0121 07:17:23.320126 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ef43a6a-4efc-4bcd-820b-1eade6c9b094" containerName="mariadb-account-create-update" Jan 21 07:17:23 crc kubenswrapper[4893]: I0121 07:17:23.320142 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="ede1b41d-0fd3-4c19-ba5d-bcfee1482f94" containerName="mariadb-database-create" Jan 21 07:17:23 crc kubenswrapper[4893]: I0121 07:17:23.321027 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d2ca-account-create-update-2f7t4" Jan 21 07:17:23 crc kubenswrapper[4893]: I0121 07:17:23.323232 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 21 07:17:23 crc kubenswrapper[4893]: I0121 07:17:23.328891 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-lkn5t"] Jan 21 07:17:23 crc kubenswrapper[4893]: I0121 07:17:23.330507 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-lkn5t" Jan 21 07:17:23 crc kubenswrapper[4893]: I0121 07:17:23.341704 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-d2ca-account-create-update-2f7t4"] Jan 21 07:17:23 crc kubenswrapper[4893]: I0121 07:17:23.364579 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-lkn5t"] Jan 21 07:17:23 crc kubenswrapper[4893]: I0121 07:17:23.401029 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94d0c87f-f0de-4f54-bae1-7af24c5c7f38-operator-scripts\") pod \"keystone-db-create-lkn5t\" (UID: \"94d0c87f-f0de-4f54-bae1-7af24c5c7f38\") " pod="openstack/keystone-db-create-lkn5t" Jan 21 07:17:23 crc kubenswrapper[4893]: I0121 07:17:23.401365 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j56gl\" (UniqueName: \"kubernetes.io/projected/94d0c87f-f0de-4f54-bae1-7af24c5c7f38-kube-api-access-j56gl\") pod \"keystone-db-create-lkn5t\" (UID: \"94d0c87f-f0de-4f54-bae1-7af24c5c7f38\") " pod="openstack/keystone-db-create-lkn5t" Jan 21 07:17:23 crc kubenswrapper[4893]: I0121 07:17:23.401412 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4b22b42-2fce-4972-a651-1b49ef7b008c-operator-scripts\") pod \"keystone-d2ca-account-create-update-2f7t4\" (UID: \"b4b22b42-2fce-4972-a651-1b49ef7b008c\") " pod="openstack/keystone-d2ca-account-create-update-2f7t4" Jan 21 07:17:23 crc kubenswrapper[4893]: I0121 07:17:23.401452 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brtck\" (UniqueName: \"kubernetes.io/projected/b4b22b42-2fce-4972-a651-1b49ef7b008c-kube-api-access-brtck\") pod \"keystone-d2ca-account-create-update-2f7t4\" (UID: \"b4b22b42-2fce-4972-a651-1b49ef7b008c\") " pod="openstack/keystone-d2ca-account-create-update-2f7t4" Jan 21 07:17:23 crc kubenswrapper[4893]: I0121 07:17:23.503010 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/94d0c87f-f0de-4f54-bae1-7af24c5c7f38-operator-scripts\") pod \"keystone-db-create-lkn5t\" (UID: \"94d0c87f-f0de-4f54-bae1-7af24c5c7f38\") " pod="openstack/keystone-db-create-lkn5t" Jan 21 07:17:23 crc kubenswrapper[4893]: I0121 07:17:23.503074 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j56gl\" (UniqueName: \"kubernetes.io/projected/94d0c87f-f0de-4f54-bae1-7af24c5c7f38-kube-api-access-j56gl\") pod \"keystone-db-create-lkn5t\" (UID: \"94d0c87f-f0de-4f54-bae1-7af24c5c7f38\") " pod="openstack/keystone-db-create-lkn5t" Jan 21 07:17:23 crc kubenswrapper[4893]: I0121 07:17:23.503112 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4b22b42-2fce-4972-a651-1b49ef7b008c-operator-scripts\") pod \"keystone-d2ca-account-create-update-2f7t4\" (UID: \"b4b22b42-2fce-4972-a651-1b49ef7b008c\") " pod="openstack/keystone-d2ca-account-create-update-2f7t4" Jan 21 07:17:23 crc kubenswrapper[4893]: I0121 07:17:23.503149 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brtck\" (UniqueName: \"kubernetes.io/projected/b4b22b42-2fce-4972-a651-1b49ef7b008c-kube-api-access-brtck\") pod \"keystone-d2ca-account-create-update-2f7t4\" (UID: \"b4b22b42-2fce-4972-a651-1b49ef7b008c\") " pod="openstack/keystone-d2ca-account-create-update-2f7t4" Jan 21 07:17:23 crc kubenswrapper[4893]: I0121 07:17:23.503910 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94d0c87f-f0de-4f54-bae1-7af24c5c7f38-operator-scripts\") pod \"keystone-db-create-lkn5t\" (UID: \"94d0c87f-f0de-4f54-bae1-7af24c5c7f38\") " pod="openstack/keystone-db-create-lkn5t" Jan 21 07:17:23 crc kubenswrapper[4893]: I0121 07:17:23.504137 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4b22b42-2fce-4972-a651-1b49ef7b008c-operator-scripts\") pod \"keystone-d2ca-account-create-update-2f7t4\" (UID: \"b4b22b42-2fce-4972-a651-1b49ef7b008c\") " pod="openstack/keystone-d2ca-account-create-update-2f7t4" Jan 21 07:17:23 crc kubenswrapper[4893]: I0121 07:17:23.520145 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j56gl\" (UniqueName: \"kubernetes.io/projected/94d0c87f-f0de-4f54-bae1-7af24c5c7f38-kube-api-access-j56gl\") pod \"keystone-db-create-lkn5t\" (UID: \"94d0c87f-f0de-4f54-bae1-7af24c5c7f38\") " pod="openstack/keystone-db-create-lkn5t" Jan 21 07:17:23 crc kubenswrapper[4893]: I0121 07:17:23.525325 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brtck\" (UniqueName: \"kubernetes.io/projected/b4b22b42-2fce-4972-a651-1b49ef7b008c-kube-api-access-brtck\") pod \"keystone-d2ca-account-create-update-2f7t4\" (UID: \"b4b22b42-2fce-4972-a651-1b49ef7b008c\") " pod="openstack/keystone-d2ca-account-create-update-2f7t4" Jan 21 07:17:23 crc kubenswrapper[4893]: I0121 07:17:23.647502 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d2ca-account-create-update-2f7t4" Jan 21 07:17:23 crc kubenswrapper[4893]: I0121 07:17:23.660464 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-lkn5t" Jan 21 07:17:23 crc kubenswrapper[4893]: I0121 07:17:23.931137 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-nrkrm" Jan 21 07:17:24 crc kubenswrapper[4893]: I0121 07:17:24.011947 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/467b78de-39ce-4f70-950d-7fcc3bd70e06-operator-scripts\") pod \"467b78de-39ce-4f70-950d-7fcc3bd70e06\" (UID: \"467b78de-39ce-4f70-950d-7fcc3bd70e06\") " Jan 21 07:17:24 crc kubenswrapper[4893]: I0121 07:17:24.012888 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/467b78de-39ce-4f70-950d-7fcc3bd70e06-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "467b78de-39ce-4f70-950d-7fcc3bd70e06" (UID: "467b78de-39ce-4f70-950d-7fcc3bd70e06"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:17:24 crc kubenswrapper[4893]: I0121 07:17:24.013226 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wshhp\" (UniqueName: \"kubernetes.io/projected/467b78de-39ce-4f70-950d-7fcc3bd70e06-kube-api-access-wshhp\") pod \"467b78de-39ce-4f70-950d-7fcc3bd70e06\" (UID: \"467b78de-39ce-4f70-950d-7fcc3bd70e06\") " Jan 21 07:17:24 crc kubenswrapper[4893]: I0121 07:17:24.014105 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/467b78de-39ce-4f70-950d-7fcc3bd70e06-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:17:24 crc kubenswrapper[4893]: I0121 07:17:24.020650 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/467b78de-39ce-4f70-950d-7fcc3bd70e06-kube-api-access-wshhp" (OuterVolumeSpecName: "kube-api-access-wshhp") pod "467b78de-39ce-4f70-950d-7fcc3bd70e06" (UID: "467b78de-39ce-4f70-950d-7fcc3bd70e06"). InnerVolumeSpecName "kube-api-access-wshhp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:17:24 crc kubenswrapper[4893]: I0121 07:17:24.037911 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-bwx97"] Jan 21 07:17:24 crc kubenswrapper[4893]: E0121 07:17:24.038328 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="467b78de-39ce-4f70-950d-7fcc3bd70e06" containerName="mariadb-account-create-update" Jan 21 07:17:24 crc kubenswrapper[4893]: I0121 07:17:24.038349 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="467b78de-39ce-4f70-950d-7fcc3bd70e06" containerName="mariadb-account-create-update" Jan 21 07:17:24 crc kubenswrapper[4893]: I0121 07:17:24.038537 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="467b78de-39ce-4f70-950d-7fcc3bd70e06" containerName="mariadb-account-create-update" Jan 21 07:17:24 crc kubenswrapper[4893]: I0121 07:17:24.039718 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-bwx97" Jan 21 07:17:24 crc kubenswrapper[4893]: I0121 07:17:24.042471 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-v2k8v" Jan 21 07:17:24 crc kubenswrapper[4893]: I0121 07:17:24.044948 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 21 07:17:24 crc kubenswrapper[4893]: I0121 07:17:24.067987 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-bwx97"] Jan 21 07:17:24 crc kubenswrapper[4893]: I0121 07:17:24.115619 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fff7b3eb-e8c3-4d58-932b-3738b1e8dffa-combined-ca-bundle\") pod \"glance-db-sync-bwx97\" (UID: \"fff7b3eb-e8c3-4d58-932b-3738b1e8dffa\") " pod="openstack/glance-db-sync-bwx97" Jan 21 07:17:24 crc kubenswrapper[4893]: I0121 07:17:24.116037 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fff7b3eb-e8c3-4d58-932b-3738b1e8dffa-config-data\") pod \"glance-db-sync-bwx97\" (UID: \"fff7b3eb-e8c3-4d58-932b-3738b1e8dffa\") " pod="openstack/glance-db-sync-bwx97" Jan 21 07:17:24 crc kubenswrapper[4893]: I0121 07:17:24.116099 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fff7b3eb-e8c3-4d58-932b-3738b1e8dffa-db-sync-config-data\") pod \"glance-db-sync-bwx97\" (UID: \"fff7b3eb-e8c3-4d58-932b-3738b1e8dffa\") " pod="openstack/glance-db-sync-bwx97" Jan 21 07:17:24 crc kubenswrapper[4893]: I0121 07:17:24.116159 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ct25\" (UniqueName: \"kubernetes.io/projected/fff7b3eb-e8c3-4d58-932b-3738b1e8dffa-kube-api-access-8ct25\") pod \"glance-db-sync-bwx97\" (UID: \"fff7b3eb-e8c3-4d58-932b-3738b1e8dffa\") " pod="openstack/glance-db-sync-bwx97" Jan 21 07:17:24 crc kubenswrapper[4893]: I0121 07:17:24.116222 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wshhp\" (UniqueName: \"kubernetes.io/projected/467b78de-39ce-4f70-950d-7fcc3bd70e06-kube-api-access-wshhp\") on node \"crc\" DevicePath \"\"" Jan 21 07:17:24 crc kubenswrapper[4893]: I0121 07:17:24.164163 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-lkn5t"] Jan 21 07:17:24 crc kubenswrapper[4893]: W0121 07:17:24.165526 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94d0c87f_f0de_4f54_bae1_7af24c5c7f38.slice/crio-760e354a634a3a1e46aa9f1541732f083e8cdfc77d42099994f982a8cecb0fa8 WatchSource:0}: Error finding container 760e354a634a3a1e46aa9f1541732f083e8cdfc77d42099994f982a8cecb0fa8: Status 404 returned error can't find the container with id 760e354a634a3a1e46aa9f1541732f083e8cdfc77d42099994f982a8cecb0fa8 Jan 21 07:17:24 crc kubenswrapper[4893]: I0121 07:17:24.218203 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fff7b3eb-e8c3-4d58-932b-3738b1e8dffa-db-sync-config-data\") pod \"glance-db-sync-bwx97\" (UID: \"fff7b3eb-e8c3-4d58-932b-3738b1e8dffa\") " pod="openstack/glance-db-sync-bwx97" Jan 21 07:17:24 crc kubenswrapper[4893]: I0121 
07:17:24.218311 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ct25\" (UniqueName: \"kubernetes.io/projected/fff7b3eb-e8c3-4d58-932b-3738b1e8dffa-kube-api-access-8ct25\") pod \"glance-db-sync-bwx97\" (UID: \"fff7b3eb-e8c3-4d58-932b-3738b1e8dffa\") " pod="openstack/glance-db-sync-bwx97" Jan 21 07:17:24 crc kubenswrapper[4893]: I0121 07:17:24.218414 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fff7b3eb-e8c3-4d58-932b-3738b1e8dffa-combined-ca-bundle\") pod \"glance-db-sync-bwx97\" (UID: \"fff7b3eb-e8c3-4d58-932b-3738b1e8dffa\") " pod="openstack/glance-db-sync-bwx97" Jan 21 07:17:24 crc kubenswrapper[4893]: I0121 07:17:24.218483 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fff7b3eb-e8c3-4d58-932b-3738b1e8dffa-config-data\") pod \"glance-db-sync-bwx97\" (UID: \"fff7b3eb-e8c3-4d58-932b-3738b1e8dffa\") " pod="openstack/glance-db-sync-bwx97" Jan 21 07:17:24 crc kubenswrapper[4893]: I0121 07:17:24.223335 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fff7b3eb-e8c3-4d58-932b-3738b1e8dffa-combined-ca-bundle\") pod \"glance-db-sync-bwx97\" (UID: \"fff7b3eb-e8c3-4d58-932b-3738b1e8dffa\") " pod="openstack/glance-db-sync-bwx97" Jan 21 07:17:24 crc kubenswrapper[4893]: I0121 07:17:24.223868 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fff7b3eb-e8c3-4d58-932b-3738b1e8dffa-db-sync-config-data\") pod \"glance-db-sync-bwx97\" (UID: \"fff7b3eb-e8c3-4d58-932b-3738b1e8dffa\") " pod="openstack/glance-db-sync-bwx97" Jan 21 07:17:24 crc kubenswrapper[4893]: I0121 07:17:24.235689 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fff7b3eb-e8c3-4d58-932b-3738b1e8dffa-config-data\") pod \"glance-db-sync-bwx97\" (UID: \"fff7b3eb-e8c3-4d58-932b-3738b1e8dffa\") " pod="openstack/glance-db-sync-bwx97" Jan 21 07:17:24 crc kubenswrapper[4893]: I0121 07:17:24.239123 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ct25\" (UniqueName: \"kubernetes.io/projected/fff7b3eb-e8c3-4d58-932b-3738b1e8dffa-kube-api-access-8ct25\") pod \"glance-db-sync-bwx97\" (UID: \"fff7b3eb-e8c3-4d58-932b-3738b1e8dffa\") " pod="openstack/glance-db-sync-bwx97" Jan 21 07:17:24 crc kubenswrapper[4893]: I0121 07:17:24.340104 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-d2ca-account-create-update-2f7t4"] Jan 21 07:17:24 crc kubenswrapper[4893]: I0121 07:17:24.399113 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-bwx97" Jan 21 07:17:24 crc kubenswrapper[4893]: I0121 07:17:24.658783 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-nrkrm" event={"ID":"467b78de-39ce-4f70-950d-7fcc3bd70e06","Type":"ContainerDied","Data":"a2ac12def4292279d0ec6c11036dd463ef61418a107f72d678c580bf27f69347"} Jan 21 07:17:24 crc kubenswrapper[4893]: I0121 07:17:24.659149 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2ac12def4292279d0ec6c11036dd463ef61418a107f72d678c580bf27f69347" Jan 21 07:17:24 crc kubenswrapper[4893]: I0121 07:17:24.658813 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-nrkrm" Jan 21 07:17:24 crc kubenswrapper[4893]: I0121 07:17:24.660841 4893 generic.go:334] "Generic (PLEG): container finished" podID="94d0c87f-f0de-4f54-bae1-7af24c5c7f38" containerID="9f91de278662270db29f59cfbb23346d22c9e3004cfc529d77b9771ee747b909" exitCode=0 Jan 21 07:17:24 crc kubenswrapper[4893]: I0121 07:17:24.660903 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-lkn5t" event={"ID":"94d0c87f-f0de-4f54-bae1-7af24c5c7f38","Type":"ContainerDied","Data":"9f91de278662270db29f59cfbb23346d22c9e3004cfc529d77b9771ee747b909"} Jan 21 07:17:24 crc kubenswrapper[4893]: I0121 07:17:24.660961 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-lkn5t" event={"ID":"94d0c87f-f0de-4f54-bae1-7af24c5c7f38","Type":"ContainerStarted","Data":"760e354a634a3a1e46aa9f1541732f083e8cdfc77d42099994f982a8cecb0fa8"} Jan 21 07:17:24 crc kubenswrapper[4893]: I0121 07:17:24.665587 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d2ca-account-create-update-2f7t4" event={"ID":"b4b22b42-2fce-4972-a651-1b49ef7b008c","Type":"ContainerStarted","Data":"9ab02ce2de50211d9da3ffb737894cbb3f83a2978c45af1362cdfc15e9a83fe9"} Jan 21 07:17:24 crc kubenswrapper[4893]: I0121 07:17:24.665641 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d2ca-account-create-update-2f7t4" event={"ID":"b4b22b42-2fce-4972-a651-1b49ef7b008c","Type":"ContainerStarted","Data":"771e218c0d14a481fbc6dea53b5531ff52b5a4e22d1f9f6eb9bb85a8790df047"} Jan 21 07:17:24 crc kubenswrapper[4893]: I0121 07:17:24.899488 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-d2ca-account-create-update-2f7t4" podStartSLOduration=1.8994720950000001 podStartE2EDuration="1.899472095s" podCreationTimestamp="2026-01-21 07:17:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:17:24.702355486 +0000 UTC m=+1385.932701398" watchObservedRunningTime="2026-01-21 07:17:24.899472095 +0000 UTC m=+1386.129817997" Jan 21 07:17:24 crc kubenswrapper[4893]: I0121 07:17:24.907867 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-bwx97"] Jan 21 07:17:24 crc kubenswrapper[4893]: W0121 07:17:24.909016 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfff7b3eb_e8c3_4d58_932b_3738b1e8dffa.slice/crio-a2ac94139ecece615d391f1f9904f05b71f88cd1b6b44013e9ee7c6f8c3c0624 WatchSource:0}: Error finding container a2ac94139ecece615d391f1f9904f05b71f88cd1b6b44013e9ee7c6f8c3c0624: Status 404 returned error can't find the container with id 
a2ac94139ecece615d391f1f9904f05b71f88cd1b6b44013e9ee7c6f8c3c0624 Jan 21 07:17:25 crc kubenswrapper[4893]: I0121 07:17:25.415974 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 21 07:17:25 crc kubenswrapper[4893]: I0121 07:17:25.679850 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bwx97" event={"ID":"fff7b3eb-e8c3-4d58-932b-3738b1e8dffa","Type":"ContainerStarted","Data":"a2ac94139ecece615d391f1f9904f05b71f88cd1b6b44013e9ee7c6f8c3c0624"} Jan 21 07:17:25 crc kubenswrapper[4893]: I0121 07:17:25.681799 4893 generic.go:334] "Generic (PLEG): container finished" podID="b4b22b42-2fce-4972-a651-1b49ef7b008c" containerID="9ab02ce2de50211d9da3ffb737894cbb3f83a2978c45af1362cdfc15e9a83fe9" exitCode=0 Jan 21 07:17:25 crc kubenswrapper[4893]: I0121 07:17:25.681982 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d2ca-account-create-update-2f7t4" event={"ID":"b4b22b42-2fce-4972-a651-1b49ef7b008c","Type":"ContainerDied","Data":"9ab02ce2de50211d9da3ffb737894cbb3f83a2978c45af1362cdfc15e9a83fe9"} Jan 21 07:17:26 crc kubenswrapper[4893]: I0121 07:17:26.062588 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-lkn5t" Jan 21 07:17:26 crc kubenswrapper[4893]: I0121 07:17:26.257719 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94d0c87f-f0de-4f54-bae1-7af24c5c7f38-operator-scripts\") pod \"94d0c87f-f0de-4f54-bae1-7af24c5c7f38\" (UID: \"94d0c87f-f0de-4f54-bae1-7af24c5c7f38\") " Jan 21 07:17:26 crc kubenswrapper[4893]: I0121 07:17:26.257850 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j56gl\" (UniqueName: \"kubernetes.io/projected/94d0c87f-f0de-4f54-bae1-7af24c5c7f38-kube-api-access-j56gl\") pod \"94d0c87f-f0de-4f54-bae1-7af24c5c7f38\" (UID: \"94d0c87f-f0de-4f54-bae1-7af24c5c7f38\") " Jan 21 07:17:26 crc kubenswrapper[4893]: I0121 07:17:26.258365 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94d0c87f-f0de-4f54-bae1-7af24c5c7f38-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "94d0c87f-f0de-4f54-bae1-7af24c5c7f38" (UID: "94d0c87f-f0de-4f54-bae1-7af24c5c7f38"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:17:26 crc kubenswrapper[4893]: I0121 07:17:26.265970 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94d0c87f-f0de-4f54-bae1-7af24c5c7f38-kube-api-access-j56gl" (OuterVolumeSpecName: "kube-api-access-j56gl") pod "94d0c87f-f0de-4f54-bae1-7af24c5c7f38" (UID: "94d0c87f-f0de-4f54-bae1-7af24c5c7f38"). InnerVolumeSpecName "kube-api-access-j56gl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:17:26 crc kubenswrapper[4893]: I0121 07:17:26.360412 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94d0c87f-f0de-4f54-bae1-7af24c5c7f38-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:17:26 crc kubenswrapper[4893]: I0121 07:17:26.361032 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j56gl\" (UniqueName: \"kubernetes.io/projected/94d0c87f-f0de-4f54-bae1-7af24c5c7f38-kube-api-access-j56gl\") on node \"crc\" DevicePath \"\"" Jan 21 07:17:26 crc kubenswrapper[4893]: I0121 07:17:26.690873 4893 generic.go:334] "Generic (PLEG): container finished" podID="caef5d13-59d3-4ca2-b6c9-77b9616a91c8" containerID="6e632adf6d95b5e1192d3f90676fdfd289e0608633172e8fb3a756deba8c6a46" exitCode=0 Jan 21 07:17:26 crc kubenswrapper[4893]: I0121 07:17:26.690954 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-25ctr" event={"ID":"caef5d13-59d3-4ca2-b6c9-77b9616a91c8","Type":"ContainerDied","Data":"6e632adf6d95b5e1192d3f90676fdfd289e0608633172e8fb3a756deba8c6a46"} Jan 21 07:17:26 crc kubenswrapper[4893]: I0121 07:17:26.696112 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-lkn5t" Jan 21 07:17:26 crc kubenswrapper[4893]: I0121 07:17:26.696133 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-lkn5t" event={"ID":"94d0c87f-f0de-4f54-bae1-7af24c5c7f38","Type":"ContainerDied","Data":"760e354a634a3a1e46aa9f1541732f083e8cdfc77d42099994f982a8cecb0fa8"} Jan 21 07:17:26 crc kubenswrapper[4893]: I0121 07:17:26.696172 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="760e354a634a3a1e46aa9f1541732f083e8cdfc77d42099994f982a8cecb0fa8" Jan 21 07:17:26 crc kubenswrapper[4893]: I0121 07:17:26.841526 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-nrkrm"] Jan 21 07:17:26 crc kubenswrapper[4893]: I0121 07:17:26.854220 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-nrkrm"] Jan 21 07:17:27 crc kubenswrapper[4893]: I0121 07:17:27.018018 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d2ca-account-create-update-2f7t4" Jan 21 07:17:27 crc kubenswrapper[4893]: I0121 07:17:27.178066 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brtck\" (UniqueName: \"kubernetes.io/projected/b4b22b42-2fce-4972-a651-1b49ef7b008c-kube-api-access-brtck\") pod \"b4b22b42-2fce-4972-a651-1b49ef7b008c\" (UID: \"b4b22b42-2fce-4972-a651-1b49ef7b008c\") " Jan 21 07:17:27 crc kubenswrapper[4893]: I0121 07:17:27.178119 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4b22b42-2fce-4972-a651-1b49ef7b008c-operator-scripts\") pod \"b4b22b42-2fce-4972-a651-1b49ef7b008c\" (UID: \"b4b22b42-2fce-4972-a651-1b49ef7b008c\") " Jan 21 07:17:27 crc kubenswrapper[4893]: I0121 07:17:27.180156 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4b22b42-2fce-4972-a651-1b49ef7b008c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b4b22b42-2fce-4972-a651-1b49ef7b008c" (UID: "b4b22b42-2fce-4972-a651-1b49ef7b008c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:17:27 crc kubenswrapper[4893]: I0121 07:17:27.184895 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4b22b42-2fce-4972-a651-1b49ef7b008c-kube-api-access-brtck" (OuterVolumeSpecName: "kube-api-access-brtck") pod "b4b22b42-2fce-4972-a651-1b49ef7b008c" (UID: "b4b22b42-2fce-4972-a651-1b49ef7b008c"). InnerVolumeSpecName "kube-api-access-brtck". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:17:27 crc kubenswrapper[4893]: I0121 07:17:27.281947 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brtck\" (UniqueName: \"kubernetes.io/projected/b4b22b42-2fce-4972-a651-1b49ef7b008c-kube-api-access-brtck\") on node \"crc\" DevicePath \"\"" Jan 21 07:17:27 crc kubenswrapper[4893]: I0121 07:17:27.281986 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4b22b42-2fce-4972-a651-1b49ef7b008c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:17:27 crc kubenswrapper[4893]: I0121 07:17:27.597707 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="467b78de-39ce-4f70-950d-7fcc3bd70e06" path="/var/lib/kubelet/pods/467b78de-39ce-4f70-950d-7fcc3bd70e06/volumes" Jan 21 07:17:27 crc kubenswrapper[4893]: I0121 07:17:27.703923 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d2ca-account-create-update-2f7t4" Jan 21 07:17:27 crc kubenswrapper[4893]: I0121 07:17:27.704740 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d2ca-account-create-update-2f7t4" event={"ID":"b4b22b42-2fce-4972-a651-1b49ef7b008c","Type":"ContainerDied","Data":"771e218c0d14a481fbc6dea53b5531ff52b5a4e22d1f9f6eb9bb85a8790df047"} Jan 21 07:17:27 crc kubenswrapper[4893]: I0121 07:17:27.704790 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="771e218c0d14a481fbc6dea53b5531ff52b5a4e22d1f9f6eb9bb85a8790df047" Jan 21 07:17:28 crc kubenswrapper[4893]: I0121 07:17:28.096177 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-25ctr" Jan 21 07:17:28 crc kubenswrapper[4893]: I0121 07:17:28.196238 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/caef5d13-59d3-4ca2-b6c9-77b9616a91c8-etc-swift\") pod \"caef5d13-59d3-4ca2-b6c9-77b9616a91c8\" (UID: \"caef5d13-59d3-4ca2-b6c9-77b9616a91c8\") " Jan 21 07:17:28 crc kubenswrapper[4893]: I0121 07:17:28.196526 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caef5d13-59d3-4ca2-b6c9-77b9616a91c8-combined-ca-bundle\") pod \"caef5d13-59d3-4ca2-b6c9-77b9616a91c8\" (UID: \"caef5d13-59d3-4ca2-b6c9-77b9616a91c8\") " Jan 21 07:17:28 crc kubenswrapper[4893]: I0121 07:17:28.197269 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/caef5d13-59d3-4ca2-b6c9-77b9616a91c8-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "caef5d13-59d3-4ca2-b6c9-77b9616a91c8" (UID: "caef5d13-59d3-4ca2-b6c9-77b9616a91c8"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:17:28 crc kubenswrapper[4893]: I0121 07:17:28.198419 4893 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/caef5d13-59d3-4ca2-b6c9-77b9616a91c8-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 21 07:17:28 crc kubenswrapper[4893]: I0121 07:17:28.225984 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caef5d13-59d3-4ca2-b6c9-77b9616a91c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "caef5d13-59d3-4ca2-b6c9-77b9616a91c8" (UID: "caef5d13-59d3-4ca2-b6c9-77b9616a91c8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:17:28 crc kubenswrapper[4893]: I0121 07:17:28.299429 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/caef5d13-59d3-4ca2-b6c9-77b9616a91c8-dispersionconf\") pod \"caef5d13-59d3-4ca2-b6c9-77b9616a91c8\" (UID: \"caef5d13-59d3-4ca2-b6c9-77b9616a91c8\") " Jan 21 07:17:28 crc kubenswrapper[4893]: I0121 07:17:28.299508 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/caef5d13-59d3-4ca2-b6c9-77b9616a91c8-scripts\") pod \"caef5d13-59d3-4ca2-b6c9-77b9616a91c8\" (UID: \"caef5d13-59d3-4ca2-b6c9-77b9616a91c8\") " Jan 21 07:17:28 crc kubenswrapper[4893]: I0121 07:17:28.299568 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/caef5d13-59d3-4ca2-b6c9-77b9616a91c8-ring-data-devices\") pod \"caef5d13-59d3-4ca2-b6c9-77b9616a91c8\" (UID: \"caef5d13-59d3-4ca2-b6c9-77b9616a91c8\") " Jan 21 07:17:28 crc kubenswrapper[4893]: I0121 07:17:28.299665 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/caef5d13-59d3-4ca2-b6c9-77b9616a91c8-swiftconf\") pod \"caef5d13-59d3-4ca2-b6c9-77b9616a91c8\" (UID: \"caef5d13-59d3-4ca2-b6c9-77b9616a91c8\") " Jan 21 07:17:28 crc kubenswrapper[4893]: I0121 07:17:28.299736 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t525s\" (UniqueName: \"kubernetes.io/projected/caef5d13-59d3-4ca2-b6c9-77b9616a91c8-kube-api-access-t525s\") pod \"caef5d13-59d3-4ca2-b6c9-77b9616a91c8\" (UID: \"caef5d13-59d3-4ca2-b6c9-77b9616a91c8\") " Jan 21 07:17:28 crc kubenswrapper[4893]: I0121 07:17:28.300161 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caef5d13-59d3-4ca2-b6c9-77b9616a91c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:17:28 crc kubenswrapper[4893]: I0121 07:17:28.300316 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/caef5d13-59d3-4ca2-b6c9-77b9616a91c8-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "caef5d13-59d3-4ca2-b6c9-77b9616a91c8" (UID: "caef5d13-59d3-4ca2-b6c9-77b9616a91c8"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:17:28 crc kubenswrapper[4893]: I0121 07:17:28.307712 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/caef5d13-59d3-4ca2-b6c9-77b9616a91c8-kube-api-access-t525s" (OuterVolumeSpecName: "kube-api-access-t525s") pod "caef5d13-59d3-4ca2-b6c9-77b9616a91c8" (UID: "caef5d13-59d3-4ca2-b6c9-77b9616a91c8"). InnerVolumeSpecName "kube-api-access-t525s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:17:28 crc kubenswrapper[4893]: I0121 07:17:28.315479 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caef5d13-59d3-4ca2-b6c9-77b9616a91c8-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "caef5d13-59d3-4ca2-b6c9-77b9616a91c8" (UID: "caef5d13-59d3-4ca2-b6c9-77b9616a91c8"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:17:28 crc kubenswrapper[4893]: I0121 07:17:28.325798 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/caef5d13-59d3-4ca2-b6c9-77b9616a91c8-scripts" (OuterVolumeSpecName: "scripts") pod "caef5d13-59d3-4ca2-b6c9-77b9616a91c8" (UID: "caef5d13-59d3-4ca2-b6c9-77b9616a91c8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:17:28 crc kubenswrapper[4893]: I0121 07:17:28.335834 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caef5d13-59d3-4ca2-b6c9-77b9616a91c8-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "caef5d13-59d3-4ca2-b6c9-77b9616a91c8" (UID: "caef5d13-59d3-4ca2-b6c9-77b9616a91c8"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:17:28 crc kubenswrapper[4893]: I0121 07:17:28.470866 4893 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/caef5d13-59d3-4ca2-b6c9-77b9616a91c8-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 21 07:17:28 crc kubenswrapper[4893]: I0121 07:17:28.470922 4893 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/caef5d13-59d3-4ca2-b6c9-77b9616a91c8-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:17:28 crc kubenswrapper[4893]: I0121 07:17:28.470936 4893 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/caef5d13-59d3-4ca2-b6c9-77b9616a91c8-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 21 07:17:28 crc kubenswrapper[4893]: I0121 07:17:28.470950 4893 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/caef5d13-59d3-4ca2-b6c9-77b9616a91c8-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 21 07:17:28 crc kubenswrapper[4893]: I0121 07:17:28.470963 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t525s\" (UniqueName: \"kubernetes.io/projected/caef5d13-59d3-4ca2-b6c9-77b9616a91c8-kube-api-access-t525s\") on node \"crc\" DevicePath \"\"" Jan 21 07:17:28 crc kubenswrapper[4893]: I0121 07:17:28.656527 4893 patch_prober.go:28] interesting pod/machine-config-daemon-hg78p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 07:17:28 crc kubenswrapper[4893]: I0121 07:17:28.657192 4893 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 07:17:28 crc kubenswrapper[4893]: I0121 07:17:28.717295 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-25ctr" event={"ID":"caef5d13-59d3-4ca2-b6c9-77b9616a91c8","Type":"ContainerDied","Data":"2ebd37e33a5fa9148e066fe24ab6c7ed296b436aa52cbdb8354700e0fc1da7a2"} Jan 21 07:17:28 crc kubenswrapper[4893]: I0121 07:17:28.717484 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ebd37e33a5fa9148e066fe24ab6c7ed296b436aa52cbdb8354700e0fc1da7a2" Jan 21 07:17:28 crc kubenswrapper[4893]: I0121 07:17:28.719076 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-25ctr" Jan 21 07:17:29 crc kubenswrapper[4893]: I0121 07:17:29.730633 4893 generic.go:334] "Generic (PLEG): container finished" podID="89f70f50-3d66-4917-bfe2-1084a55e4eb9" containerID="8bc18f5c5b3a7199e36a32b48eb54e74da7f96ba56d7cedfcbbe95f361423f06" exitCode=0 Jan 21 07:17:29 crc kubenswrapper[4893]: I0121 07:17:29.730713 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"89f70f50-3d66-4917-bfe2-1084a55e4eb9","Type":"ContainerDied","Data":"8bc18f5c5b3a7199e36a32b48eb54e74da7f96ba56d7cedfcbbe95f361423f06"} Jan 21 07:17:30 crc kubenswrapper[4893]: I0121 07:17:30.738902 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"89f70f50-3d66-4917-bfe2-1084a55e4eb9","Type":"ContainerStarted","Data":"f5932c3efdbfd6885c93667c024f13d35e7e8335300761d7cf6bcc9553b87aaa"} Jan 21 07:17:30 crc kubenswrapper[4893]: I0121 07:17:30.739475 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 21 07:17:30 crc kubenswrapper[4893]: I0121 07:17:30.741263 4893 generic.go:334] "Generic (PLEG): container finished" podID="fdb40d40-7926-424a-810d-3b6f77e1022f" containerID="afb234ccf5a3b8f9be3af40af000806304b977b9072b8c61b82cc2c703dc8d0b" exitCode=0 Jan 21 07:17:30 crc kubenswrapper[4893]: I0121 07:17:30.741307 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fdb40d40-7926-424a-810d-3b6f77e1022f","Type":"ContainerDied","Data":"afb234ccf5a3b8f9be3af40af000806304b977b9072b8c61b82cc2c703dc8d0b"} Jan 21 07:17:30 crc kubenswrapper[4893]: I0121 07:17:30.771436 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.127743774 podStartE2EDuration="1m21.771411147s" podCreationTimestamp="2026-01-21 07:16:09 +0000 UTC" firstStartedPulling="2026-01-21 07:16:11.441234819 +0000 UTC m=+1312.671580721" lastFinishedPulling="2026-01-21 07:16:56.084902192 +0000 UTC m=+1357.315248094" observedRunningTime="2026-01-21 07:17:30.763349514 +0000 UTC m=+1391.993695406" watchObservedRunningTime="2026-01-21 07:17:30.771411147 +0000 UTC m=+1392.001757049" Jan 21 07:17:31 crc kubenswrapper[4893]: I0121 07:17:31.751307 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"fdb40d40-7926-424a-810d-3b6f77e1022f","Type":"ContainerStarted","Data":"fd6edf6018574b9a1c8c87a2c1c4f22c5ad783bb05f0b5bd5d6a157bcdf570ae"} Jan 21 07:17:31 crc kubenswrapper[4893]: I0121 07:17:31.751898 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 21 07:17:31 crc kubenswrapper[4893]: I0121 07:17:31.778590 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.719016502 podStartE2EDuration="1m22.778570789s" podCreationTimestamp="2026-01-21 07:16:09 +0000 UTC" firstStartedPulling="2026-01-21 07:16:11.130170438 +0000 UTC m=+1312.360516340" lastFinishedPulling="2026-01-21 07:16:56.189724725 +0000 UTC m=+1357.420070627" observedRunningTime="2026-01-21 07:17:31.772548924 +0000 UTC m=+1393.002894836" watchObservedRunningTime="2026-01-21 07:17:31.778570789 +0000 UTC m=+1393.008916691" Jan 21 07:17:31 crc kubenswrapper[4893]: I0121 07:17:31.871442 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-x5d5s"] Jan 21 07:17:31 crc kubenswrapper[4893]: E0121 07:17:31.871875 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94d0c87f-f0de-4f54-bae1-7af24c5c7f38" containerName="mariadb-database-create" Jan 21 07:17:31 crc kubenswrapper[4893]: I0121 07:17:31.871893 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="94d0c87f-f0de-4f54-bae1-7af24c5c7f38" containerName="mariadb-database-create" Jan 21 07:17:31 crc kubenswrapper[4893]: E0121 07:17:31.871908 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4b22b42-2fce-4972-a651-1b49ef7b008c" containerName="mariadb-account-create-update" Jan 21 07:17:31 crc kubenswrapper[4893]: I0121 07:17:31.871914 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4b22b42-2fce-4972-a651-1b49ef7b008c" containerName="mariadb-account-create-update" Jan 21 07:17:31 crc kubenswrapper[4893]: E0121 07:17:31.871933 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caef5d13-59d3-4ca2-b6c9-77b9616a91c8" containerName="swift-ring-rebalance" Jan 21 07:17:31 crc kubenswrapper[4893]: I0121 07:17:31.871939 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="caef5d13-59d3-4ca2-b6c9-77b9616a91c8" containerName="swift-ring-rebalance" Jan 21 07:17:31 crc kubenswrapper[4893]: I0121 07:17:31.872091 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="caef5d13-59d3-4ca2-b6c9-77b9616a91c8" containerName="swift-ring-rebalance" Jan 21 07:17:31 crc kubenswrapper[4893]: I0121 07:17:31.872109 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4b22b42-2fce-4972-a651-1b49ef7b008c" containerName="mariadb-account-create-update" Jan 21 07:17:31 crc kubenswrapper[4893]: I0121 07:17:31.872124 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="94d0c87f-f0de-4f54-bae1-7af24c5c7f38" containerName="mariadb-database-create" Jan 21 07:17:31 crc kubenswrapper[4893]: I0121 07:17:31.872776 4893 util.go:30] "No sandbox for pod can be found. 
Jan 21 07:17:31 crc kubenswrapper[4893]: I0121 07:17:31.872776 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-x5d5s"
Jan 21 07:17:31 crc kubenswrapper[4893]: I0121 07:17:31.876328 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Jan 21 07:17:31 crc kubenswrapper[4893]: I0121 07:17:31.892867 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-x5d5s"]
Jan 21 07:17:32 crc kubenswrapper[4893]: I0121 07:17:32.049403 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a19944b7-16bb-416c-8d31-c1bdc47a65b3-operator-scripts\") pod \"root-account-create-update-x5d5s\" (UID: \"a19944b7-16bb-416c-8d31-c1bdc47a65b3\") " pod="openstack/root-account-create-update-x5d5s"
Jan 21 07:17:32 crc kubenswrapper[4893]: I0121 07:17:32.049552 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxcd7\" (UniqueName: \"kubernetes.io/projected/a19944b7-16bb-416c-8d31-c1bdc47a65b3-kube-api-access-jxcd7\") pod \"root-account-create-update-x5d5s\" (UID: \"a19944b7-16bb-416c-8d31-c1bdc47a65b3\") " pod="openstack/root-account-create-update-x5d5s"
Jan 21 07:17:32 crc kubenswrapper[4893]: I0121 07:17:32.150726 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxcd7\" (UniqueName: \"kubernetes.io/projected/a19944b7-16bb-416c-8d31-c1bdc47a65b3-kube-api-access-jxcd7\") pod \"root-account-create-update-x5d5s\" (UID: \"a19944b7-16bb-416c-8d31-c1bdc47a65b3\") " pod="openstack/root-account-create-update-x5d5s"
Jan 21 07:17:32 crc kubenswrapper[4893]: I0121 07:17:32.150853 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a19944b7-16bb-416c-8d31-c1bdc47a65b3-operator-scripts\") pod \"root-account-create-update-x5d5s\" (UID: \"a19944b7-16bb-416c-8d31-c1bdc47a65b3\") " pod="openstack/root-account-create-update-x5d5s"
Jan 21 07:17:32 crc kubenswrapper[4893]: I0121 07:17:32.151871 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a19944b7-16bb-416c-8d31-c1bdc47a65b3-operator-scripts\") pod \"root-account-create-update-x5d5s\" (UID: \"a19944b7-16bb-416c-8d31-c1bdc47a65b3\") " pod="openstack/root-account-create-update-x5d5s"
Jan 21 07:17:32 crc kubenswrapper[4893]: I0121 07:17:32.173405 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxcd7\" (UniqueName: \"kubernetes.io/projected/a19944b7-16bb-416c-8d31-c1bdc47a65b3-kube-api-access-jxcd7\") pod \"root-account-create-update-x5d5s\" (UID: \"a19944b7-16bb-416c-8d31-c1bdc47a65b3\") " pod="openstack/root-account-create-update-x5d5s"
Jan 21 07:17:32 crc kubenswrapper[4893]: I0121 07:17:32.196616 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-x5d5s"
Jan 21 07:17:35 crc kubenswrapper[4893]: I0121 07:17:35.477101 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-zvt96"
Jan 21 07:17:35 crc kubenswrapper[4893]: I0121 07:17:35.492942 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-zvt96"
Jan 21 07:17:35 crc kubenswrapper[4893]: I0121 07:17:35.708309 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-dfvzw-config-jtlj8"]
Jan 21 07:17:35 crc kubenswrapper[4893]: I0121 07:17:35.709419 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-dfvzw-config-jtlj8"
Jan 21 07:17:35 crc kubenswrapper[4893]: I0121 07:17:35.713379 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Jan 21 07:17:35 crc kubenswrapper[4893]: I0121 07:17:35.721183 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-dfvzw-config-jtlj8"]
Jan 21 07:17:35 crc kubenswrapper[4893]: I0121 07:17:35.886784 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjxmc\" (UniqueName: \"kubernetes.io/projected/0d168c1e-9d3c-4456-8d87-fdaee00cdcc9-kube-api-access-gjxmc\") pod \"ovn-controller-dfvzw-config-jtlj8\" (UID: \"0d168c1e-9d3c-4456-8d87-fdaee00cdcc9\") " pod="openstack/ovn-controller-dfvzw-config-jtlj8"
Jan 21 07:17:35 crc kubenswrapper[4893]: I0121 07:17:35.887186 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0d168c1e-9d3c-4456-8d87-fdaee00cdcc9-var-run-ovn\") pod \"ovn-controller-dfvzw-config-jtlj8\" (UID: \"0d168c1e-9d3c-4456-8d87-fdaee00cdcc9\") " pod="openstack/ovn-controller-dfvzw-config-jtlj8"
Jan 21 07:17:35 crc kubenswrapper[4893]: I0121 07:17:35.887332 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0d168c1e-9d3c-4456-8d87-fdaee00cdcc9-var-run\") pod \"ovn-controller-dfvzw-config-jtlj8\" (UID: \"0d168c1e-9d3c-4456-8d87-fdaee00cdcc9\") " pod="openstack/ovn-controller-dfvzw-config-jtlj8"
Jan 21 07:17:35 crc kubenswrapper[4893]: I0121 07:17:35.887492 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0d168c1e-9d3c-4456-8d87-fdaee00cdcc9-scripts\") pod \"ovn-controller-dfvzw-config-jtlj8\" (UID: \"0d168c1e-9d3c-4456-8d87-fdaee00cdcc9\") " pod="openstack/ovn-controller-dfvzw-config-jtlj8"
Jan 21 07:17:35 crc kubenswrapper[4893]: I0121 07:17:35.887617 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0d168c1e-9d3c-4456-8d87-fdaee00cdcc9-var-log-ovn\") pod \"ovn-controller-dfvzw-config-jtlj8\" (UID: \"0d168c1e-9d3c-4456-8d87-fdaee00cdcc9\") " pod="openstack/ovn-controller-dfvzw-config-jtlj8"
Jan 21 07:17:35 crc kubenswrapper[4893]: I0121 07:17:35.887789 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0d168c1e-9d3c-4456-8d87-fdaee00cdcc9-additional-scripts\") pod \"ovn-controller-dfvzw-config-jtlj8\" (UID: \"0d168c1e-9d3c-4456-8d87-fdaee00cdcc9\") " pod="openstack/ovn-controller-dfvzw-config-jtlj8"
Jan 21 07:17:35 crc kubenswrapper[4893]: I0121 07:17:35.989577 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjxmc\" (UniqueName: \"kubernetes.io/projected/0d168c1e-9d3c-4456-8d87-fdaee00cdcc9-kube-api-access-gjxmc\") pod \"ovn-controller-dfvzw-config-jtlj8\" (UID: \"0d168c1e-9d3c-4456-8d87-fdaee00cdcc9\") " pod="openstack/ovn-controller-dfvzw-config-jtlj8"
Jan 21 07:17:35 crc kubenswrapper[4893]: I0121 07:17:35.989661 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0d168c1e-9d3c-4456-8d87-fdaee00cdcc9-var-run-ovn\") pod \"ovn-controller-dfvzw-config-jtlj8\" (UID: \"0d168c1e-9d3c-4456-8d87-fdaee00cdcc9\") " pod="openstack/ovn-controller-dfvzw-config-jtlj8"
Jan 21 07:17:35 crc kubenswrapper[4893]: I0121 07:17:35.989762 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0d168c1e-9d3c-4456-8d87-fdaee00cdcc9-var-run\") pod \"ovn-controller-dfvzw-config-jtlj8\" (UID: \"0d168c1e-9d3c-4456-8d87-fdaee00cdcc9\") " pod="openstack/ovn-controller-dfvzw-config-jtlj8"
Jan 21 07:17:35 crc kubenswrapper[4893]: I0121 07:17:35.989802 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0d168c1e-9d3c-4456-8d87-fdaee00cdcc9-scripts\") pod \"ovn-controller-dfvzw-config-jtlj8\" (UID: \"0d168c1e-9d3c-4456-8d87-fdaee00cdcc9\") " pod="openstack/ovn-controller-dfvzw-config-jtlj8"
Jan 21 07:17:35 crc kubenswrapper[4893]: I0121 07:17:35.989835 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0d168c1e-9d3c-4456-8d87-fdaee00cdcc9-var-log-ovn\") pod \"ovn-controller-dfvzw-config-jtlj8\" (UID: \"0d168c1e-9d3c-4456-8d87-fdaee00cdcc9\") " pod="openstack/ovn-controller-dfvzw-config-jtlj8"
Jan 21 07:17:35 crc kubenswrapper[4893]: I0121 07:17:35.989890 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0d168c1e-9d3c-4456-8d87-fdaee00cdcc9-additional-scripts\") pod \"ovn-controller-dfvzw-config-jtlj8\" (UID: \"0d168c1e-9d3c-4456-8d87-fdaee00cdcc9\") " pod="openstack/ovn-controller-dfvzw-config-jtlj8"
Jan 21 07:17:35 crc kubenswrapper[4893]: I0121 07:17:35.990176 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0d168c1e-9d3c-4456-8d87-fdaee00cdcc9-var-run-ovn\") pod \"ovn-controller-dfvzw-config-jtlj8\" (UID: \"0d168c1e-9d3c-4456-8d87-fdaee00cdcc9\") " pod="openstack/ovn-controller-dfvzw-config-jtlj8"
Jan 21 07:17:35 crc kubenswrapper[4893]: I0121 07:17:35.990298 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0d168c1e-9d3c-4456-8d87-fdaee00cdcc9-var-run\") pod \"ovn-controller-dfvzw-config-jtlj8\" (UID: \"0d168c1e-9d3c-4456-8d87-fdaee00cdcc9\") " pod="openstack/ovn-controller-dfvzw-config-jtlj8"
Jan 21 07:17:35 crc kubenswrapper[4893]: I0121 07:17:35.990415 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0d168c1e-9d3c-4456-8d87-fdaee00cdcc9-var-log-ovn\") pod \"ovn-controller-dfvzw-config-jtlj8\" (UID: \"0d168c1e-9d3c-4456-8d87-fdaee00cdcc9\") " pod="openstack/ovn-controller-dfvzw-config-jtlj8"
Jan 21 07:17:35 crc kubenswrapper[4893]: I0121 07:17:35.990773 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0d168c1e-9d3c-4456-8d87-fdaee00cdcc9-additional-scripts\") pod \"ovn-controller-dfvzw-config-jtlj8\" (UID: \"0d168c1e-9d3c-4456-8d87-fdaee00cdcc9\") " pod="openstack/ovn-controller-dfvzw-config-jtlj8"
Jan 21 07:17:35 crc kubenswrapper[4893]: I0121 07:17:35.993841 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0d168c1e-9d3c-4456-8d87-fdaee00cdcc9-scripts\") pod \"ovn-controller-dfvzw-config-jtlj8\" (UID: \"0d168c1e-9d3c-4456-8d87-fdaee00cdcc9\") " pod="openstack/ovn-controller-dfvzw-config-jtlj8"
Jan 21 07:17:36 crc kubenswrapper[4893]: I0121 07:17:36.011348 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjxmc\" (UniqueName: \"kubernetes.io/projected/0d168c1e-9d3c-4456-8d87-fdaee00cdcc9-kube-api-access-gjxmc\") pod \"ovn-controller-dfvzw-config-jtlj8\" (UID: \"0d168c1e-9d3c-4456-8d87-fdaee00cdcc9\") " pod="openstack/ovn-controller-dfvzw-config-jtlj8"
Jan 21 07:17:36 crc kubenswrapper[4893]: I0121 07:17:36.030505 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-dfvzw-config-jtlj8"
Jan 21 07:17:38 crc kubenswrapper[4893]: I0121 07:17:38.740448 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b-etc-swift\") pod \"swift-storage-0\" (UID: \"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b\") " pod="openstack/swift-storage-0"
Jan 21 07:17:38 crc kubenswrapper[4893]: I0121 07:17:38.759515 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b-etc-swift\") pod \"swift-storage-0\" (UID: \"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b\") " pod="openstack/swift-storage-0"
Jan 21 07:17:38 crc kubenswrapper[4893]: I0121 07:17:38.794639 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Jan 21 07:17:40 crc kubenswrapper[4893]: I0121 07:17:40.418368 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-dfvzw" podUID="80680178-a1d2-4135-8949-881dc7ac92ea" containerName="ovn-controller" probeResult="failure" output=<
Jan 21 07:17:40 crc kubenswrapper[4893]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Jan 21 07:17:40 crc kubenswrapper[4893]: >
Jan 21 07:17:40 crc kubenswrapper[4893]: I0121 07:17:40.421380 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="fdb40d40-7926-424a-810d-3b6f77e1022f" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.102:5671: connect: connection refused"
Jan 21 07:17:40 crc kubenswrapper[4893]: I0121 07:17:40.852762 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="89f70f50-3d66-4917-bfe2-1084a55e4eb9" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.103:5671: connect: connection refused"
Jan 21 07:17:41 crc kubenswrapper[4893]: E0121 07:17:41.021968 4893 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api@sha256:e4aa4ebbb1e581a12040e9ad2ae2709ac31b5d965bb64fc4252d1028b05c565f"
Jan 21 07:17:41 crc kubenswrapper[4893]: E0121 07:17:41.022153 4893 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api@sha256:e4aa4ebbb1e581a12040e9ad2ae2709ac31b5d965bb64fc4252d1028b05c565f,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8ct25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-bwx97_openstack(fff7b3eb-e8c3-4d58-932b-3738b1e8dffa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 21 07:17:41 crc kubenswrapper[4893]: E0121 07:17:41.024157 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-bwx97" podUID="fff7b3eb-e8c3-4d58-932b-3738b1e8dffa"
Jan 21 07:17:41 crc kubenswrapper[4893]: I0121 07:17:41.557849 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-x5d5s"]
Jan 21 07:17:41 crc kubenswrapper[4893]: I0121 07:17:41.693954 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-dfvzw-config-jtlj8"]
Jan 21 07:17:41 crc kubenswrapper[4893]: W0121 07:17:41.703627 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d168c1e_9d3c_4456_8d87_fdaee00cdcc9.slice/crio-8ab987ceec4d8b604a3755e834f58809e7ff1276bcfa63a45031cbf75f164a6e WatchSource:0}: Error finding container 8ab987ceec4d8b604a3755e834f58809e7ff1276bcfa63a45031cbf75f164a6e: Status 404 returned error can't find the container with id 8ab987ceec4d8b604a3755e834f58809e7ff1276bcfa63a45031cbf75f164a6e
Jan 21 07:17:41 crc kubenswrapper[4893]: I0121 07:17:41.706432 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Jan 21 07:17:41 crc kubenswrapper[4893]: W0121 07:17:41.707257 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1dc34290_d23e_4d76_a6ea_dd2f4b1d9a0b.slice/crio-be7235450e40f3c081de47d36e67875962a250359752df40f0788f5e6c402593 WatchSource:0}: Error finding container be7235450e40f3c081de47d36e67875962a250359752df40f0788f5e6c402593: Status 404 returned error can't find the container with id be7235450e40f3c081de47d36e67875962a250359752df40f0788f5e6c402593
Jan 21 07:17:41 crc kubenswrapper[4893]: I0121 07:17:41.850769 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b","Type":"ContainerStarted","Data":"be7235450e40f3c081de47d36e67875962a250359752df40f0788f5e6c402593"}
Jan 21 07:17:41 crc kubenswrapper[4893]: I0121 07:17:41.854511 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-x5d5s" event={"ID":"a19944b7-16bb-416c-8d31-c1bdc47a65b3","Type":"ContainerStarted","Data":"7a92121d556490617443ac88942399ee3d25ee6302e354bf063b9e489ec4c6fa"}
Jan 21 07:17:41 crc kubenswrapper[4893]: I0121 07:17:41.854564 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-x5d5s" event={"ID":"a19944b7-16bb-416c-8d31-c1bdc47a65b3","Type":"ContainerStarted","Data":"cbcdb0a43b3c45e4ea247e4463e36ebf415f7d99cc97b36c0b0f99ab0d838b06"}
Jan 21 07:17:41 crc kubenswrapper[4893]: I0121 07:17:41.856551 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dfvzw-config-jtlj8" event={"ID":"0d168c1e-9d3c-4456-8d87-fdaee00cdcc9","Type":"ContainerStarted","Data":"8ab987ceec4d8b604a3755e834f58809e7ff1276bcfa63a45031cbf75f164a6e"}
Jan 21 07:17:41 crc kubenswrapper[4893]: E0121 07:17:41.856974 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api@sha256:e4aa4ebbb1e581a12040e9ad2ae2709ac31b5d965bb64fc4252d1028b05c565f\\\"\"" pod="openstack/glance-db-sync-bwx97" podUID="fff7b3eb-e8c3-4d58-932b-3738b1e8dffa"
Jan 21 07:17:41 crc kubenswrapper[4893]: I0121 07:17:41.873367 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-x5d5s" podStartSLOduration=10.87334797 podStartE2EDuration="10.87334797s" podCreationTimestamp="2026-01-21 07:17:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:17:41.872575518 +0000 UTC m=+1403.102921440" watchObservedRunningTime="2026-01-21 07:17:41.87334797 +0000 UTC m=+1403.103693872"
Jan 21 07:17:41 crc kubenswrapper[4893]: E0121 07:17:41.988031 4893 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda19944b7_16bb_416c_8d31_c1bdc47a65b3.slice/crio-7a92121d556490617443ac88942399ee3d25ee6302e354bf063b9e489ec4c6fa.scope\": RecentStats: unable to find data in memory cache]"
Jan 21 07:17:42 crc kubenswrapper[4893]: I0121 07:17:42.869151 4893 generic.go:334] "Generic (PLEG): container finished" podID="0d168c1e-9d3c-4456-8d87-fdaee00cdcc9" containerID="f215f8d498420359c64aca99b2de79273f19ec7f8b4b742bb3bb89b42bf73cc0" exitCode=0
Jan 21 07:17:42 crc kubenswrapper[4893]: I0121 07:17:42.869208 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dfvzw-config-jtlj8" event={"ID":"0d168c1e-9d3c-4456-8d87-fdaee00cdcc9","Type":"ContainerDied","Data":"f215f8d498420359c64aca99b2de79273f19ec7f8b4b742bb3bb89b42bf73cc0"}
Jan 21 07:17:42 crc kubenswrapper[4893]: I0121 07:17:42.871772 4893 generic.go:334] "Generic (PLEG): container finished" podID="a19944b7-16bb-416c-8d31-c1bdc47a65b3" containerID="7a92121d556490617443ac88942399ee3d25ee6302e354bf063b9e489ec4c6fa" exitCode=0
Jan 21 07:17:42 crc kubenswrapper[4893]: I0121 07:17:42.871824 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-x5d5s" event={"ID":"a19944b7-16bb-416c-8d31-c1bdc47a65b3","Type":"ContainerDied","Data":"7a92121d556490617443ac88942399ee3d25ee6302e354bf063b9e489ec4c6fa"}
Jan 21 07:17:43 crc kubenswrapper[4893]: I0121 07:17:43.881806 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b","Type":"ContainerStarted","Data":"10d753ac1428ba120d45a7811e9fca56f7ef4a1826bf444055d4ad6a929e369e"}
Jan 21 07:17:43 crc kubenswrapper[4893]: I0121 07:17:43.882051 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b","Type":"ContainerStarted","Data":"6b3241ca824451ec282b6865606bf40f2795c9f27b6217a6c0357120a18a6e9b"}
Jan 21 07:17:43 crc kubenswrapper[4893]: I0121 07:17:43.882062 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b","Type":"ContainerStarted","Data":"f9ccf20497fe8d76385ede790b0446d55927de6fa28eb3b5854f288b82fc7991"}
event={"ID":"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b","Type":"ContainerStarted","Data":"de3d17ba39098b400e960c4859abe64ec8453a5b7438c807895a08f576cb1c61"} Jan 21 07:17:44 crc kubenswrapper[4893]: I0121 07:17:44.323760 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-x5d5s" Jan 21 07:17:44 crc kubenswrapper[4893]: I0121 07:17:44.331314 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-dfvzw-config-jtlj8" Jan 21 07:17:44 crc kubenswrapper[4893]: I0121 07:17:44.488703 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0d168c1e-9d3c-4456-8d87-fdaee00cdcc9-var-log-ovn\") pod \"0d168c1e-9d3c-4456-8d87-fdaee00cdcc9\" (UID: \"0d168c1e-9d3c-4456-8d87-fdaee00cdcc9\") " Jan 21 07:17:44 crc kubenswrapper[4893]: I0121 07:17:44.488774 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0d168c1e-9d3c-4456-8d87-fdaee00cdcc9-additional-scripts\") pod \"0d168c1e-9d3c-4456-8d87-fdaee00cdcc9\" (UID: \"0d168c1e-9d3c-4456-8d87-fdaee00cdcc9\") " Jan 21 07:17:44 crc kubenswrapper[4893]: I0121 07:17:44.488801 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxcd7\" (UniqueName: \"kubernetes.io/projected/a19944b7-16bb-416c-8d31-c1bdc47a65b3-kube-api-access-jxcd7\") pod \"a19944b7-16bb-416c-8d31-c1bdc47a65b3\" (UID: \"a19944b7-16bb-416c-8d31-c1bdc47a65b3\") " Jan 21 07:17:44 crc kubenswrapper[4893]: I0121 07:17:44.488823 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjxmc\" (UniqueName: \"kubernetes.io/projected/0d168c1e-9d3c-4456-8d87-fdaee00cdcc9-kube-api-access-gjxmc\") pod \"0d168c1e-9d3c-4456-8d87-fdaee00cdcc9\" (UID: \"0d168c1e-9d3c-4456-8d87-fdaee00cdcc9\") " Jan 21 07:17:44 crc kubenswrapper[4893]: I0121 07:17:44.488872 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0d168c1e-9d3c-4456-8d87-fdaee00cdcc9-scripts\") pod \"0d168c1e-9d3c-4456-8d87-fdaee00cdcc9\" (UID: \"0d168c1e-9d3c-4456-8d87-fdaee00cdcc9\") " Jan 21 07:17:44 crc kubenswrapper[4893]: I0121 07:17:44.488927 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a19944b7-16bb-416c-8d31-c1bdc47a65b3-operator-scripts\") pod \"a19944b7-16bb-416c-8d31-c1bdc47a65b3\" (UID: \"a19944b7-16bb-416c-8d31-c1bdc47a65b3\") " Jan 21 07:17:44 crc kubenswrapper[4893]: I0121 07:17:44.488981 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0d168c1e-9d3c-4456-8d87-fdaee00cdcc9-var-run-ovn\") pod \"0d168c1e-9d3c-4456-8d87-fdaee00cdcc9\" (UID: \"0d168c1e-9d3c-4456-8d87-fdaee00cdcc9\") " Jan 21 07:17:44 crc kubenswrapper[4893]: I0121 07:17:44.489023 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0d168c1e-9d3c-4456-8d87-fdaee00cdcc9-var-run\") pod \"0d168c1e-9d3c-4456-8d87-fdaee00cdcc9\" (UID: \"0d168c1e-9d3c-4456-8d87-fdaee00cdcc9\") " Jan 21 07:17:44 crc kubenswrapper[4893]: I0121 07:17:44.488862 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/0d168c1e-9d3c-4456-8d87-fdaee00cdcc9-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "0d168c1e-9d3c-4456-8d87-fdaee00cdcc9" (UID: "0d168c1e-9d3c-4456-8d87-fdaee00cdcc9"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 07:17:44 crc kubenswrapper[4893]: I0121 07:17:44.489346 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d168c1e-9d3c-4456-8d87-fdaee00cdcc9-var-run" (OuterVolumeSpecName: "var-run") pod "0d168c1e-9d3c-4456-8d87-fdaee00cdcc9" (UID: "0d168c1e-9d3c-4456-8d87-fdaee00cdcc9"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 07:17:44 crc kubenswrapper[4893]: I0121 07:17:44.489423 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d168c1e-9d3c-4456-8d87-fdaee00cdcc9-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "0d168c1e-9d3c-4456-8d87-fdaee00cdcc9" (UID: "0d168c1e-9d3c-4456-8d87-fdaee00cdcc9"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 07:17:44 crc kubenswrapper[4893]: I0121 07:17:44.489805 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a19944b7-16bb-416c-8d31-c1bdc47a65b3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a19944b7-16bb-416c-8d31-c1bdc47a65b3" (UID: "a19944b7-16bb-416c-8d31-c1bdc47a65b3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:17:44 crc kubenswrapper[4893]: I0121 07:17:44.489816 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d168c1e-9d3c-4456-8d87-fdaee00cdcc9-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "0d168c1e-9d3c-4456-8d87-fdaee00cdcc9" (UID: "0d168c1e-9d3c-4456-8d87-fdaee00cdcc9"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:17:44 crc kubenswrapper[4893]: I0121 07:17:44.490282 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d168c1e-9d3c-4456-8d87-fdaee00cdcc9-scripts" (OuterVolumeSpecName: "scripts") pod "0d168c1e-9d3c-4456-8d87-fdaee00cdcc9" (UID: "0d168c1e-9d3c-4456-8d87-fdaee00cdcc9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:17:44 crc kubenswrapper[4893]: I0121 07:17:44.494539 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a19944b7-16bb-416c-8d31-c1bdc47a65b3-kube-api-access-jxcd7" (OuterVolumeSpecName: "kube-api-access-jxcd7") pod "a19944b7-16bb-416c-8d31-c1bdc47a65b3" (UID: "a19944b7-16bb-416c-8d31-c1bdc47a65b3"). InnerVolumeSpecName "kube-api-access-jxcd7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:17:44 crc kubenswrapper[4893]: I0121 07:17:44.510525 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d168c1e-9d3c-4456-8d87-fdaee00cdcc9-kube-api-access-gjxmc" (OuterVolumeSpecName: "kube-api-access-gjxmc") pod "0d168c1e-9d3c-4456-8d87-fdaee00cdcc9" (UID: "0d168c1e-9d3c-4456-8d87-fdaee00cdcc9"). InnerVolumeSpecName "kube-api-access-gjxmc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:17:44 crc kubenswrapper[4893]: I0121 07:17:44.591266 4893 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0d168c1e-9d3c-4456-8d87-fdaee00cdcc9-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 07:17:44 crc kubenswrapper[4893]: I0121 07:17:44.591314 4893 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0d168c1e-9d3c-4456-8d87-fdaee00cdcc9-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:17:44 crc kubenswrapper[4893]: I0121 07:17:44.591329 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxcd7\" (UniqueName: \"kubernetes.io/projected/a19944b7-16bb-416c-8d31-c1bdc47a65b3-kube-api-access-jxcd7\") on node \"crc\" DevicePath \"\"" Jan 21 07:17:44 crc kubenswrapper[4893]: I0121 07:17:44.591342 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjxmc\" (UniqueName: \"kubernetes.io/projected/0d168c1e-9d3c-4456-8d87-fdaee00cdcc9-kube-api-access-gjxmc\") on node \"crc\" DevicePath \"\"" Jan 21 07:17:44 crc kubenswrapper[4893]: I0121 07:17:44.591731 4893 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0d168c1e-9d3c-4456-8d87-fdaee00cdcc9-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:17:44 crc kubenswrapper[4893]: I0121 07:17:44.591750 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a19944b7-16bb-416c-8d31-c1bdc47a65b3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:17:44 crc kubenswrapper[4893]: I0121 07:17:44.591765 4893 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0d168c1e-9d3c-4456-8d87-fdaee00cdcc9-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 07:17:44 crc kubenswrapper[4893]: I0121 07:17:44.591778 4893 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0d168c1e-9d3c-4456-8d87-fdaee00cdcc9-var-run\") on node \"crc\" DevicePath \"\"" Jan 21 07:17:44 crc kubenswrapper[4893]: I0121 07:17:44.894541 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dfvzw-config-jtlj8" event={"ID":"0d168c1e-9d3c-4456-8d87-fdaee00cdcc9","Type":"ContainerDied","Data":"8ab987ceec4d8b604a3755e834f58809e7ff1276bcfa63a45031cbf75f164a6e"} Jan 21 07:17:44 crc kubenswrapper[4893]: I0121 07:17:44.894598 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ab987ceec4d8b604a3755e834f58809e7ff1276bcfa63a45031cbf75f164a6e" Jan 21 07:17:44 crc kubenswrapper[4893]: I0121 07:17:44.894599 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-dfvzw-config-jtlj8" Jan 21 07:17:44 crc kubenswrapper[4893]: I0121 07:17:44.897048 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-x5d5s" event={"ID":"a19944b7-16bb-416c-8d31-c1bdc47a65b3","Type":"ContainerDied","Data":"cbcdb0a43b3c45e4ea247e4463e36ebf415f7d99cc97b36c0b0f99ab0d838b06"} Jan 21 07:17:44 crc kubenswrapper[4893]: I0121 07:17:44.897097 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cbcdb0a43b3c45e4ea247e4463e36ebf415f7d99cc97b36c0b0f99ab0d838b06" Jan 21 07:17:44 crc kubenswrapper[4893]: I0121 07:17:44.897123 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-x5d5s" Jan 21 07:17:45 crc kubenswrapper[4893]: I0121 07:17:45.444722 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-dfvzw" Jan 21 07:17:45 crc kubenswrapper[4893]: I0121 07:17:45.491062 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-dfvzw-config-jtlj8"] Jan 21 07:17:45 crc kubenswrapper[4893]: I0121 07:17:45.499511 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-dfvzw-config-jtlj8"] Jan 21 07:17:45 crc kubenswrapper[4893]: I0121 07:17:45.604647 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d168c1e-9d3c-4456-8d87-fdaee00cdcc9" path="/var/lib/kubelet/pods/0d168c1e-9d3c-4456-8d87-fdaee00cdcc9/volumes" Jan 21 07:17:45 crc kubenswrapper[4893]: I0121 07:17:45.913557 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b","Type":"ContainerStarted","Data":"28f0c95878d18811a355a7d69ad8da527d18fc77022addfeaf2830eb6d3f6a58"} Jan 21 07:17:45 crc kubenswrapper[4893]: I0121 07:17:45.913601 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b","Type":"ContainerStarted","Data":"6c4ea7e3f7722a19ae4b7b9d432e39556b9257e17680f7182fa23f27573643bf"} Jan 21 07:17:45 crc kubenswrapper[4893]: I0121 07:17:45.913613 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b","Type":"ContainerStarted","Data":"651e96881878d275dfe2a4a1c62471fd4cf86d8d8127d90f3d7087add5021953"} Jan 21 07:17:46 crc kubenswrapper[4893]: I0121 07:17:46.923151 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b","Type":"ContainerStarted","Data":"547e0cdd8689c56343dabedff5738e3f1d04a8d69d96acd746b136ec28002be6"} Jan 21 07:17:47 crc kubenswrapper[4893]: I0121 07:17:47.935511 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b","Type":"ContainerStarted","Data":"8d08cf04d81866f12ef2bd434ff7bf4f3ff11d56786d98d0d1fac803ddd360ca"} Jan 21 07:17:47 crc kubenswrapper[4893]: I0121 07:17:47.936081 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b","Type":"ContainerStarted","Data":"f63f20cab196ddeafd343f3658285555d44901b431007768efc93ae2a8129f02"} Jan 21 07:17:47 crc kubenswrapper[4893]: I0121 07:17:47.936107 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b","Type":"ContainerStarted","Data":"bce00b11d38795c86e6d149154dd8aac8c079d3c7fa177fce0f83ff6166a6875"} Jan 21 07:17:48 crc kubenswrapper[4893]: I0121 07:17:48.955524 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b","Type":"ContainerStarted","Data":"6a87281c1caeb6e4039eec07e768f22f9b309659361f188928eda3e3a1dbb21a"} Jan 21 07:17:48 crc kubenswrapper[4893]: I0121 07:17:48.956153 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b","Type":"ContainerStarted","Data":"7b5731c8b577d290be2d86e362e9bb9f2c16bff9031dd2e710aba07ac2ce04ed"} Jan 21 07:17:48 crc kubenswrapper[4893]: I0121 07:17:48.956172 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b","Type":"ContainerStarted","Data":"463d18ccd25d3b9dfd2bc47bf68e566d842db8f27cec0e30693b206ff7b49443"} Jan 21 07:17:48 crc kubenswrapper[4893]: I0121 07:17:48.956189 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b","Type":"ContainerStarted","Data":"99f8c2ddbd19e36b260905f52c96953335f374caada62eaa5e2f0f5d967d416d"} Jan 21 07:17:49 crc kubenswrapper[4893]: I0121 07:17:49.001512 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=38.439583664 podStartE2EDuration="44.001481467s" podCreationTimestamp="2026-01-21 07:17:05 +0000 UTC" firstStartedPulling="2026-01-21 07:17:41.719713021 +0000 UTC m=+1402.950058923" lastFinishedPulling="2026-01-21 07:17:47.281610814 +0000 UTC m=+1408.511956726" observedRunningTime="2026-01-21 07:17:48.99432932 +0000 UTC m=+1410.224675232" watchObservedRunningTime="2026-01-21 07:17:49.001481467 +0000 UTC m=+1410.231827389" Jan 21 07:17:49 crc kubenswrapper[4893]: I0121 07:17:49.371978 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8db84466c-kt44r"] Jan 21 07:17:49 crc kubenswrapper[4893]: E0121 07:17:49.372620 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a19944b7-16bb-416c-8d31-c1bdc47a65b3" containerName="mariadb-account-create-update" Jan 21 07:17:49 crc kubenswrapper[4893]: I0121 07:17:49.372651 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="a19944b7-16bb-416c-8d31-c1bdc47a65b3" containerName="mariadb-account-create-update" Jan 21 07:17:49 crc kubenswrapper[4893]: E0121 07:17:49.372737 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d168c1e-9d3c-4456-8d87-fdaee00cdcc9" containerName="ovn-config" Jan 21 07:17:49 crc kubenswrapper[4893]: I0121 07:17:49.372763 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d168c1e-9d3c-4456-8d87-fdaee00cdcc9" containerName="ovn-config" Jan 21 07:17:49 crc kubenswrapper[4893]: I0121 07:17:49.373030 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="a19944b7-16bb-416c-8d31-c1bdc47a65b3" containerName="mariadb-account-create-update" Jan 21 07:17:49 crc kubenswrapper[4893]: I0121 07:17:49.373072 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d168c1e-9d3c-4456-8d87-fdaee00cdcc9" containerName="ovn-config" Jan 21 07:17:49 crc kubenswrapper[4893]: I0121 07:17:49.374706 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8db84466c-kt44r" Jan 21 07:17:49 crc kubenswrapper[4893]: I0121 07:17:49.379129 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 21 07:17:49 crc kubenswrapper[4893]: I0121 07:17:49.423136 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34db2d2f-d623-4567-b27b-12b205e66587-dns-svc\") pod \"dnsmasq-dns-8db84466c-kt44r\" (UID: \"34db2d2f-d623-4567-b27b-12b205e66587\") " pod="openstack/dnsmasq-dns-8db84466c-kt44r" Jan 21 07:17:49 crc kubenswrapper[4893]: I0121 07:17:49.423203 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/34db2d2f-d623-4567-b27b-12b205e66587-ovsdbserver-sb\") pod \"dnsmasq-dns-8db84466c-kt44r\" (UID: \"34db2d2f-d623-4567-b27b-12b205e66587\") " pod="openstack/dnsmasq-dns-8db84466c-kt44r" Jan 21 07:17:49 crc kubenswrapper[4893]: I0121 07:17:49.423227 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/34db2d2f-d623-4567-b27b-12b205e66587-dns-swift-storage-0\") pod \"dnsmasq-dns-8db84466c-kt44r\" (UID: \"34db2d2f-d623-4567-b27b-12b205e66587\") " pod="openstack/dnsmasq-dns-8db84466c-kt44r" Jan 21 07:17:49 crc kubenswrapper[4893]: I0121 07:17:49.423252 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w88bp\" (UniqueName: \"kubernetes.io/projected/34db2d2f-d623-4567-b27b-12b205e66587-kube-api-access-w88bp\") pod \"dnsmasq-dns-8db84466c-kt44r\" (UID: \"34db2d2f-d623-4567-b27b-12b205e66587\") " pod="openstack/dnsmasq-dns-8db84466c-kt44r" Jan 21 07:17:49 crc kubenswrapper[4893]: I0121 07:17:49.423278 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/34db2d2f-d623-4567-b27b-12b205e66587-ovsdbserver-nb\") pod \"dnsmasq-dns-8db84466c-kt44r\" (UID: \"34db2d2f-d623-4567-b27b-12b205e66587\") " pod="openstack/dnsmasq-dns-8db84466c-kt44r" Jan 21 07:17:49 crc kubenswrapper[4893]: I0121 07:17:49.423331 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34db2d2f-d623-4567-b27b-12b205e66587-config\") pod \"dnsmasq-dns-8db84466c-kt44r\" (UID: \"34db2d2f-d623-4567-b27b-12b205e66587\") " pod="openstack/dnsmasq-dns-8db84466c-kt44r" Jan 21 07:17:49 crc kubenswrapper[4893]: I0121 07:17:49.436057 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8db84466c-kt44r"] Jan 21 07:17:49 crc kubenswrapper[4893]: I0121 07:17:49.525154 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/34db2d2f-d623-4567-b27b-12b205e66587-ovsdbserver-sb\") pod \"dnsmasq-dns-8db84466c-kt44r\" (UID: \"34db2d2f-d623-4567-b27b-12b205e66587\") " pod="openstack/dnsmasq-dns-8db84466c-kt44r" Jan 21 07:17:49 crc kubenswrapper[4893]: I0121 07:17:49.525202 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/34db2d2f-d623-4567-b27b-12b205e66587-dns-swift-storage-0\") pod \"dnsmasq-dns-8db84466c-kt44r\" (UID: 
\"34db2d2f-d623-4567-b27b-12b205e66587\") " pod="openstack/dnsmasq-dns-8db84466c-kt44r" Jan 21 07:17:49 crc kubenswrapper[4893]: I0121 07:17:49.525237 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w88bp\" (UniqueName: \"kubernetes.io/projected/34db2d2f-d623-4567-b27b-12b205e66587-kube-api-access-w88bp\") pod \"dnsmasq-dns-8db84466c-kt44r\" (UID: \"34db2d2f-d623-4567-b27b-12b205e66587\") " pod="openstack/dnsmasq-dns-8db84466c-kt44r" Jan 21 07:17:49 crc kubenswrapper[4893]: I0121 07:17:49.525262 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/34db2d2f-d623-4567-b27b-12b205e66587-ovsdbserver-nb\") pod \"dnsmasq-dns-8db84466c-kt44r\" (UID: \"34db2d2f-d623-4567-b27b-12b205e66587\") " pod="openstack/dnsmasq-dns-8db84466c-kt44r" Jan 21 07:17:49 crc kubenswrapper[4893]: I0121 07:17:49.525324 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34db2d2f-d623-4567-b27b-12b205e66587-config\") pod \"dnsmasq-dns-8db84466c-kt44r\" (UID: \"34db2d2f-d623-4567-b27b-12b205e66587\") " pod="openstack/dnsmasq-dns-8db84466c-kt44r" Jan 21 07:17:49 crc kubenswrapper[4893]: I0121 07:17:49.525426 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34db2d2f-d623-4567-b27b-12b205e66587-dns-svc\") pod \"dnsmasq-dns-8db84466c-kt44r\" (UID: \"34db2d2f-d623-4567-b27b-12b205e66587\") " pod="openstack/dnsmasq-dns-8db84466c-kt44r" Jan 21 07:17:49 crc kubenswrapper[4893]: I0121 07:17:49.526373 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34db2d2f-d623-4567-b27b-12b205e66587-dns-svc\") pod \"dnsmasq-dns-8db84466c-kt44r\" (UID: \"34db2d2f-d623-4567-b27b-12b205e66587\") " pod="openstack/dnsmasq-dns-8db84466c-kt44r" Jan 21 07:17:49 crc kubenswrapper[4893]: I0121 07:17:49.526397 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/34db2d2f-d623-4567-b27b-12b205e66587-dns-swift-storage-0\") pod \"dnsmasq-dns-8db84466c-kt44r\" (UID: \"34db2d2f-d623-4567-b27b-12b205e66587\") " pod="openstack/dnsmasq-dns-8db84466c-kt44r" Jan 21 07:17:49 crc kubenswrapper[4893]: I0121 07:17:49.526666 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34db2d2f-d623-4567-b27b-12b205e66587-config\") pod \"dnsmasq-dns-8db84466c-kt44r\" (UID: \"34db2d2f-d623-4567-b27b-12b205e66587\") " pod="openstack/dnsmasq-dns-8db84466c-kt44r" Jan 21 07:17:49 crc kubenswrapper[4893]: I0121 07:17:49.526773 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/34db2d2f-d623-4567-b27b-12b205e66587-ovsdbserver-nb\") pod \"dnsmasq-dns-8db84466c-kt44r\" (UID: \"34db2d2f-d623-4567-b27b-12b205e66587\") " pod="openstack/dnsmasq-dns-8db84466c-kt44r" Jan 21 07:17:49 crc kubenswrapper[4893]: I0121 07:17:49.528129 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/34db2d2f-d623-4567-b27b-12b205e66587-ovsdbserver-sb\") pod \"dnsmasq-dns-8db84466c-kt44r\" (UID: \"34db2d2f-d623-4567-b27b-12b205e66587\") " pod="openstack/dnsmasq-dns-8db84466c-kt44r" Jan 21 07:17:49 crc kubenswrapper[4893]: I0121 
07:17:49.552630 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w88bp\" (UniqueName: \"kubernetes.io/projected/34db2d2f-d623-4567-b27b-12b205e66587-kube-api-access-w88bp\") pod \"dnsmasq-dns-8db84466c-kt44r\" (UID: \"34db2d2f-d623-4567-b27b-12b205e66587\") " pod="openstack/dnsmasq-dns-8db84466c-kt44r" Jan 21 07:17:49 crc kubenswrapper[4893]: I0121 07:17:49.743310 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8db84466c-kt44r" Jan 21 07:17:50 crc kubenswrapper[4893]: I0121 07:17:50.163707 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8db84466c-kt44r"] Jan 21 07:17:50 crc kubenswrapper[4893]: W0121 07:17:50.164504 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34db2d2f_d623_4567_b27b_12b205e66587.slice/crio-4d13898b9d0cafa0ab23acf1298b85df05549f0a2fea479c59bc25828a5bd49e WatchSource:0}: Error finding container 4d13898b9d0cafa0ab23acf1298b85df05549f0a2fea479c59bc25828a5bd49e: Status 404 returned error can't find the container with id 4d13898b9d0cafa0ab23acf1298b85df05549f0a2fea479c59bc25828a5bd49e Jan 21 07:17:50 crc kubenswrapper[4893]: I0121 07:17:50.421905 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 21 07:17:50 crc kubenswrapper[4893]: I0121 07:17:50.852207 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 21 07:17:50 crc kubenswrapper[4893]: I0121 07:17:50.976453 4893 generic.go:334] "Generic (PLEG): container finished" podID="34db2d2f-d623-4567-b27b-12b205e66587" containerID="033bb8d5f8aaa247363d6b97db925066e80a3d14612e7799b689c5b2a0a5b7a4" exitCode=0 Jan 21 07:17:50 crc kubenswrapper[4893]: I0121 07:17:50.976498 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8db84466c-kt44r" event={"ID":"34db2d2f-d623-4567-b27b-12b205e66587","Type":"ContainerDied","Data":"033bb8d5f8aaa247363d6b97db925066e80a3d14612e7799b689c5b2a0a5b7a4"} Jan 21 07:17:50 crc kubenswrapper[4893]: I0121 07:17:50.976532 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8db84466c-kt44r" event={"ID":"34db2d2f-d623-4567-b27b-12b205e66587","Type":"ContainerStarted","Data":"4d13898b9d0cafa0ab23acf1298b85df05549f0a2fea479c59bc25828a5bd49e"} Jan 21 07:17:51 crc kubenswrapper[4893]: I0121 07:17:51.987079 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8db84466c-kt44r" event={"ID":"34db2d2f-d623-4567-b27b-12b205e66587","Type":"ContainerStarted","Data":"7108e349b8317054374147ccd214b5fd604f0c3792e4a548e64c7029b570f254"} Jan 21 07:17:51 crc kubenswrapper[4893]: I0121 07:17:51.988444 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8db84466c-kt44r" Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.025385 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8db84466c-kt44r" podStartSLOduration=3.025357939 podStartE2EDuration="3.025357939s" podCreationTimestamp="2026-01-21 07:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:17:52.015950667 +0000 UTC m=+1413.246296579" watchObservedRunningTime="2026-01-21 07:17:52.025357939 +0000 UTC m=+1413.255703841" Jan 21 07:17:52 crc 
Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.262173 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-t7q6q"]
Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.263300 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-t7q6q"
Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.284837 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-t7q6q"]
Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.362125 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-lzdts"]
Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.363260 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-lzdts"
Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.373650 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b5b0846-7bdc-4019-bc94-ea4253cc9c8a-operator-scripts\") pod \"cinder-db-create-t7q6q\" (UID: \"0b5b0846-7bdc-4019-bc94-ea4253cc9c8a\") " pod="openstack/cinder-db-create-t7q6q"
Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.373769 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8248t\" (UniqueName: \"kubernetes.io/projected/0b5b0846-7bdc-4019-bc94-ea4253cc9c8a-kube-api-access-8248t\") pod \"cinder-db-create-t7q6q\" (UID: \"0b5b0846-7bdc-4019-bc94-ea4253cc9c8a\") " pod="openstack/cinder-db-create-t7q6q"
Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.392856 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-lzdts"]
Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.401371 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-2981-account-create-update-bkdjc"]
Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.402809 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2981-account-create-update-bkdjc"
Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.408693 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.413616 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-2981-account-create-update-bkdjc"]
Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.475526 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b624b5ac-d2e6-442d-8411-656210764688-operator-scripts\") pod \"barbican-db-create-lzdts\" (UID: \"b624b5ac-d2e6-442d-8411-656210764688\") " pod="openstack/barbican-db-create-lzdts"
Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.475607 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9m2t\" (UniqueName: \"kubernetes.io/projected/b624b5ac-d2e6-442d-8411-656210764688-kube-api-access-s9m2t\") pod \"barbican-db-create-lzdts\" (UID: \"b624b5ac-d2e6-442d-8411-656210764688\") " pod="openstack/barbican-db-create-lzdts"
Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.475642 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj9jm\" (UniqueName: \"kubernetes.io/projected/f3284c32-3995-4e0e-a6ee-15a79317eaab-kube-api-access-dj9jm\") pod \"cinder-2981-account-create-update-bkdjc\" (UID: \"f3284c32-3995-4e0e-a6ee-15a79317eaab\") " pod="openstack/cinder-2981-account-create-update-bkdjc"
Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.475717 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b5b0846-7bdc-4019-bc94-ea4253cc9c8a-operator-scripts\") pod \"cinder-db-create-t7q6q\" (UID: \"0b5b0846-7bdc-4019-bc94-ea4253cc9c8a\") " pod="openstack/cinder-db-create-t7q6q"
Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.475774 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3284c32-3995-4e0e-a6ee-15a79317eaab-operator-scripts\") pod \"cinder-2981-account-create-update-bkdjc\" (UID: \"f3284c32-3995-4e0e-a6ee-15a79317eaab\") " pod="openstack/cinder-2981-account-create-update-bkdjc"
Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.475855 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8248t\" (UniqueName: \"kubernetes.io/projected/0b5b0846-7bdc-4019-bc94-ea4253cc9c8a-kube-api-access-8248t\") pod \"cinder-db-create-t7q6q\" (UID: \"0b5b0846-7bdc-4019-bc94-ea4253cc9c8a\") " pod="openstack/cinder-db-create-t7q6q"
Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.477391 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b5b0846-7bdc-4019-bc94-ea4253cc9c8a-operator-scripts\") pod \"cinder-db-create-t7q6q\" (UID: \"0b5b0846-7bdc-4019-bc94-ea4253cc9c8a\") " pod="openstack/cinder-db-create-t7q6q"
Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.484171 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-5653-account-create-update-7rtr9"]
Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.485200 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5653-account-create-update-7rtr9"
Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.488737 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.520720 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8248t\" (UniqueName: \"kubernetes.io/projected/0b5b0846-7bdc-4019-bc94-ea4253cc9c8a-kube-api-access-8248t\") pod \"cinder-db-create-t7q6q\" (UID: \"0b5b0846-7bdc-4019-bc94-ea4253cc9c8a\") " pod="openstack/cinder-db-create-t7q6q"
Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.546042 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-5653-account-create-update-7rtr9"]
Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.553412 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-d9tjm"]
Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.557323 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-d9tjm"
Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.572865 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.573223 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.573365 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.573506 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9b7vz"
Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.577465 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ba738c9-29c6-492d-acdc-0854042df9dc-config-data\") pod \"keystone-db-sync-d9tjm\" (UID: \"3ba738c9-29c6-492d-acdc-0854042df9dc\") " pod="openstack/keystone-db-sync-d9tjm"
Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.577603 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sc4s\" (UniqueName: \"kubernetes.io/projected/3ba738c9-29c6-492d-acdc-0854042df9dc-kube-api-access-5sc4s\") pod \"keystone-db-sync-d9tjm\" (UID: \"3ba738c9-29c6-492d-acdc-0854042df9dc\") " pod="openstack/keystone-db-sync-d9tjm"
Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.577652 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ba738c9-29c6-492d-acdc-0854042df9dc-combined-ca-bundle\") pod \"keystone-db-sync-d9tjm\" (UID: \"3ba738c9-29c6-492d-acdc-0854042df9dc\") " pod="openstack/keystone-db-sync-d9tjm"
Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.577769 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6kkx\" (UniqueName: \"kubernetes.io/projected/6be46ad8-4b10-4fdf-8c34-e9003a28acbd-kube-api-access-x6kkx\") pod \"barbican-5653-account-create-update-7rtr9\" (UID: \"6be46ad8-4b10-4fdf-8c34-e9003a28acbd\") " pod="openstack/barbican-5653-account-create-update-7rtr9"
Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.577797 4893 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b624b5ac-d2e6-442d-8411-656210764688-operator-scripts\") pod \"barbican-db-create-lzdts\" (UID: \"b624b5ac-d2e6-442d-8411-656210764688\") " pod="openstack/barbican-db-create-lzdts" Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.577834 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9m2t\" (UniqueName: \"kubernetes.io/projected/b624b5ac-d2e6-442d-8411-656210764688-kube-api-access-s9m2t\") pod \"barbican-db-create-lzdts\" (UID: \"b624b5ac-d2e6-442d-8411-656210764688\") " pod="openstack/barbican-db-create-lzdts" Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.577857 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dj9jm\" (UniqueName: \"kubernetes.io/projected/f3284c32-3995-4e0e-a6ee-15a79317eaab-kube-api-access-dj9jm\") pod \"cinder-2981-account-create-update-bkdjc\" (UID: \"f3284c32-3995-4e0e-a6ee-15a79317eaab\") " pod="openstack/cinder-2981-account-create-update-bkdjc" Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.577926 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3284c32-3995-4e0e-a6ee-15a79317eaab-operator-scripts\") pod \"cinder-2981-account-create-update-bkdjc\" (UID: \"f3284c32-3995-4e0e-a6ee-15a79317eaab\") " pod="openstack/cinder-2981-account-create-update-bkdjc" Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.577974 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6be46ad8-4b10-4fdf-8c34-e9003a28acbd-operator-scripts\") pod \"barbican-5653-account-create-update-7rtr9\" (UID: \"6be46ad8-4b10-4fdf-8c34-e9003a28acbd\") " pod="openstack/barbican-5653-account-create-update-7rtr9" Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.578836 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3284c32-3995-4e0e-a6ee-15a79317eaab-operator-scripts\") pod \"cinder-2981-account-create-update-bkdjc\" (UID: \"f3284c32-3995-4e0e-a6ee-15a79317eaab\") " pod="openstack/cinder-2981-account-create-update-bkdjc" Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.578891 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b624b5ac-d2e6-442d-8411-656210764688-operator-scripts\") pod \"barbican-db-create-lzdts\" (UID: \"b624b5ac-d2e6-442d-8411-656210764688\") " pod="openstack/barbican-db-create-lzdts" Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.583477 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-d9tjm"] Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.602459 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dj9jm\" (UniqueName: \"kubernetes.io/projected/f3284c32-3995-4e0e-a6ee-15a79317eaab-kube-api-access-dj9jm\") pod \"cinder-2981-account-create-update-bkdjc\" (UID: \"f3284c32-3995-4e0e-a6ee-15a79317eaab\") " pod="openstack/cinder-2981-account-create-update-bkdjc" Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.603325 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9m2t\" (UniqueName: 
\"kubernetes.io/projected/b624b5ac-d2e6-442d-8411-656210764688-kube-api-access-s9m2t\") pod \"barbican-db-create-lzdts\" (UID: \"b624b5ac-d2e6-442d-8411-656210764688\") " pod="openstack/barbican-db-create-lzdts" Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.716130 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-lzdts" Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.720785 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-t7q6q" Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.730891 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2981-account-create-update-bkdjc" Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.731068 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sc4s\" (UniqueName: \"kubernetes.io/projected/3ba738c9-29c6-492d-acdc-0854042df9dc-kube-api-access-5sc4s\") pod \"keystone-db-sync-d9tjm\" (UID: \"3ba738c9-29c6-492d-acdc-0854042df9dc\") " pod="openstack/keystone-db-sync-d9tjm" Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.734978 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ba738c9-29c6-492d-acdc-0854042df9dc-combined-ca-bundle\") pod \"keystone-db-sync-d9tjm\" (UID: \"3ba738c9-29c6-492d-acdc-0854042df9dc\") " pod="openstack/keystone-db-sync-d9tjm" Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.735110 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6kkx\" (UniqueName: \"kubernetes.io/projected/6be46ad8-4b10-4fdf-8c34-e9003a28acbd-kube-api-access-x6kkx\") pod \"barbican-5653-account-create-update-7rtr9\" (UID: \"6be46ad8-4b10-4fdf-8c34-e9003a28acbd\") " pod="openstack/barbican-5653-account-create-update-7rtr9" Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.735465 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6be46ad8-4b10-4fdf-8c34-e9003a28acbd-operator-scripts\") pod \"barbican-5653-account-create-update-7rtr9\" (UID: \"6be46ad8-4b10-4fdf-8c34-e9003a28acbd\") " pod="openstack/barbican-5653-account-create-update-7rtr9" Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.735595 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ba738c9-29c6-492d-acdc-0854042df9dc-config-data\") pod \"keystone-db-sync-d9tjm\" (UID: \"3ba738c9-29c6-492d-acdc-0854042df9dc\") " pod="openstack/keystone-db-sync-d9tjm" Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.744624 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6be46ad8-4b10-4fdf-8c34-e9003a28acbd-operator-scripts\") pod \"barbican-5653-account-create-update-7rtr9\" (UID: \"6be46ad8-4b10-4fdf-8c34-e9003a28acbd\") " pod="openstack/barbican-5653-account-create-update-7rtr9" Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.747470 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ba738c9-29c6-492d-acdc-0854042df9dc-combined-ca-bundle\") pod \"keystone-db-sync-d9tjm\" (UID: \"3ba738c9-29c6-492d-acdc-0854042df9dc\") " pod="openstack/keystone-db-sync-d9tjm" Jan 21 07:17:52 
crc kubenswrapper[4893]: I0121 07:17:52.751104 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ba738c9-29c6-492d-acdc-0854042df9dc-config-data\") pod \"keystone-db-sync-d9tjm\" (UID: \"3ba738c9-29c6-492d-acdc-0854042df9dc\") " pod="openstack/keystone-db-sync-d9tjm" Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.758446 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6kkx\" (UniqueName: \"kubernetes.io/projected/6be46ad8-4b10-4fdf-8c34-e9003a28acbd-kube-api-access-x6kkx\") pod \"barbican-5653-account-create-update-7rtr9\" (UID: \"6be46ad8-4b10-4fdf-8c34-e9003a28acbd\") " pod="openstack/barbican-5653-account-create-update-7rtr9" Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.767630 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sc4s\" (UniqueName: \"kubernetes.io/projected/3ba738c9-29c6-492d-acdc-0854042df9dc-kube-api-access-5sc4s\") pod \"keystone-db-sync-d9tjm\" (UID: \"3ba738c9-29c6-492d-acdc-0854042df9dc\") " pod="openstack/keystone-db-sync-d9tjm" Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.778104 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-lqzk6"] Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.779312 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-lqzk6" Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.786371 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-lqzk6"] Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.800551 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5653-account-create-update-7rtr9" Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.836893 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/552962c7-46c0-4f3f-826e-3c99b06f6c61-operator-scripts\") pod \"neutron-db-create-lqzk6\" (UID: \"552962c7-46c0-4f3f-826e-3c99b06f6c61\") " pod="openstack/neutron-db-create-lqzk6" Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.837006 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khlwb\" (UniqueName: \"kubernetes.io/projected/552962c7-46c0-4f3f-826e-3c99b06f6c61-kube-api-access-khlwb\") pod \"neutron-db-create-lqzk6\" (UID: \"552962c7-46c0-4f3f-826e-3c99b06f6c61\") " pod="openstack/neutron-db-create-lqzk6" Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.865836 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-22df-account-create-update-qdjbd"] Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.866909 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-22df-account-create-update-qdjbd" Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.883018 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.886317 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-22df-account-create-update-qdjbd"] Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.894228 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-d9tjm" Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.940325 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/828f06a8-358b-486a-9339-520cba2baf52-operator-scripts\") pod \"neutron-22df-account-create-update-qdjbd\" (UID: \"828f06a8-358b-486a-9339-520cba2baf52\") " pod="openstack/neutron-22df-account-create-update-qdjbd" Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.940396 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khlwb\" (UniqueName: \"kubernetes.io/projected/552962c7-46c0-4f3f-826e-3c99b06f6c61-kube-api-access-khlwb\") pod \"neutron-db-create-lqzk6\" (UID: \"552962c7-46c0-4f3f-826e-3c99b06f6c61\") " pod="openstack/neutron-db-create-lqzk6" Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.940519 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/552962c7-46c0-4f3f-826e-3c99b06f6c61-operator-scripts\") pod \"neutron-db-create-lqzk6\" (UID: \"552962c7-46c0-4f3f-826e-3c99b06f6c61\") " pod="openstack/neutron-db-create-lqzk6" Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.940602 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxghl\" (UniqueName: \"kubernetes.io/projected/828f06a8-358b-486a-9339-520cba2baf52-kube-api-access-kxghl\") pod \"neutron-22df-account-create-update-qdjbd\" (UID: \"828f06a8-358b-486a-9339-520cba2baf52\") " pod="openstack/neutron-22df-account-create-update-qdjbd" Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.951687 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/552962c7-46c0-4f3f-826e-3c99b06f6c61-operator-scripts\") pod \"neutron-db-create-lqzk6\" (UID: \"552962c7-46c0-4f3f-826e-3c99b06f6c61\") " pod="openstack/neutron-db-create-lqzk6" Jan 21 07:17:52 crc kubenswrapper[4893]: I0121 07:17:52.968727 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khlwb\" (UniqueName: \"kubernetes.io/projected/552962c7-46c0-4f3f-826e-3c99b06f6c61-kube-api-access-khlwb\") pod \"neutron-db-create-lqzk6\" (UID: \"552962c7-46c0-4f3f-826e-3c99b06f6c61\") " pod="openstack/neutron-db-create-lqzk6" Jan 21 07:17:53 crc kubenswrapper[4893]: I0121 07:17:53.043356 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/828f06a8-358b-486a-9339-520cba2baf52-operator-scripts\") pod \"neutron-22df-account-create-update-qdjbd\" (UID: \"828f06a8-358b-486a-9339-520cba2baf52\") " pod="openstack/neutron-22df-account-create-update-qdjbd" Jan 21 07:17:53 crc kubenswrapper[4893]: I0121 07:17:53.043648 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxghl\" (UniqueName: \"kubernetes.io/projected/828f06a8-358b-486a-9339-520cba2baf52-kube-api-access-kxghl\") pod \"neutron-22df-account-create-update-qdjbd\" (UID: \"828f06a8-358b-486a-9339-520cba2baf52\") " pod="openstack/neutron-22df-account-create-update-qdjbd" Jan 21 07:17:53 crc kubenswrapper[4893]: I0121 07:17:53.044882 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/828f06a8-358b-486a-9339-520cba2baf52-operator-scripts\") pod \"neutron-22df-account-create-update-qdjbd\" (UID: \"828f06a8-358b-486a-9339-520cba2baf52\") " pod="openstack/neutron-22df-account-create-update-qdjbd" Jan 21 07:17:53 crc kubenswrapper[4893]: I0121 07:17:53.063781 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxghl\" (UniqueName: \"kubernetes.io/projected/828f06a8-358b-486a-9339-520cba2baf52-kube-api-access-kxghl\") pod \"neutron-22df-account-create-update-qdjbd\" (UID: \"828f06a8-358b-486a-9339-520cba2baf52\") " pod="openstack/neutron-22df-account-create-update-qdjbd" Jan 21 07:17:53 crc kubenswrapper[4893]: I0121 07:17:53.219313 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-lqzk6" Jan 21 07:17:53 crc kubenswrapper[4893]: I0121 07:17:53.240433 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-22df-account-create-update-qdjbd" Jan 21 07:17:53 crc kubenswrapper[4893]: I0121 07:17:53.331998 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-t7q6q"] Jan 21 07:17:53 crc kubenswrapper[4893]: I0121 07:17:53.359568 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-lzdts"] Jan 21 07:17:53 crc kubenswrapper[4893]: W0121 07:17:53.399981 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb624b5ac_d2e6_442d_8411_656210764688.slice/crio-1e1985b2978d1c5f444fa732bb5eaff5b6e5649e71c86438e2666ee0d094d9fc WatchSource:0}: Error finding container 1e1985b2978d1c5f444fa732bb5eaff5b6e5649e71c86438e2666ee0d094d9fc: Status 404 returned error can't find the container with id 1e1985b2978d1c5f444fa732bb5eaff5b6e5649e71c86438e2666ee0d094d9fc Jan 21 07:17:54 crc kubenswrapper[4893]: I0121 07:17:53.573401 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-2981-account-create-update-bkdjc"] Jan 21 07:17:54 crc kubenswrapper[4893]: W0121 07:17:53.577936 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3284c32_3995_4e0e_a6ee_15a79317eaab.slice/crio-9ec6cfce9ba44d35340f9ab55dd8c0876a8f80d64eba1b456927d3f2cddab11c WatchSource:0}: Error finding container 9ec6cfce9ba44d35340f9ab55dd8c0876a8f80d64eba1b456927d3f2cddab11c: Status 404 returned error can't find the container with id 9ec6cfce9ba44d35340f9ab55dd8c0876a8f80d64eba1b456927d3f2cddab11c Jan 21 07:17:54 crc kubenswrapper[4893]: I0121 07:17:53.652033 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-5653-account-create-update-7rtr9"] Jan 21 07:17:54 crc kubenswrapper[4893]: I0121 07:17:53.700481 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-d9tjm"] Jan 21 07:17:54 crc kubenswrapper[4893]: W0121 07:17:53.717110 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ba738c9_29c6_492d_acdc_0854042df9dc.slice/crio-4ccd81f1571da436a2915861d1e4bea7f6691a4a24f2f532f06eb6045e4d7b8c WatchSource:0}: Error finding container 4ccd81f1571da436a2915861d1e4bea7f6691a4a24f2f532f06eb6045e4d7b8c: Status 404 returned error can't find the container with id 4ccd81f1571da436a2915861d1e4bea7f6691a4a24f2f532f06eb6045e4d7b8c Jan 21 07:17:54 crc kubenswrapper[4893]: I0121 07:17:53.783647 4893 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-lqzk6"] Jan 21 07:17:54 crc kubenswrapper[4893]: I0121 07:17:54.014708 4893 generic.go:334] "Generic (PLEG): container finished" podID="0b5b0846-7bdc-4019-bc94-ea4253cc9c8a" containerID="661a1510b0e926ed9d58cfc9a102f808fac860f1c22613744ef458127e547ca8" exitCode=0 Jan 21 07:17:54 crc kubenswrapper[4893]: I0121 07:17:54.014778 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-t7q6q" event={"ID":"0b5b0846-7bdc-4019-bc94-ea4253cc9c8a","Type":"ContainerDied","Data":"661a1510b0e926ed9d58cfc9a102f808fac860f1c22613744ef458127e547ca8"} Jan 21 07:17:54 crc kubenswrapper[4893]: I0121 07:17:54.014807 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-t7q6q" event={"ID":"0b5b0846-7bdc-4019-bc94-ea4253cc9c8a","Type":"ContainerStarted","Data":"5f4cf89c89b0da285654db8505294ce348b0a61d9a4539997a41751538e65634"} Jan 21 07:17:54 crc kubenswrapper[4893]: I0121 07:17:54.021735 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2981-account-create-update-bkdjc" event={"ID":"f3284c32-3995-4e0e-a6ee-15a79317eaab","Type":"ContainerStarted","Data":"5cf697339569b6905b4af0edeaed0b9a8480bee6dfdf516cd425bdcc946ee1f5"} Jan 21 07:17:54 crc kubenswrapper[4893]: I0121 07:17:54.021778 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2981-account-create-update-bkdjc" event={"ID":"f3284c32-3995-4e0e-a6ee-15a79317eaab","Type":"ContainerStarted","Data":"9ec6cfce9ba44d35340f9ab55dd8c0876a8f80d64eba1b456927d3f2cddab11c"} Jan 21 07:17:54 crc kubenswrapper[4893]: I0121 07:17:54.032796 4893 generic.go:334] "Generic (PLEG): container finished" podID="b624b5ac-d2e6-442d-8411-656210764688" containerID="6e04b4fbee9b3e703fb5de3ea31e81e68cd2ff62d9d541b6c2ee1f9927f27fdf" exitCode=0 Jan 21 07:17:54 crc kubenswrapper[4893]: I0121 07:17:54.032905 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-lzdts" event={"ID":"b624b5ac-d2e6-442d-8411-656210764688","Type":"ContainerDied","Data":"6e04b4fbee9b3e703fb5de3ea31e81e68cd2ff62d9d541b6c2ee1f9927f27fdf"} Jan 21 07:17:54 crc kubenswrapper[4893]: I0121 07:17:54.032935 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-lzdts" event={"ID":"b624b5ac-d2e6-442d-8411-656210764688","Type":"ContainerStarted","Data":"1e1985b2978d1c5f444fa732bb5eaff5b6e5649e71c86438e2666ee0d094d9fc"} Jan 21 07:17:54 crc kubenswrapper[4893]: I0121 07:17:54.038101 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5653-account-create-update-7rtr9" event={"ID":"6be46ad8-4b10-4fdf-8c34-e9003a28acbd","Type":"ContainerStarted","Data":"98f97492814f86a56e1b1582cbf2a87660019e0cb9f8aa9789265d7d90cf2c62"} Jan 21 07:17:54 crc kubenswrapper[4893]: I0121 07:17:54.038142 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5653-account-create-update-7rtr9" event={"ID":"6be46ad8-4b10-4fdf-8c34-e9003a28acbd","Type":"ContainerStarted","Data":"c0c7ccbf98064e30266cdbe88566f42d42d6222fa5376cff9ab959f5fff23b0a"} Jan 21 07:17:54 crc kubenswrapper[4893]: I0121 07:17:54.041585 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-d9tjm" event={"ID":"3ba738c9-29c6-492d-acdc-0854042df9dc","Type":"ContainerStarted","Data":"4ccd81f1571da436a2915861d1e4bea7f6691a4a24f2f532f06eb6045e4d7b8c"} Jan 21 07:17:54 crc kubenswrapper[4893]: I0121 07:17:54.046627 4893 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/neutron-db-create-lqzk6" event={"ID":"552962c7-46c0-4f3f-826e-3c99b06f6c61","Type":"ContainerStarted","Data":"97ddb3d225615594c9a605add1044c2cb99f516ef5f324d2b45eb06de4cb1505"} Jan 21 07:17:54 crc kubenswrapper[4893]: I0121 07:17:54.046706 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lqzk6" event={"ID":"552962c7-46c0-4f3f-826e-3c99b06f6c61","Type":"ContainerStarted","Data":"f77eebdb48421742f098d333ae240cd18c66a9017d4744e1d160524a2c775fd1"} Jan 21 07:17:54 crc kubenswrapper[4893]: I0121 07:17:54.054329 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-2981-account-create-update-bkdjc" podStartSLOduration=2.054305525 podStartE2EDuration="2.054305525s" podCreationTimestamp="2026-01-21 07:17:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:17:54.048848767 +0000 UTC m=+1415.279194659" watchObservedRunningTime="2026-01-21 07:17:54.054305525 +0000 UTC m=+1415.284651427" Jan 21 07:17:54 crc kubenswrapper[4893]: I0121 07:17:54.079873 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-lqzk6" podStartSLOduration=2.079849025 podStartE2EDuration="2.079849025s" podCreationTimestamp="2026-01-21 07:17:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:17:54.068087914 +0000 UTC m=+1415.298433806" watchObservedRunningTime="2026-01-21 07:17:54.079849025 +0000 UTC m=+1415.310194927" Jan 21 07:17:54 crc kubenswrapper[4893]: I0121 07:17:54.544832 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-5653-account-create-update-7rtr9" podStartSLOduration=2.544803022 podStartE2EDuration="2.544803022s" podCreationTimestamp="2026-01-21 07:17:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:17:54.121333827 +0000 UTC m=+1415.351679729" watchObservedRunningTime="2026-01-21 07:17:54.544803022 +0000 UTC m=+1415.775148934" Jan 21 07:17:54 crc kubenswrapper[4893]: I0121 07:17:54.560015 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-22df-account-create-update-qdjbd"] Jan 21 07:17:54 crc kubenswrapper[4893]: W0121 07:17:54.563995 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod828f06a8_358b_486a_9339_520cba2baf52.slice/crio-3d6426955b98f5e3473f9b307f080a4e6abfaab269f628ef907b86fde8c89360 WatchSource:0}: Error finding container 3d6426955b98f5e3473f9b307f080a4e6abfaab269f628ef907b86fde8c89360: Status 404 returned error can't find the container with id 3d6426955b98f5e3473f9b307f080a4e6abfaab269f628ef907b86fde8c89360 Jan 21 07:17:55 crc kubenswrapper[4893]: I0121 07:17:55.059600 4893 generic.go:334] "Generic (PLEG): container finished" podID="552962c7-46c0-4f3f-826e-3c99b06f6c61" containerID="97ddb3d225615594c9a605add1044c2cb99f516ef5f324d2b45eb06de4cb1505" exitCode=0 Jan 21 07:17:55 crc kubenswrapper[4893]: I0121 07:17:55.059729 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lqzk6" event={"ID":"552962c7-46c0-4f3f-826e-3c99b06f6c61","Type":"ContainerDied","Data":"97ddb3d225615594c9a605add1044c2cb99f516ef5f324d2b45eb06de4cb1505"} Jan 21 07:17:55 crc 
kubenswrapper[4893]: I0121 07:17:55.063747 4893 generic.go:334] "Generic (PLEG): container finished" podID="f3284c32-3995-4e0e-a6ee-15a79317eaab" containerID="5cf697339569b6905b4af0edeaed0b9a8480bee6dfdf516cd425bdcc946ee1f5" exitCode=0 Jan 21 07:17:55 crc kubenswrapper[4893]: I0121 07:17:55.063948 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2981-account-create-update-bkdjc" event={"ID":"f3284c32-3995-4e0e-a6ee-15a79317eaab","Type":"ContainerDied","Data":"5cf697339569b6905b4af0edeaed0b9a8480bee6dfdf516cd425bdcc946ee1f5"} Jan 21 07:17:55 crc kubenswrapper[4893]: I0121 07:17:55.065744 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bwx97" event={"ID":"fff7b3eb-e8c3-4d58-932b-3738b1e8dffa","Type":"ContainerStarted","Data":"4753b38a6732fb0ac9424a2d22e75d9b2eaa0d229b73057f9c895b0a284de4ba"} Jan 21 07:17:55 crc kubenswrapper[4893]: I0121 07:17:55.068501 4893 generic.go:334] "Generic (PLEG): container finished" podID="828f06a8-358b-486a-9339-520cba2baf52" containerID="312c17d7007a17dc75ca6f883b3966c3666f60753c3b5e166435d88ffac4eb4d" exitCode=0 Jan 21 07:17:55 crc kubenswrapper[4893]: I0121 07:17:55.068569 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-22df-account-create-update-qdjbd" event={"ID":"828f06a8-358b-486a-9339-520cba2baf52","Type":"ContainerDied","Data":"312c17d7007a17dc75ca6f883b3966c3666f60753c3b5e166435d88ffac4eb4d"} Jan 21 07:17:55 crc kubenswrapper[4893]: I0121 07:17:55.068598 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-22df-account-create-update-qdjbd" event={"ID":"828f06a8-358b-486a-9339-520cba2baf52","Type":"ContainerStarted","Data":"3d6426955b98f5e3473f9b307f080a4e6abfaab269f628ef907b86fde8c89360"} Jan 21 07:17:55 crc kubenswrapper[4893]: I0121 07:17:55.071641 4893 generic.go:334] "Generic (PLEG): container finished" podID="6be46ad8-4b10-4fdf-8c34-e9003a28acbd" containerID="98f97492814f86a56e1b1582cbf2a87660019e0cb9f8aa9789265d7d90cf2c62" exitCode=0 Jan 21 07:17:55 crc kubenswrapper[4893]: I0121 07:17:55.072094 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5653-account-create-update-7rtr9" event={"ID":"6be46ad8-4b10-4fdf-8c34-e9003a28acbd","Type":"ContainerDied","Data":"98f97492814f86a56e1b1582cbf2a87660019e0cb9f8aa9789265d7d90cf2c62"} Jan 21 07:17:55 crc kubenswrapper[4893]: I0121 07:17:55.139329 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-bwx97" podStartSLOduration=1.9063659880000001 podStartE2EDuration="31.1392926s" podCreationTimestamp="2026-01-21 07:17:24 +0000 UTC" firstStartedPulling="2026-01-21 07:17:24.911427281 +0000 UTC m=+1386.141773183" lastFinishedPulling="2026-01-21 07:17:54.144353893 +0000 UTC m=+1415.374699795" observedRunningTime="2026-01-21 07:17:55.112604577 +0000 UTC m=+1416.342950489" watchObservedRunningTime="2026-01-21 07:17:55.1392926 +0000 UTC m=+1416.369638502" Jan 21 07:17:55 crc kubenswrapper[4893]: I0121 07:17:55.528179 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-t7q6q" Jan 21 07:17:55 crc kubenswrapper[4893]: I0121 07:17:55.535101 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-lzdts" Jan 21 07:17:55 crc kubenswrapper[4893]: I0121 07:17:55.720782 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8248t\" (UniqueName: \"kubernetes.io/projected/0b5b0846-7bdc-4019-bc94-ea4253cc9c8a-kube-api-access-8248t\") pod \"0b5b0846-7bdc-4019-bc94-ea4253cc9c8a\" (UID: \"0b5b0846-7bdc-4019-bc94-ea4253cc9c8a\") " Jan 21 07:17:55 crc kubenswrapper[4893]: I0121 07:17:55.720851 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b5b0846-7bdc-4019-bc94-ea4253cc9c8a-operator-scripts\") pod \"0b5b0846-7bdc-4019-bc94-ea4253cc9c8a\" (UID: \"0b5b0846-7bdc-4019-bc94-ea4253cc9c8a\") " Jan 21 07:17:55 crc kubenswrapper[4893]: I0121 07:17:55.720885 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9m2t\" (UniqueName: \"kubernetes.io/projected/b624b5ac-d2e6-442d-8411-656210764688-kube-api-access-s9m2t\") pod \"b624b5ac-d2e6-442d-8411-656210764688\" (UID: \"b624b5ac-d2e6-442d-8411-656210764688\") " Jan 21 07:17:55 crc kubenswrapper[4893]: I0121 07:17:55.720902 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b624b5ac-d2e6-442d-8411-656210764688-operator-scripts\") pod \"b624b5ac-d2e6-442d-8411-656210764688\" (UID: \"b624b5ac-d2e6-442d-8411-656210764688\") " Jan 21 07:17:55 crc kubenswrapper[4893]: I0121 07:17:55.721791 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b624b5ac-d2e6-442d-8411-656210764688-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b624b5ac-d2e6-442d-8411-656210764688" (UID: "b624b5ac-d2e6-442d-8411-656210764688"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:17:55 crc kubenswrapper[4893]: I0121 07:17:55.721852 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b5b0846-7bdc-4019-bc94-ea4253cc9c8a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0b5b0846-7bdc-4019-bc94-ea4253cc9c8a" (UID: "0b5b0846-7bdc-4019-bc94-ea4253cc9c8a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:17:55 crc kubenswrapper[4893]: I0121 07:17:55.723289 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b5b0846-7bdc-4019-bc94-ea4253cc9c8a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:17:55 crc kubenswrapper[4893]: I0121 07:17:55.723599 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b624b5ac-d2e6-442d-8411-656210764688-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:17:55 crc kubenswrapper[4893]: I0121 07:17:55.743521 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b624b5ac-d2e6-442d-8411-656210764688-kube-api-access-s9m2t" (OuterVolumeSpecName: "kube-api-access-s9m2t") pod "b624b5ac-d2e6-442d-8411-656210764688" (UID: "b624b5ac-d2e6-442d-8411-656210764688"). InnerVolumeSpecName "kube-api-access-s9m2t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:17:55 crc kubenswrapper[4893]: I0121 07:17:55.747893 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b5b0846-7bdc-4019-bc94-ea4253cc9c8a-kube-api-access-8248t" (OuterVolumeSpecName: "kube-api-access-8248t") pod "0b5b0846-7bdc-4019-bc94-ea4253cc9c8a" (UID: "0b5b0846-7bdc-4019-bc94-ea4253cc9c8a"). InnerVolumeSpecName "kube-api-access-8248t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:17:55 crc kubenswrapper[4893]: I0121 07:17:55.824631 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8248t\" (UniqueName: \"kubernetes.io/projected/0b5b0846-7bdc-4019-bc94-ea4253cc9c8a-kube-api-access-8248t\") on node \"crc\" DevicePath \"\"" Jan 21 07:17:55 crc kubenswrapper[4893]: I0121 07:17:55.824695 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9m2t\" (UniqueName: \"kubernetes.io/projected/b624b5ac-d2e6-442d-8411-656210764688-kube-api-access-s9m2t\") on node \"crc\" DevicePath \"\"" Jan 21 07:17:56 crc kubenswrapper[4893]: I0121 07:17:56.082790 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-lzdts" Jan 21 07:17:56 crc kubenswrapper[4893]: I0121 07:17:56.083076 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-lzdts" event={"ID":"b624b5ac-d2e6-442d-8411-656210764688","Type":"ContainerDied","Data":"1e1985b2978d1c5f444fa732bb5eaff5b6e5649e71c86438e2666ee0d094d9fc"} Jan 21 07:17:56 crc kubenswrapper[4893]: I0121 07:17:56.083148 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e1985b2978d1c5f444fa732bb5eaff5b6e5649e71c86438e2666ee0d094d9fc" Jan 21 07:17:56 crc kubenswrapper[4893]: I0121 07:17:56.093654 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-t7q6q" Jan 21 07:17:56 crc kubenswrapper[4893]: I0121 07:17:56.093740 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-t7q6q" event={"ID":"0b5b0846-7bdc-4019-bc94-ea4253cc9c8a","Type":"ContainerDied","Data":"5f4cf89c89b0da285654db8505294ce348b0a61d9a4539997a41751538e65634"} Jan 21 07:17:56 crc kubenswrapper[4893]: I0121 07:17:56.093786 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f4cf89c89b0da285654db8505294ce348b0a61d9a4539997a41751538e65634" Jan 21 07:17:58 crc kubenswrapper[4893]: I0121 07:17:58.656890 4893 patch_prober.go:28] interesting pod/machine-config-daemon-hg78p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 07:17:58 crc kubenswrapper[4893]: I0121 07:17:58.657422 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 07:17:59 crc kubenswrapper[4893]: I0121 07:17:59.052190 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-2981-account-create-update-bkdjc" Jan 21 07:17:59 crc kubenswrapper[4893]: I0121 07:17:59.071324 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-lqzk6" Jan 21 07:17:59 crc kubenswrapper[4893]: I0121 07:17:59.101995 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-22df-account-create-update-qdjbd" Jan 21 07:17:59 crc kubenswrapper[4893]: I0121 07:17:59.219938 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/828f06a8-358b-486a-9339-520cba2baf52-operator-scripts\") pod \"828f06a8-358b-486a-9339-520cba2baf52\" (UID: \"828f06a8-358b-486a-9339-520cba2baf52\") " Jan 21 07:17:59 crc kubenswrapper[4893]: I0121 07:17:59.220072 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khlwb\" (UniqueName: \"kubernetes.io/projected/552962c7-46c0-4f3f-826e-3c99b06f6c61-kube-api-access-khlwb\") pod \"552962c7-46c0-4f3f-826e-3c99b06f6c61\" (UID: \"552962c7-46c0-4f3f-826e-3c99b06f6c61\") " Jan 21 07:17:59 crc kubenswrapper[4893]: I0121 07:17:59.220105 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dj9jm\" (UniqueName: \"kubernetes.io/projected/f3284c32-3995-4e0e-a6ee-15a79317eaab-kube-api-access-dj9jm\") pod \"f3284c32-3995-4e0e-a6ee-15a79317eaab\" (UID: \"f3284c32-3995-4e0e-a6ee-15a79317eaab\") " Jan 21 07:17:59 crc kubenswrapper[4893]: I0121 07:17:59.220162 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxghl\" (UniqueName: \"kubernetes.io/projected/828f06a8-358b-486a-9339-520cba2baf52-kube-api-access-kxghl\") pod \"828f06a8-358b-486a-9339-520cba2baf52\" (UID: \"828f06a8-358b-486a-9339-520cba2baf52\") " Jan 21 07:17:59 crc kubenswrapper[4893]: I0121 07:17:59.220219 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3284c32-3995-4e0e-a6ee-15a79317eaab-operator-scripts\") pod \"f3284c32-3995-4e0e-a6ee-15a79317eaab\" (UID: \"f3284c32-3995-4e0e-a6ee-15a79317eaab\") " Jan 21 07:17:59 crc kubenswrapper[4893]: I0121 07:17:59.220279 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/552962c7-46c0-4f3f-826e-3c99b06f6c61-operator-scripts\") pod \"552962c7-46c0-4f3f-826e-3c99b06f6c61\" (UID: \"552962c7-46c0-4f3f-826e-3c99b06f6c61\") " Jan 21 07:17:59 crc kubenswrapper[4893]: I0121 07:17:59.224226 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/828f06a8-358b-486a-9339-520cba2baf52-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "828f06a8-358b-486a-9339-520cba2baf52" (UID: "828f06a8-358b-486a-9339-520cba2baf52"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:17:59 crc kubenswrapper[4893]: I0121 07:17:59.225182 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-5653-account-create-update-7rtr9" Jan 21 07:17:59 crc kubenswrapper[4893]: I0121 07:17:59.225273 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3284c32-3995-4e0e-a6ee-15a79317eaab-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f3284c32-3995-4e0e-a6ee-15a79317eaab" (UID: "f3284c32-3995-4e0e-a6ee-15a79317eaab"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:17:59 crc kubenswrapper[4893]: I0121 07:17:59.225537 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/552962c7-46c0-4f3f-826e-3c99b06f6c61-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "552962c7-46c0-4f3f-826e-3c99b06f6c61" (UID: "552962c7-46c0-4f3f-826e-3c99b06f6c61"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:17:59 crc kubenswrapper[4893]: I0121 07:17:59.225580 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/828f06a8-358b-486a-9339-520cba2baf52-kube-api-access-kxghl" (OuterVolumeSpecName: "kube-api-access-kxghl") pod "828f06a8-358b-486a-9339-520cba2baf52" (UID: "828f06a8-358b-486a-9339-520cba2baf52"). InnerVolumeSpecName "kube-api-access-kxghl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:17:59 crc kubenswrapper[4893]: I0121 07:17:59.226156 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3284c32-3995-4e0e-a6ee-15a79317eaab-kube-api-access-dj9jm" (OuterVolumeSpecName: "kube-api-access-dj9jm") pod "f3284c32-3995-4e0e-a6ee-15a79317eaab" (UID: "f3284c32-3995-4e0e-a6ee-15a79317eaab"). InnerVolumeSpecName "kube-api-access-dj9jm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:17:59 crc kubenswrapper[4893]: I0121 07:17:59.243635 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/552962c7-46c0-4f3f-826e-3c99b06f6c61-kube-api-access-khlwb" (OuterVolumeSpecName: "kube-api-access-khlwb") pod "552962c7-46c0-4f3f-826e-3c99b06f6c61" (UID: "552962c7-46c0-4f3f-826e-3c99b06f6c61"). InnerVolumeSpecName "kube-api-access-khlwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:17:59 crc kubenswrapper[4893]: I0121 07:17:59.300618 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-22df-account-create-update-qdjbd" event={"ID":"828f06a8-358b-486a-9339-520cba2baf52","Type":"ContainerDied","Data":"3d6426955b98f5e3473f9b307f080a4e6abfaab269f628ef907b86fde8c89360"} Jan 21 07:17:59 crc kubenswrapper[4893]: I0121 07:17:59.301023 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d6426955b98f5e3473f9b307f080a4e6abfaab269f628ef907b86fde8c89360" Jan 21 07:17:59 crc kubenswrapper[4893]: I0121 07:17:59.301156 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-22df-account-create-update-qdjbd" Jan 21 07:17:59 crc kubenswrapper[4893]: I0121 07:17:59.302767 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5653-account-create-update-7rtr9" event={"ID":"6be46ad8-4b10-4fdf-8c34-e9003a28acbd","Type":"ContainerDied","Data":"c0c7ccbf98064e30266cdbe88566f42d42d6222fa5376cff9ab959f5fff23b0a"} Jan 21 07:17:59 crc kubenswrapper[4893]: I0121 07:17:59.302811 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0c7ccbf98064e30266cdbe88566f42d42d6222fa5376cff9ab959f5fff23b0a" Jan 21 07:17:59 crc kubenswrapper[4893]: I0121 07:17:59.302813 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5653-account-create-update-7rtr9" Jan 21 07:17:59 crc kubenswrapper[4893]: I0121 07:17:59.304601 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-d9tjm" event={"ID":"3ba738c9-29c6-492d-acdc-0854042df9dc","Type":"ContainerStarted","Data":"3e583830442cadc155ced0fc3acf18ce7212107354521b1fadfeae0a106231bc"} Jan 21 07:17:59 crc kubenswrapper[4893]: I0121 07:17:59.306145 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lqzk6" event={"ID":"552962c7-46c0-4f3f-826e-3c99b06f6c61","Type":"ContainerDied","Data":"f77eebdb48421742f098d333ae240cd18c66a9017d4744e1d160524a2c775fd1"} Jan 21 07:17:59 crc kubenswrapper[4893]: I0121 07:17:59.306169 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-lqzk6" Jan 21 07:17:59 crc kubenswrapper[4893]: I0121 07:17:59.306192 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f77eebdb48421742f098d333ae240cd18c66a9017d4744e1d160524a2c775fd1" Jan 21 07:17:59 crc kubenswrapper[4893]: I0121 07:17:59.308976 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2981-account-create-update-bkdjc" event={"ID":"f3284c32-3995-4e0e-a6ee-15a79317eaab","Type":"ContainerDied","Data":"9ec6cfce9ba44d35340f9ab55dd8c0876a8f80d64eba1b456927d3f2cddab11c"} Jan 21 07:17:59 crc kubenswrapper[4893]: I0121 07:17:59.309008 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ec6cfce9ba44d35340f9ab55dd8c0876a8f80d64eba1b456927d3f2cddab11c" Jan 21 07:17:59 crc kubenswrapper[4893]: I0121 07:17:59.309038 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-2981-account-create-update-bkdjc" Jan 21 07:17:59 crc kubenswrapper[4893]: I0121 07:17:59.324250 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-d9tjm" podStartSLOduration=2.16148673 podStartE2EDuration="7.324210791s" podCreationTimestamp="2026-01-21 07:17:52 +0000 UTC" firstStartedPulling="2026-01-21 07:17:53.720328212 +0000 UTC m=+1414.950674114" lastFinishedPulling="2026-01-21 07:17:58.883052243 +0000 UTC m=+1420.113398175" observedRunningTime="2026-01-21 07:17:59.323397567 +0000 UTC m=+1420.553743469" watchObservedRunningTime="2026-01-21 07:17:59.324210791 +0000 UTC m=+1420.554556693" Jan 21 07:17:59 crc kubenswrapper[4893]: I0121 07:17:59.324639 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6be46ad8-4b10-4fdf-8c34-e9003a28acbd-operator-scripts\") pod \"6be46ad8-4b10-4fdf-8c34-e9003a28acbd\" (UID: \"6be46ad8-4b10-4fdf-8c34-e9003a28acbd\") " Jan 21 07:17:59 crc kubenswrapper[4893]: I0121 07:17:59.324779 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6kkx\" (UniqueName: \"kubernetes.io/projected/6be46ad8-4b10-4fdf-8c34-e9003a28acbd-kube-api-access-x6kkx\") pod \"6be46ad8-4b10-4fdf-8c34-e9003a28acbd\" (UID: \"6be46ad8-4b10-4fdf-8c34-e9003a28acbd\") " Jan 21 07:17:59 crc kubenswrapper[4893]: I0121 07:17:59.325856 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6be46ad8-4b10-4fdf-8c34-e9003a28acbd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6be46ad8-4b10-4fdf-8c34-e9003a28acbd" (UID: "6be46ad8-4b10-4fdf-8c34-e9003a28acbd"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:17:59 crc kubenswrapper[4893]: I0121 07:17:59.326389 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxghl\" (UniqueName: \"kubernetes.io/projected/828f06a8-358b-486a-9339-520cba2baf52-kube-api-access-kxghl\") on node \"crc\" DevicePath \"\"" Jan 21 07:17:59 crc kubenswrapper[4893]: I0121 07:17:59.326418 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3284c32-3995-4e0e-a6ee-15a79317eaab-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:17:59 crc kubenswrapper[4893]: I0121 07:17:59.326432 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/552962c7-46c0-4f3f-826e-3c99b06f6c61-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:17:59 crc kubenswrapper[4893]: I0121 07:17:59.326446 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/828f06a8-358b-486a-9339-520cba2baf52-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:17:59 crc kubenswrapper[4893]: I0121 07:17:59.326458 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khlwb\" (UniqueName: \"kubernetes.io/projected/552962c7-46c0-4f3f-826e-3c99b06f6c61-kube-api-access-khlwb\") on node \"crc\" DevicePath \"\"" Jan 21 07:17:59 crc kubenswrapper[4893]: I0121 07:17:59.326469 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dj9jm\" (UniqueName: \"kubernetes.io/projected/f3284c32-3995-4e0e-a6ee-15a79317eaab-kube-api-access-dj9jm\") on node \"crc\" DevicePath \"\"" Jan 21 07:17:59 crc kubenswrapper[4893]: I0121 07:17:59.326481 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6be46ad8-4b10-4fdf-8c34-e9003a28acbd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:17:59 crc kubenswrapper[4893]: I0121 07:17:59.334330 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6be46ad8-4b10-4fdf-8c34-e9003a28acbd-kube-api-access-x6kkx" (OuterVolumeSpecName: "kube-api-access-x6kkx") pod "6be46ad8-4b10-4fdf-8c34-e9003a28acbd" (UID: "6be46ad8-4b10-4fdf-8c34-e9003a28acbd"). InnerVolumeSpecName "kube-api-access-x6kkx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:17:59 crc kubenswrapper[4893]: I0121 07:17:59.427216 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6kkx\" (UniqueName: \"kubernetes.io/projected/6be46ad8-4b10-4fdf-8c34-e9003a28acbd-kube-api-access-x6kkx\") on node \"crc\" DevicePath \"\"" Jan 21 07:17:59 crc kubenswrapper[4893]: I0121 07:17:59.744871 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8db84466c-kt44r" Jan 21 07:17:59 crc kubenswrapper[4893]: I0121 07:17:59.804759 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-6rcdb"] Jan 21 07:17:59 crc kubenswrapper[4893]: I0121 07:17:59.805652 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67fdf7998c-6rcdb" podUID="0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9" containerName="dnsmasq-dns" containerID="cri-o://61b59bbe45a6257a84294e3931fa1c34e15d8872b46ddbb3cdf0c4060438c2d7" gracePeriod=10 Jan 21 07:18:00 crc kubenswrapper[4893]: I0121 07:18:00.324615 4893 generic.go:334] "Generic (PLEG): container finished" podID="0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9" containerID="61b59bbe45a6257a84294e3931fa1c34e15d8872b46ddbb3cdf0c4060438c2d7" exitCode=0 Jan 21 07:18:00 crc kubenswrapper[4893]: I0121 07:18:00.324707 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-6rcdb" event={"ID":"0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9","Type":"ContainerDied","Data":"61b59bbe45a6257a84294e3931fa1c34e15d8872b46ddbb3cdf0c4060438c2d7"} Jan 21 07:18:00 crc kubenswrapper[4893]: I0121 07:18:00.495958 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67fdf7998c-6rcdb" Jan 21 07:18:00 crc kubenswrapper[4893]: I0121 07:18:00.656004 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xc446\" (UniqueName: \"kubernetes.io/projected/0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9-kube-api-access-xc446\") pod \"0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9\" (UID: \"0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9\") " Jan 21 07:18:00 crc kubenswrapper[4893]: I0121 07:18:00.656070 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9-config\") pod \"0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9\" (UID: \"0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9\") " Jan 21 07:18:00 crc kubenswrapper[4893]: I0121 07:18:00.656210 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9-dns-svc\") pod \"0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9\" (UID: \"0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9\") " Jan 21 07:18:00 crc kubenswrapper[4893]: I0121 07:18:00.656268 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9-ovsdbserver-sb\") pod \"0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9\" (UID: \"0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9\") " Jan 21 07:18:00 crc kubenswrapper[4893]: I0121 07:18:00.656309 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9-ovsdbserver-nb\") pod \"0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9\" (UID: \"0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9\") " 
Jan 21 07:18:00 crc kubenswrapper[4893]: I0121 07:18:00.661532 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9-kube-api-access-xc446" (OuterVolumeSpecName: "kube-api-access-xc446") pod "0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9" (UID: "0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9"). InnerVolumeSpecName "kube-api-access-xc446". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 07:18:00 crc kubenswrapper[4893]: I0121 07:18:00.701989 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9" (UID: "0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 07:18:00 crc kubenswrapper[4893]: I0121 07:18:00.706604 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9" (UID: "0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 07:18:00 crc kubenswrapper[4893]: I0121 07:18:00.721727 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9-config" (OuterVolumeSpecName: "config") pod "0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9" (UID: "0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 07:18:00 crc kubenswrapper[4893]: I0121 07:18:00.723595 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9" (UID: "0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 07:18:00 crc kubenswrapper[4893]: I0121 07:18:00.758368 4893 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 21 07:18:00 crc kubenswrapper[4893]: I0121 07:18:00.758402 4893 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 21 07:18:00 crc kubenswrapper[4893]: I0121 07:18:00.758412 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xc446\" (UniqueName: \"kubernetes.io/projected/0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9-kube-api-access-xc446\") on node \"crc\" DevicePath \"\""
Jan 21 07:18:00 crc kubenswrapper[4893]: I0121 07:18:00.758425 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9-config\") on node \"crc\" DevicePath \"\""
Jan 21 07:18:00 crc kubenswrapper[4893]: I0121 07:18:00.758434 4893 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 21 07:18:01 crc kubenswrapper[4893]: I0121 07:18:01.374982 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-6rcdb" event={"ID":"0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9","Type":"ContainerDied","Data":"6a6afe3eb65091176eb9d36a8db9bc2c14ad5335b5982e1d983351e2024cfd0f"}
Jan 21 07:18:01 crc kubenswrapper[4893]: I0121 07:18:01.375081 4893 scope.go:117] "RemoveContainer" containerID="61b59bbe45a6257a84294e3931fa1c34e15d8872b46ddbb3cdf0c4060438c2d7"
Jan 21 07:18:01 crc kubenswrapper[4893]: I0121 07:18:01.375250 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67fdf7998c-6rcdb"
Jan 21 07:18:01 crc kubenswrapper[4893]: I0121 07:18:01.411422 4893 scope.go:117] "RemoveContainer" containerID="8f8af5b9407014ae9636590d0a04ce6566a90295eb96e378aabe705dfc7f2f6d"
Jan 21 07:18:01 crc kubenswrapper[4893]: I0121 07:18:01.431515 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-6rcdb"]
Jan 21 07:18:01 crc kubenswrapper[4893]: I0121 07:18:01.440972 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-6rcdb"]
Jan 21 07:18:01 crc kubenswrapper[4893]: I0121 07:18:01.593469 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9" path="/var/lib/kubelet/pods/0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9/volumes"
Jan 21 07:18:02 crc kubenswrapper[4893]: E0121 07:18:02.471947 4893 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ba738c9_29c6_492d_acdc_0854042df9dc.slice/crio-conmon-3e583830442cadc155ced0fc3acf18ce7212107354521b1fadfeae0a106231bc.scope\": RecentStats: unable to find data in memory cache]"
Jan 21 07:18:03 crc kubenswrapper[4893]: I0121 07:18:03.397523 4893 generic.go:334] "Generic (PLEG): container finished" podID="fff7b3eb-e8c3-4d58-932b-3738b1e8dffa" containerID="4753b38a6732fb0ac9424a2d22e75d9b2eaa0d229b73057f9c895b0a284de4ba" exitCode=0
Jan 21 07:18:03 crc kubenswrapper[4893]: I0121 07:18:03.397618 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bwx97" event={"ID":"fff7b3eb-e8c3-4d58-932b-3738b1e8dffa","Type":"ContainerDied","Data":"4753b38a6732fb0ac9424a2d22e75d9b2eaa0d229b73057f9c895b0a284de4ba"}
Jan 21 07:18:03 crc kubenswrapper[4893]: I0121 07:18:03.399961 4893 generic.go:334] "Generic (PLEG): container finished" podID="3ba738c9-29c6-492d-acdc-0854042df9dc" containerID="3e583830442cadc155ced0fc3acf18ce7212107354521b1fadfeae0a106231bc" exitCode=0
Jan 21 07:18:03 crc kubenswrapper[4893]: I0121 07:18:03.400028 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-d9tjm" event={"ID":"3ba738c9-29c6-492d-acdc-0854042df9dc","Type":"ContainerDied","Data":"3e583830442cadc155ced0fc3acf18ce7212107354521b1fadfeae0a106231bc"}
Jan 21 07:18:04 crc kubenswrapper[4893]: I0121 07:18:04.736986 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-d9tjm"
Jan 21 07:18:04 crc kubenswrapper[4893]: I0121 07:18:04.869291 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ba738c9-29c6-492d-acdc-0854042df9dc-combined-ca-bundle\") pod \"3ba738c9-29c6-492d-acdc-0854042df9dc\" (UID: \"3ba738c9-29c6-492d-acdc-0854042df9dc\") "
Jan 21 07:18:04 crc kubenswrapper[4893]: I0121 07:18:04.869437 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5sc4s\" (UniqueName: \"kubernetes.io/projected/3ba738c9-29c6-492d-acdc-0854042df9dc-kube-api-access-5sc4s\") pod \"3ba738c9-29c6-492d-acdc-0854042df9dc\" (UID: \"3ba738c9-29c6-492d-acdc-0854042df9dc\") "
Jan 21 07:18:04 crc kubenswrapper[4893]: I0121 07:18:04.869472 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ba738c9-29c6-492d-acdc-0854042df9dc-config-data\") pod \"3ba738c9-29c6-492d-acdc-0854042df9dc\" (UID: \"3ba738c9-29c6-492d-acdc-0854042df9dc\") "
Jan 21 07:18:04 crc kubenswrapper[4893]: I0121 07:18:04.877428 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ba738c9-29c6-492d-acdc-0854042df9dc-kube-api-access-5sc4s" (OuterVolumeSpecName: "kube-api-access-5sc4s") pod "3ba738c9-29c6-492d-acdc-0854042df9dc" (UID: "3ba738c9-29c6-492d-acdc-0854042df9dc"). InnerVolumeSpecName "kube-api-access-5sc4s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 07:18:04 crc kubenswrapper[4893]: I0121 07:18:04.880150 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-bwx97"
Jan 21 07:18:04 crc kubenswrapper[4893]: I0121 07:18:04.919836 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ba738c9-29c6-492d-acdc-0854042df9dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ba738c9-29c6-492d-acdc-0854042df9dc" (UID: "3ba738c9-29c6-492d-acdc-0854042df9dc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 07:18:04 crc kubenswrapper[4893]: I0121 07:18:04.924362 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ba738c9-29c6-492d-acdc-0854042df9dc-config-data" (OuterVolumeSpecName: "config-data") pod "3ba738c9-29c6-492d-acdc-0854042df9dc" (UID: "3ba738c9-29c6-492d-acdc-0854042df9dc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 07:18:04 crc kubenswrapper[4893]: I0121 07:18:04.970926 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5sc4s\" (UniqueName: \"kubernetes.io/projected/3ba738c9-29c6-492d-acdc-0854042df9dc-kube-api-access-5sc4s\") on node \"crc\" DevicePath \"\""
Jan 21 07:18:04 crc kubenswrapper[4893]: I0121 07:18:04.970959 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ba738c9-29c6-492d-acdc-0854042df9dc-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 07:18:04 crc kubenswrapper[4893]: I0121 07:18:04.970969 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ba738c9-29c6-492d-acdc-0854042df9dc-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 07:18:05 crc kubenswrapper[4893]: I0121 07:18:05.073551 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fff7b3eb-e8c3-4d58-932b-3738b1e8dffa-config-data\") pod \"fff7b3eb-e8c3-4d58-932b-3738b1e8dffa\" (UID: \"fff7b3eb-e8c3-4d58-932b-3738b1e8dffa\") "
Jan 21 07:18:05 crc kubenswrapper[4893]: I0121 07:18:05.073614 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fff7b3eb-e8c3-4d58-932b-3738b1e8dffa-combined-ca-bundle\") pod \"fff7b3eb-e8c3-4d58-932b-3738b1e8dffa\" (UID: \"fff7b3eb-e8c3-4d58-932b-3738b1e8dffa\") "
Jan 21 07:18:05 crc kubenswrapper[4893]: I0121 07:18:05.073767 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fff7b3eb-e8c3-4d58-932b-3738b1e8dffa-db-sync-config-data\") pod \"fff7b3eb-e8c3-4d58-932b-3738b1e8dffa\" (UID: \"fff7b3eb-e8c3-4d58-932b-3738b1e8dffa\") "
Jan 21 07:18:05 crc kubenswrapper[4893]: I0121 07:18:05.073892 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ct25\" (UniqueName: \"kubernetes.io/projected/fff7b3eb-e8c3-4d58-932b-3738b1e8dffa-kube-api-access-8ct25\") pod \"fff7b3eb-e8c3-4d58-932b-3738b1e8dffa\" (UID: \"fff7b3eb-e8c3-4d58-932b-3738b1e8dffa\") "
Jan 21 07:18:05 crc kubenswrapper[4893]: I0121 07:18:05.077071 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fff7b3eb-e8c3-4d58-932b-3738b1e8dffa-kube-api-access-8ct25" (OuterVolumeSpecName: "kube-api-access-8ct25") pod "fff7b3eb-e8c3-4d58-932b-3738b1e8dffa" (UID: "fff7b3eb-e8c3-4d58-932b-3738b1e8dffa"). InnerVolumeSpecName "kube-api-access-8ct25". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 07:18:05 crc kubenswrapper[4893]: I0121 07:18:05.078131 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fff7b3eb-e8c3-4d58-932b-3738b1e8dffa-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "fff7b3eb-e8c3-4d58-932b-3738b1e8dffa" (UID: "fff7b3eb-e8c3-4d58-932b-3738b1e8dffa"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 07:18:05 crc kubenswrapper[4893]: I0121 07:18:05.092652 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fff7b3eb-e8c3-4d58-932b-3738b1e8dffa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fff7b3eb-e8c3-4d58-932b-3738b1e8dffa" (UID: "fff7b3eb-e8c3-4d58-932b-3738b1e8dffa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 07:18:05 crc kubenswrapper[4893]: I0121 07:18:05.107976 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fff7b3eb-e8c3-4d58-932b-3738b1e8dffa-config-data" (OuterVolumeSpecName: "config-data") pod "fff7b3eb-e8c3-4d58-932b-3738b1e8dffa" (UID: "fff7b3eb-e8c3-4d58-932b-3738b1e8dffa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 07:18:05 crc kubenswrapper[4893]: I0121 07:18:05.175727 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ct25\" (UniqueName: \"kubernetes.io/projected/fff7b3eb-e8c3-4d58-932b-3738b1e8dffa-kube-api-access-8ct25\") on node \"crc\" DevicePath \"\""
Jan 21 07:18:05 crc kubenswrapper[4893]: I0121 07:18:05.175768 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fff7b3eb-e8c3-4d58-932b-3738b1e8dffa-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 07:18:05 crc kubenswrapper[4893]: I0121 07:18:05.175781 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fff7b3eb-e8c3-4d58-932b-3738b1e8dffa-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 07:18:05 crc kubenswrapper[4893]: I0121 07:18:05.175864 4893 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fff7b3eb-e8c3-4d58-932b-3738b1e8dffa-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 07:18:05 crc kubenswrapper[4893]: I0121 07:18:05.419244 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-d9tjm" event={"ID":"3ba738c9-29c6-492d-acdc-0854042df9dc","Type":"ContainerDied","Data":"4ccd81f1571da436a2915861d1e4bea7f6691a4a24f2f532f06eb6045e4d7b8c"}
Jan 21 07:18:05 crc kubenswrapper[4893]: I0121 07:18:05.419557 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ccd81f1571da436a2915861d1e4bea7f6691a4a24f2f532f06eb6045e4d7b8c"
Jan 21 07:18:05 crc kubenswrapper[4893]: I0121 07:18:05.419278 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-d9tjm"
Jan 21 07:18:05 crc kubenswrapper[4893]: I0121 07:18:05.421096 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bwx97" event={"ID":"fff7b3eb-e8c3-4d58-932b-3738b1e8dffa","Type":"ContainerDied","Data":"a2ac94139ecece615d391f1f9904f05b71f88cd1b6b44013e9ee7c6f8c3c0624"}
Jan 21 07:18:05 crc kubenswrapper[4893]: I0121 07:18:05.421128 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2ac94139ecece615d391f1f9904f05b71f88cd1b6b44013e9ee7c6f8c3c0624"
Jan 21 07:18:05 crc kubenswrapper[4893]: I0121 07:18:05.421126 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-bwx97"
Jan 21 07:18:05 crc kubenswrapper[4893]: I0121 07:18:05.924014 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-767d96458c-4g2kt"]
Jan 21 07:18:05 crc kubenswrapper[4893]: E0121 07:18:05.924407 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="828f06a8-358b-486a-9339-520cba2baf52" containerName="mariadb-account-create-update"
Jan 21 07:18:05 crc kubenswrapper[4893]: I0121 07:18:05.924420 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="828f06a8-358b-486a-9339-520cba2baf52" containerName="mariadb-account-create-update"
Jan 21 07:18:05 crc kubenswrapper[4893]: E0121 07:18:05.924432 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b624b5ac-d2e6-442d-8411-656210764688" containerName="mariadb-database-create"
Jan 21 07:18:05 crc kubenswrapper[4893]: I0121 07:18:05.924438 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="b624b5ac-d2e6-442d-8411-656210764688" containerName="mariadb-database-create"
Jan 21 07:18:05 crc kubenswrapper[4893]: E0121 07:18:05.924452 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6be46ad8-4b10-4fdf-8c34-e9003a28acbd" containerName="mariadb-account-create-update"
Jan 21 07:18:05 crc kubenswrapper[4893]: I0121 07:18:05.924459 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="6be46ad8-4b10-4fdf-8c34-e9003a28acbd" containerName="mariadb-account-create-update"
Jan 21 07:18:05 crc kubenswrapper[4893]: E0121 07:18:05.924469 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="552962c7-46c0-4f3f-826e-3c99b06f6c61" containerName="mariadb-database-create"
Jan 21 07:18:05 crc kubenswrapper[4893]: I0121 07:18:05.924474 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="552962c7-46c0-4f3f-826e-3c99b06f6c61" containerName="mariadb-database-create"
Jan 21 07:18:05 crc kubenswrapper[4893]: E0121 07:18:05.924487 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9" containerName="init"
Jan 21 07:18:05 crc kubenswrapper[4893]: I0121 07:18:05.924493 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9" containerName="init"
Jan 21 07:18:05 crc kubenswrapper[4893]: E0121 07:18:05.924510 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3284c32-3995-4e0e-a6ee-15a79317eaab" containerName="mariadb-account-create-update"
Jan 21 07:18:05 crc kubenswrapper[4893]: I0121 07:18:05.924516 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3284c32-3995-4e0e-a6ee-15a79317eaab" containerName="mariadb-account-create-update"
Jan 21 07:18:05 crc kubenswrapper[4893]: E0121 07:18:05.924528 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ba738c9-29c6-492d-acdc-0854042df9dc" containerName="keystone-db-sync"
Jan 21 07:18:05 crc kubenswrapper[4893]: I0121 07:18:05.924534 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ba738c9-29c6-492d-acdc-0854042df9dc" containerName="keystone-db-sync"
Jan 21 07:18:05 crc kubenswrapper[4893]: E0121 07:18:05.924548 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fff7b3eb-e8c3-4d58-932b-3738b1e8dffa" containerName="glance-db-sync"
Jan 21 07:18:05 crc kubenswrapper[4893]: I0121 07:18:05.924553 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="fff7b3eb-e8c3-4d58-932b-3738b1e8dffa" containerName="glance-db-sync"
Jan 21 07:18:05 crc kubenswrapper[4893]: E0121 07:18:05.924564 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b5b0846-7bdc-4019-bc94-ea4253cc9c8a" containerName="mariadb-database-create"
Jan 21 07:18:05 crc kubenswrapper[4893]: I0121 07:18:05.924572 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b5b0846-7bdc-4019-bc94-ea4253cc9c8a" containerName="mariadb-database-create"
Jan 21 07:18:05 crc kubenswrapper[4893]: E0121 07:18:05.924581 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9" containerName="dnsmasq-dns"
Jan 21 07:18:05 crc kubenswrapper[4893]: I0121 07:18:05.924587 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9" containerName="dnsmasq-dns"
Jan 21 07:18:05 crc kubenswrapper[4893]: I0121 07:18:05.924841 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3284c32-3995-4e0e-a6ee-15a79317eaab" containerName="mariadb-account-create-update"
Jan 21 07:18:05 crc kubenswrapper[4893]: I0121 07:18:05.924856 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="552962c7-46c0-4f3f-826e-3c99b06f6c61" containerName="mariadb-database-create"
Jan 21 07:18:05 crc kubenswrapper[4893]: I0121 07:18:05.924864 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="828f06a8-358b-486a-9339-520cba2baf52" containerName="mariadb-account-create-update"
Jan 21 07:18:05 crc kubenswrapper[4893]: I0121 07:18:05.924872 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dc0b4f1-9681-4ced-8f2d-c67fbbeca8b9" containerName="dnsmasq-dns"
Jan 21 07:18:05 crc kubenswrapper[4893]: I0121 07:18:05.924882 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b5b0846-7bdc-4019-bc94-ea4253cc9c8a" containerName="mariadb-database-create"
Jan 21 07:18:05 crc kubenswrapper[4893]: I0121 07:18:05.924893 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="fff7b3eb-e8c3-4d58-932b-3738b1e8dffa" containerName="glance-db-sync"
Jan 21 07:18:05 crc kubenswrapper[4893]: I0121 07:18:05.924900 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ba738c9-29c6-492d-acdc-0854042df9dc" containerName="keystone-db-sync"
Jan 21 07:18:05 crc kubenswrapper[4893]: I0121 07:18:05.924908 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="6be46ad8-4b10-4fdf-8c34-e9003a28acbd" containerName="mariadb-account-create-update"
Jan 21 07:18:05 crc kubenswrapper[4893]: I0121 07:18:05.924916 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="b624b5ac-d2e6-442d-8411-656210764688" containerName="mariadb-database-create"
Jan 21 07:18:05 crc kubenswrapper[4893]: I0121 07:18:05.925903 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-767d96458c-4g2kt"
Jan 21 07:18:05 crc kubenswrapper[4893]: I0121 07:18:05.952311 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-767d96458c-4g2kt"]
Jan 21 07:18:05 crc kubenswrapper[4893]: I0121 07:18:05.992589 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-chp5k"]
Jan 21 07:18:05 crc kubenswrapper[4893]: I0121 07:18:05.994224 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-chp5k"
Jan 21 07:18:05 crc kubenswrapper[4893]: I0121 07:18:05.999728 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 21 07:18:05 crc kubenswrapper[4893]: I0121 07:18:05.999969 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.000093 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.000138 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9b7vz"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.002175 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-chp5k"]
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.003422 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.005346 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16af4454-341e-4a5b-8203-0514725f1cbe-scripts\") pod \"keystone-bootstrap-chp5k\" (UID: \"16af4454-341e-4a5b-8203-0514725f1cbe\") " pod="openstack/keystone-bootstrap-chp5k"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.005459 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/697c9051-8a48-42e4-bd9d-e6cf287869a9-config\") pod \"dnsmasq-dns-767d96458c-4g2kt\" (UID: \"697c9051-8a48-42e4-bd9d-e6cf287869a9\") " pod="openstack/dnsmasq-dns-767d96458c-4g2kt"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.005583 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/16af4454-341e-4a5b-8203-0514725f1cbe-fernet-keys\") pod \"keystone-bootstrap-chp5k\" (UID: \"16af4454-341e-4a5b-8203-0514725f1cbe\") " pod="openstack/keystone-bootstrap-chp5k"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.005641 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/16af4454-341e-4a5b-8203-0514725f1cbe-credential-keys\") pod \"keystone-bootstrap-chp5k\" (UID: \"16af4454-341e-4a5b-8203-0514725f1cbe\") " pod="openstack/keystone-bootstrap-chp5k"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.005697 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16af4454-341e-4a5b-8203-0514725f1cbe-config-data\") pod \"keystone-bootstrap-chp5k\" (UID: \"16af4454-341e-4a5b-8203-0514725f1cbe\") " pod="openstack/keystone-bootstrap-chp5k"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.005732 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/697c9051-8a48-42e4-bd9d-e6cf287869a9-dns-svc\") pod \"dnsmasq-dns-767d96458c-4g2kt\" (UID: \"697c9051-8a48-42e4-bd9d-e6cf287869a9\") " pod="openstack/dnsmasq-dns-767d96458c-4g2kt"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.005816 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16af4454-341e-4a5b-8203-0514725f1cbe-combined-ca-bundle\") pod \"keystone-bootstrap-chp5k\" (UID: \"16af4454-341e-4a5b-8203-0514725f1cbe\") " pod="openstack/keystone-bootstrap-chp5k"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.005963 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9fq7\" (UniqueName: \"kubernetes.io/projected/16af4454-341e-4a5b-8203-0514725f1cbe-kube-api-access-m9fq7\") pod \"keystone-bootstrap-chp5k\" (UID: \"16af4454-341e-4a5b-8203-0514725f1cbe\") " pod="openstack/keystone-bootstrap-chp5k"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.006002 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/697c9051-8a48-42e4-bd9d-e6cf287869a9-ovsdbserver-sb\") pod \"dnsmasq-dns-767d96458c-4g2kt\" (UID: \"697c9051-8a48-42e4-bd9d-e6cf287869a9\") " pod="openstack/dnsmasq-dns-767d96458c-4g2kt"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.006030 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/697c9051-8a48-42e4-bd9d-e6cf287869a9-ovsdbserver-nb\") pod \"dnsmasq-dns-767d96458c-4g2kt\" (UID: \"697c9051-8a48-42e4-bd9d-e6cf287869a9\") " pod="openstack/dnsmasq-dns-767d96458c-4g2kt"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.006067 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/697c9051-8a48-42e4-bd9d-e6cf287869a9-dns-swift-storage-0\") pod \"dnsmasq-dns-767d96458c-4g2kt\" (UID: \"697c9051-8a48-42e4-bd9d-e6cf287869a9\") " pod="openstack/dnsmasq-dns-767d96458c-4g2kt"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.006107 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv8wp\" (UniqueName: \"kubernetes.io/projected/697c9051-8a48-42e4-bd9d-e6cf287869a9-kube-api-access-pv8wp\") pod \"dnsmasq-dns-767d96458c-4g2kt\" (UID: \"697c9051-8a48-42e4-bd9d-e6cf287869a9\") " pod="openstack/dnsmasq-dns-767d96458c-4g2kt"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.106748 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/16af4454-341e-4a5b-8203-0514725f1cbe-credential-keys\") pod \"keystone-bootstrap-chp5k\" (UID: \"16af4454-341e-4a5b-8203-0514725f1cbe\") " pod="openstack/keystone-bootstrap-chp5k"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.106798 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16af4454-341e-4a5b-8203-0514725f1cbe-config-data\") pod \"keystone-bootstrap-chp5k\" (UID: \"16af4454-341e-4a5b-8203-0514725f1cbe\") " pod="openstack/keystone-bootstrap-chp5k"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.106825 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/697c9051-8a48-42e4-bd9d-e6cf287869a9-dns-svc\") pod \"dnsmasq-dns-767d96458c-4g2kt\" (UID: \"697c9051-8a48-42e4-bd9d-e6cf287869a9\") " pod="openstack/dnsmasq-dns-767d96458c-4g2kt"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.106863 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16af4454-341e-4a5b-8203-0514725f1cbe-combined-ca-bundle\") pod \"keystone-bootstrap-chp5k\" (UID: \"16af4454-341e-4a5b-8203-0514725f1cbe\") " pod="openstack/keystone-bootstrap-chp5k"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.106890 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9fq7\" (UniqueName: \"kubernetes.io/projected/16af4454-341e-4a5b-8203-0514725f1cbe-kube-api-access-m9fq7\") pod \"keystone-bootstrap-chp5k\" (UID: \"16af4454-341e-4a5b-8203-0514725f1cbe\") " pod="openstack/keystone-bootstrap-chp5k"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.106918 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/697c9051-8a48-42e4-bd9d-e6cf287869a9-ovsdbserver-sb\") pod \"dnsmasq-dns-767d96458c-4g2kt\" (UID: \"697c9051-8a48-42e4-bd9d-e6cf287869a9\") " pod="openstack/dnsmasq-dns-767d96458c-4g2kt"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.106939 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/697c9051-8a48-42e4-bd9d-e6cf287869a9-ovsdbserver-nb\") pod \"dnsmasq-dns-767d96458c-4g2kt\" (UID: \"697c9051-8a48-42e4-bd9d-e6cf287869a9\") " pod="openstack/dnsmasq-dns-767d96458c-4g2kt"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.106967 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/697c9051-8a48-42e4-bd9d-e6cf287869a9-dns-swift-storage-0\") pod \"dnsmasq-dns-767d96458c-4g2kt\" (UID: \"697c9051-8a48-42e4-bd9d-e6cf287869a9\") " pod="openstack/dnsmasq-dns-767d96458c-4g2kt"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.106990 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pv8wp\" (UniqueName: \"kubernetes.io/projected/697c9051-8a48-42e4-bd9d-e6cf287869a9-kube-api-access-pv8wp\") pod \"dnsmasq-dns-767d96458c-4g2kt\" (UID: \"697c9051-8a48-42e4-bd9d-e6cf287869a9\") " pod="openstack/dnsmasq-dns-767d96458c-4g2kt"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.107038 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16af4454-341e-4a5b-8203-0514725f1cbe-scripts\") pod \"keystone-bootstrap-chp5k\" (UID: \"16af4454-341e-4a5b-8203-0514725f1cbe\") " pod="openstack/keystone-bootstrap-chp5k"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.107089 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/697c9051-8a48-42e4-bd9d-e6cf287869a9-config\") pod \"dnsmasq-dns-767d96458c-4g2kt\" (UID: \"697c9051-8a48-42e4-bd9d-e6cf287869a9\") " pod="openstack/dnsmasq-dns-767d96458c-4g2kt"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.107145 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/16af4454-341e-4a5b-8203-0514725f1cbe-fernet-keys\") pod \"keystone-bootstrap-chp5k\" (UID: \"16af4454-341e-4a5b-8203-0514725f1cbe\") " pod="openstack/keystone-bootstrap-chp5k"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.108747 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/697c9051-8a48-42e4-bd9d-e6cf287869a9-ovsdbserver-sb\") pod \"dnsmasq-dns-767d96458c-4g2kt\" (UID: \"697c9051-8a48-42e4-bd9d-e6cf287869a9\") " pod="openstack/dnsmasq-dns-767d96458c-4g2kt"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.108785 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/697c9051-8a48-42e4-bd9d-e6cf287869a9-ovsdbserver-nb\") pod \"dnsmasq-dns-767d96458c-4g2kt\" (UID: \"697c9051-8a48-42e4-bd9d-e6cf287869a9\") " pod="openstack/dnsmasq-dns-767d96458c-4g2kt"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.109713 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/697c9051-8a48-42e4-bd9d-e6cf287869a9-dns-svc\") pod \"dnsmasq-dns-767d96458c-4g2kt\" (UID: \"697c9051-8a48-42e4-bd9d-e6cf287869a9\") " pod="openstack/dnsmasq-dns-767d96458c-4g2kt"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.110105 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/697c9051-8a48-42e4-bd9d-e6cf287869a9-dns-swift-storage-0\") pod \"dnsmasq-dns-767d96458c-4g2kt\" (UID: \"697c9051-8a48-42e4-bd9d-e6cf287869a9\") " pod="openstack/dnsmasq-dns-767d96458c-4g2kt"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.114315 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/16af4454-341e-4a5b-8203-0514725f1cbe-fernet-keys\") pod \"keystone-bootstrap-chp5k\" (UID: \"16af4454-341e-4a5b-8203-0514725f1cbe\") " pod="openstack/keystone-bootstrap-chp5k"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.115236 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/697c9051-8a48-42e4-bd9d-e6cf287869a9-config\") pod \"dnsmasq-dns-767d96458c-4g2kt\" (UID: \"697c9051-8a48-42e4-bd9d-e6cf287869a9\") " pod="openstack/dnsmasq-dns-767d96458c-4g2kt"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.126328 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16af4454-341e-4a5b-8203-0514725f1cbe-config-data\") pod \"keystone-bootstrap-chp5k\" (UID: \"16af4454-341e-4a5b-8203-0514725f1cbe\") " pod="openstack/keystone-bootstrap-chp5k"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.126592 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16af4454-341e-4a5b-8203-0514725f1cbe-scripts\") pod \"keystone-bootstrap-chp5k\" (UID: \"16af4454-341e-4a5b-8203-0514725f1cbe\") " pod="openstack/keystone-bootstrap-chp5k"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.127124 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16af4454-341e-4a5b-8203-0514725f1cbe-combined-ca-bundle\") pod \"keystone-bootstrap-chp5k\" (UID: \"16af4454-341e-4a5b-8203-0514725f1cbe\") " pod="openstack/keystone-bootstrap-chp5k"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.131254 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/16af4454-341e-4a5b-8203-0514725f1cbe-credential-keys\") pod \"keystone-bootstrap-chp5k\" (UID: \"16af4454-341e-4a5b-8203-0514725f1cbe\") " pod="openstack/keystone-bootstrap-chp5k"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.138502 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pv8wp\" (UniqueName: \"kubernetes.io/projected/697c9051-8a48-42e4-bd9d-e6cf287869a9-kube-api-access-pv8wp\") pod \"dnsmasq-dns-767d96458c-4g2kt\" (UID: \"697c9051-8a48-42e4-bd9d-e6cf287869a9\") " pod="openstack/dnsmasq-dns-767d96458c-4g2kt"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.170354 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9fq7\" (UniqueName: \"kubernetes.io/projected/16af4454-341e-4a5b-8203-0514725f1cbe-kube-api-access-m9fq7\") pod \"keystone-bootstrap-chp5k\" (UID: \"16af4454-341e-4a5b-8203-0514725f1cbe\") " pod="openstack/keystone-bootstrap-chp5k"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.207648 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-wrnx6"]
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.209050 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-wrnx6"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.225406 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-nzqck"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.225817 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.225974 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.238256 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-wrnx6"]
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.250347 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-767d96458c-4g2kt"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.274889 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-767d96458c-4g2kt"]
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.313764 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619-db-sync-config-data\") pod \"cinder-db-sync-wrnx6\" (UID: \"5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619\") " pod="openstack/cinder-db-sync-wrnx6"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.313834 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64976\" (UniqueName: \"kubernetes.io/projected/5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619-kube-api-access-64976\") pod \"cinder-db-sync-wrnx6\" (UID: \"5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619\") " pod="openstack/cinder-db-sync-wrnx6"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.313869 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619-config-data\") pod \"cinder-db-sync-wrnx6\" (UID: \"5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619\") " pod="openstack/cinder-db-sync-wrnx6"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.313908 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619-combined-ca-bundle\") pod \"cinder-db-sync-wrnx6\" (UID: \"5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619\") " pod="openstack/cinder-db-sync-wrnx6"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.313937 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619-etc-machine-id\") pod \"cinder-db-sync-wrnx6\" (UID: \"5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619\") " pod="openstack/cinder-db-sync-wrnx6"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.313984 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619-scripts\") pod \"cinder-db-sync-wrnx6\" (UID: \"5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619\") " pod="openstack/cinder-db-sync-wrnx6"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.329006 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-m2n6w"]
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.330927 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-m2n6w"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.333721 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-chp5k"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.337051 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-l8jtn"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.337089 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.337454 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.346421 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-ktqbh"]
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.347970 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-ktqbh"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.353771 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5fdbfbc95f-q4h9m"]
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.355192 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fdbfbc95f-q4h9m"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.355286 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.355344 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.355520 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-zfbzz"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.375432 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-m2n6w"]
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.408749 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-qffvz"]
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.410549 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-qffvz"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.415485 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.415561 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619-config-data\") pod \"cinder-db-sync-wrnx6\" (UID: \"5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619\") " pod="openstack/cinder-db-sync-wrnx6"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.415624 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619-combined-ca-bundle\") pod \"cinder-db-sync-wrnx6\" (UID: \"5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619\") " pod="openstack/cinder-db-sync-wrnx6"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.415646 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619-etc-machine-id\") pod \"cinder-db-sync-wrnx6\" (UID: \"5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619\") " pod="openstack/cinder-db-sync-wrnx6"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.423628 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619-scripts\") pod \"cinder-db-sync-wrnx6\" (UID: \"5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619\") " pod="openstack/cinder-db-sync-wrnx6"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.423743 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619-db-sync-config-data\") pod \"cinder-db-sync-wrnx6\" (UID: \"5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619\") " pod="openstack/cinder-db-sync-wrnx6"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.423812 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64976\" (UniqueName: \"kubernetes.io/projected/5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619-kube-api-access-64976\") pod \"cinder-db-sync-wrnx6\" (UID: \"5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619\") " pod="openstack/cinder-db-sync-wrnx6"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.431941 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619-etc-machine-id\") pod \"cinder-db-sync-wrnx6\" (UID: \"5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619\") " pod="openstack/cinder-db-sync-wrnx6"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.435889 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619-config-data\") pod \"cinder-db-sync-wrnx6\" (UID: \"5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619\") " pod="openstack/cinder-db-sync-wrnx6"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.439274 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-fbjn2"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.445258 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619-combined-ca-bundle\") pod \"cinder-db-sync-wrnx6\" (UID: \"5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619\") " pod="openstack/cinder-db-sync-wrnx6"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.446554 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619-scripts\") pod \"cinder-db-sync-wrnx6\" (UID: \"5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619\") " pod="openstack/cinder-db-sync-wrnx6"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.477257 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-ktqbh"]
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.487128 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64976\" (UniqueName: \"kubernetes.io/projected/5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619-kube-api-access-64976\") pod \"cinder-db-sync-wrnx6\" (UID: \"5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619\") " pod="openstack/cinder-db-sync-wrnx6"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.494732 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fdbfbc95f-q4h9m"]
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.498010 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619-db-sync-config-data\") pod \"cinder-db-sync-wrnx6\" (UID: \"5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619\") " pod="openstack/cinder-db-sync-wrnx6"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.526108 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15bb49fe-ded6-45cb-b094-05da46c3f9e8-combined-ca-bundle\") pod \"neutron-db-sync-m2n6w\" (UID: \"15bb49fe-ded6-45cb-b094-05da46c3f9e8\") " pod="openstack/neutron-db-sync-m2n6w"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.526189 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a2b7c6d-f80f-4ae0-9628-30dd29e491fe-scripts\") pod \"placement-db-sync-ktqbh\" (UID: \"9a2b7c6d-f80f-4ae0-9628-30dd29e491fe\") " pod="openstack/placement-db-sync-ktqbh"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.526225 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3ae6eb33-b5b8-4ed9-a227-b96f365a49a3-db-sync-config-data\") pod \"barbican-db-sync-qffvz\" (UID: \"3ae6eb33-b5b8-4ed9-a227-b96f365a49a3\") " pod="openstack/barbican-db-sync-qffvz"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.526436 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2lcb\" (UniqueName: \"kubernetes.io/projected/15bb49fe-ded6-45cb-b094-05da46c3f9e8-kube-api-access-g2lcb\") pod \"neutron-db-sync-m2n6w\" (UID: \"15bb49fe-ded6-45cb-b094-05da46c3f9e8\") " pod="openstack/neutron-db-sync-m2n6w"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.526491 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/15bb49fe-ded6-45cb-b094-05da46c3f9e8-config\") pod \"neutron-db-sync-m2n6w\" (UID: \"15bb49fe-ded6-45cb-b094-05da46c3f9e8\") " pod="openstack/neutron-db-sync-m2n6w"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.526526 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cf9f805e-56e2-4faf-a5ce-ead1041ef589-dns-swift-storage-0\") pod \"dnsmasq-dns-5fdbfbc95f-q4h9m\" (UID: \"cf9f805e-56e2-4faf-a5ce-ead1041ef589\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-q4h9m"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.530779 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a2b7c6d-f80f-4ae0-9628-30dd29e491fe-logs\") pod \"placement-db-sync-ktqbh\" (UID: \"9a2b7c6d-f80f-4ae0-9628-30dd29e491fe\") " pod="openstack/placement-db-sync-ktqbh"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.530916 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a2b7c6d-f80f-4ae0-9628-30dd29e491fe-config-data\") pod \"placement-db-sync-ktqbh\" (UID: \"9a2b7c6d-f80f-4ae0-9628-30dd29e491fe\") " pod="openstack/placement-db-sync-ktqbh"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.530985 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf9f805e-56e2-4faf-a5ce-ead1041ef589-ovsdbserver-sb\") pod \"dnsmasq-dns-5fdbfbc95f-q4h9m\" (UID: \"cf9f805e-56e2-4faf-a5ce-ead1041ef589\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-q4h9m"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.531033 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf9f805e-56e2-4faf-a5ce-ead1041ef589-ovsdbserver-nb\") pod \"dnsmasq-dns-5fdbfbc95f-q4h9m\" (UID: \"cf9f805e-56e2-4faf-a5ce-ead1041ef589\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-q4h9m"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.531063 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ae6eb33-b5b8-4ed9-a227-b96f365a49a3-combined-ca-bundle\") pod \"barbican-db-sync-qffvz\" (UID: \"3ae6eb33-b5b8-4ed9-a227-b96f365a49a3\") " pod="openstack/barbican-db-sync-qffvz"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.531110 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8tkf\" (UniqueName: \"kubernetes.io/projected/9a2b7c6d-f80f-4ae0-9628-30dd29e491fe-kube-api-access-c8tkf\") pod \"placement-db-sync-ktqbh\" (UID: \"9a2b7c6d-f80f-4ae0-9628-30dd29e491fe\") " pod="openstack/placement-db-sync-ktqbh"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.531185 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcd8j\" (UniqueName: \"kubernetes.io/projected/cf9f805e-56e2-4faf-a5ce-ead1041ef589-kube-api-access-hcd8j\") pod \"dnsmasq-dns-5fdbfbc95f-q4h9m\" (UID: \"cf9f805e-56e2-4faf-a5ce-ead1041ef589\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-q4h9m"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.531228 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf9f805e-56e2-4faf-a5ce-ead1041ef589-config\") pod \"dnsmasq-dns-5fdbfbc95f-q4h9m\" (UID: \"cf9f805e-56e2-4faf-a5ce-ead1041ef589\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-q4h9m"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.531274 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zdfr\" (UniqueName: \"kubernetes.io/projected/3ae6eb33-b5b8-4ed9-a227-b96f365a49a3-kube-api-access-8zdfr\") pod \"barbican-db-sync-qffvz\" (UID: \"3ae6eb33-b5b8-4ed9-a227-b96f365a49a3\") " pod="openstack/barbican-db-sync-qffvz"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.546904 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a2b7c6d-f80f-4ae0-9628-30dd29e491fe-combined-ca-bundle\") pod \"placement-db-sync-ktqbh\" (UID: \"9a2b7c6d-f80f-4ae0-9628-30dd29e491fe\") " pod="openstack/placement-db-sync-ktqbh"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.547050 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf9f805e-56e2-4faf-a5ce-ead1041ef589-dns-svc\") pod \"dnsmasq-dns-5fdbfbc95f-q4h9m\" (UID: \"cf9f805e-56e2-4faf-a5ce-ead1041ef589\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-q4h9m"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.551852 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-qffvz"]
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.565889 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-wrnx6"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.706053 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcd8j\" (UniqueName: \"kubernetes.io/projected/cf9f805e-56e2-4faf-a5ce-ead1041ef589-kube-api-access-hcd8j\") pod \"dnsmasq-dns-5fdbfbc95f-q4h9m\" (UID: \"cf9f805e-56e2-4faf-a5ce-ead1041ef589\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-q4h9m"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.706125 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf9f805e-56e2-4faf-a5ce-ead1041ef589-config\") pod \"dnsmasq-dns-5fdbfbc95f-q4h9m\" (UID: \"cf9f805e-56e2-4faf-a5ce-ead1041ef589\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-q4h9m"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.706162 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zdfr\" (UniqueName: \"kubernetes.io/projected/3ae6eb33-b5b8-4ed9-a227-b96f365a49a3-kube-api-access-8zdfr\") pod \"barbican-db-sync-qffvz\" (UID: \"3ae6eb33-b5b8-4ed9-a227-b96f365a49a3\") " pod="openstack/barbican-db-sync-qffvz"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.706196 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a2b7c6d-f80f-4ae0-9628-30dd29e491fe-combined-ca-bundle\") pod \"placement-db-sync-ktqbh\" (UID: \"9a2b7c6d-f80f-4ae0-9628-30dd29e491fe\") " pod="openstack/placement-db-sync-ktqbh"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.706215 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf9f805e-56e2-4faf-a5ce-ead1041ef589-dns-svc\") pod \"dnsmasq-dns-5fdbfbc95f-q4h9m\" (UID: \"cf9f805e-56e2-4faf-a5ce-ead1041ef589\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-q4h9m"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.706287 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15bb49fe-ded6-45cb-b094-05da46c3f9e8-combined-ca-bundle\") pod \"neutron-db-sync-m2n6w\" (UID: \"15bb49fe-ded6-45cb-b094-05da46c3f9e8\") " pod="openstack/neutron-db-sync-m2n6w"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.706323 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3ae6eb33-b5b8-4ed9-a227-b96f365a49a3-db-sync-config-data\") pod \"barbican-db-sync-qffvz\" (UID: \"3ae6eb33-b5b8-4ed9-a227-b96f365a49a3\") " pod="openstack/barbican-db-sync-qffvz"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.706343 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a2b7c6d-f80f-4ae0-9628-30dd29e491fe-scripts\") pod \"placement-db-sync-ktqbh\" (UID: \"9a2b7c6d-f80f-4ae0-9628-30dd29e491fe\") " pod="openstack/placement-db-sync-ktqbh"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.706368 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2lcb\" (UniqueName: \"kubernetes.io/projected/15bb49fe-ded6-45cb-b094-05da46c3f9e8-kube-api-access-g2lcb\") pod \"neutron-db-sync-m2n6w\" (UID: \"15bb49fe-ded6-45cb-b094-05da46c3f9e8\") " pod="openstack/neutron-db-sync-m2n6w"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.706426 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/15bb49fe-ded6-45cb-b094-05da46c3f9e8-config\") pod \"neutron-db-sync-m2n6w\" (UID: \"15bb49fe-ded6-45cb-b094-05da46c3f9e8\") " pod="openstack/neutron-db-sync-m2n6w"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.706455 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cf9f805e-56e2-4faf-a5ce-ead1041ef589-dns-swift-storage-0\") pod \"dnsmasq-dns-5fdbfbc95f-q4h9m\" (UID: \"cf9f805e-56e2-4faf-a5ce-ead1041ef589\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-q4h9m"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.706491 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a2b7c6d-f80f-4ae0-9628-30dd29e491fe-logs\") pod \"placement-db-sync-ktqbh\" (UID: \"9a2b7c6d-f80f-4ae0-9628-30dd29e491fe\") " pod="openstack/placement-db-sync-ktqbh"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.706527 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a2b7c6d-f80f-4ae0-9628-30dd29e491fe-config-data\") pod \"placement-db-sync-ktqbh\" (UID: \"9a2b7c6d-f80f-4ae0-9628-30dd29e491fe\") " pod="openstack/placement-db-sync-ktqbh"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.706576 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf9f805e-56e2-4faf-a5ce-ead1041ef589-ovsdbserver-sb\") pod \"dnsmasq-dns-5fdbfbc95f-q4h9m\" (UID: \"cf9f805e-56e2-4faf-a5ce-ead1041ef589\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-q4h9m"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.706600 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf9f805e-56e2-4faf-a5ce-ead1041ef589-ovsdbserver-nb\") pod \"dnsmasq-dns-5fdbfbc95f-q4h9m\" (UID: \"cf9f805e-56e2-4faf-a5ce-ead1041ef589\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-q4h9m"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.706628 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ae6eb33-b5b8-4ed9-a227-b96f365a49a3-combined-ca-bundle\") pod \"barbican-db-sync-qffvz\" (UID: \"3ae6eb33-b5b8-4ed9-a227-b96f365a49a3\") " pod="openstack/barbican-db-sync-qffvz"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.706696 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8tkf\" (UniqueName: \"kubernetes.io/projected/9a2b7c6d-f80f-4ae0-9628-30dd29e491fe-kube-api-access-c8tkf\") pod \"placement-db-sync-ktqbh\" (UID: \"9a2b7c6d-f80f-4ae0-9628-30dd29e491fe\") " pod="openstack/placement-db-sync-ktqbh"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.709498 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf9f805e-56e2-4faf-a5ce-ead1041ef589-config\") pod \"dnsmasq-dns-5fdbfbc95f-q4h9m\" (UID: \"cf9f805e-56e2-4faf-a5ce-ead1041ef589\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-q4h9m"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.712063 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cf9f805e-56e2-4faf-a5ce-ead1041ef589-dns-swift-storage-0\") pod \"dnsmasq-dns-5fdbfbc95f-q4h9m\" (UID: \"cf9f805e-56e2-4faf-a5ce-ead1041ef589\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-q4h9m"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.712333 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a2b7c6d-f80f-4ae0-9628-30dd29e491fe-logs\") pod \"placement-db-sync-ktqbh\" (UID: \"9a2b7c6d-f80f-4ae0-9628-30dd29e491fe\") " pod="openstack/placement-db-sync-ktqbh"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.713400 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf9f805e-56e2-4faf-a5ce-ead1041ef589-dns-svc\") pod \"dnsmasq-dns-5fdbfbc95f-q4h9m\" (UID: \"cf9f805e-56e2-4faf-a5ce-ead1041ef589\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-q4h9m"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.715487 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf9f805e-56e2-4faf-a5ce-ead1041ef589-ovsdbserver-nb\") pod \"dnsmasq-dns-5fdbfbc95f-q4h9m\" (UID: \"cf9f805e-56e2-4faf-a5ce-ead1041ef589\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-q4h9m"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.716078 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf9f805e-56e2-4faf-a5ce-ead1041ef589-ovsdbserver-sb\") pod \"dnsmasq-dns-5fdbfbc95f-q4h9m\" (UID: \"cf9f805e-56e2-4faf-a5ce-ead1041ef589\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-q4h9m"
Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.725427 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName:
\"kubernetes.io/secret/15bb49fe-ded6-45cb-b094-05da46c3f9e8-combined-ca-bundle\") pod \"neutron-db-sync-m2n6w\" (UID: \"15bb49fe-ded6-45cb-b094-05da46c3f9e8\") " pod="openstack/neutron-db-sync-m2n6w" Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.726260 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3ae6eb33-b5b8-4ed9-a227-b96f365a49a3-db-sync-config-data\") pod \"barbican-db-sync-qffvz\" (UID: \"3ae6eb33-b5b8-4ed9-a227-b96f365a49a3\") " pod="openstack/barbican-db-sync-qffvz" Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.726494 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a2b7c6d-f80f-4ae0-9628-30dd29e491fe-config-data\") pod \"placement-db-sync-ktqbh\" (UID: \"9a2b7c6d-f80f-4ae0-9628-30dd29e491fe\") " pod="openstack/placement-db-sync-ktqbh" Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.727728 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/15bb49fe-ded6-45cb-b094-05da46c3f9e8-config\") pod \"neutron-db-sync-m2n6w\" (UID: \"15bb49fe-ded6-45cb-b094-05da46c3f9e8\") " pod="openstack/neutron-db-sync-m2n6w" Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.735909 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a2b7c6d-f80f-4ae0-9628-30dd29e491fe-scripts\") pod \"placement-db-sync-ktqbh\" (UID: \"9a2b7c6d-f80f-4ae0-9628-30dd29e491fe\") " pod="openstack/placement-db-sync-ktqbh" Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.737016 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8tkf\" (UniqueName: \"kubernetes.io/projected/9a2b7c6d-f80f-4ae0-9628-30dd29e491fe-kube-api-access-c8tkf\") pod \"placement-db-sync-ktqbh\" (UID: \"9a2b7c6d-f80f-4ae0-9628-30dd29e491fe\") " pod="openstack/placement-db-sync-ktqbh" Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.739193 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ae6eb33-b5b8-4ed9-a227-b96f365a49a3-combined-ca-bundle\") pod \"barbican-db-sync-qffvz\" (UID: \"3ae6eb33-b5b8-4ed9-a227-b96f365a49a3\") " pod="openstack/barbican-db-sync-qffvz" Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.749236 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zdfr\" (UniqueName: \"kubernetes.io/projected/3ae6eb33-b5b8-4ed9-a227-b96f365a49a3-kube-api-access-8zdfr\") pod \"barbican-db-sync-qffvz\" (UID: \"3ae6eb33-b5b8-4ed9-a227-b96f365a49a3\") " pod="openstack/barbican-db-sync-qffvz" Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.750280 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a2b7c6d-f80f-4ae0-9628-30dd29e491fe-combined-ca-bundle\") pod \"placement-db-sync-ktqbh\" (UID: \"9a2b7c6d-f80f-4ae0-9628-30dd29e491fe\") " pod="openstack/placement-db-sync-ktqbh" Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.755740 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fdbfbc95f-q4h9m"] Jan 21 07:18:06 crc kubenswrapper[4893]: E0121 07:18:06.756463 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-hcd8j], unattached volumes=[], failed to process volumes=[]: context 
canceled" pod="openstack/dnsmasq-dns-5fdbfbc95f-q4h9m" podUID="cf9f805e-56e2-4faf-a5ce-ead1041ef589" Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.767570 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2lcb\" (UniqueName: \"kubernetes.io/projected/15bb49fe-ded6-45cb-b094-05da46c3f9e8-kube-api-access-g2lcb\") pod \"neutron-db-sync-m2n6w\" (UID: \"15bb49fe-ded6-45cb-b094-05da46c3f9e8\") " pod="openstack/neutron-db-sync-m2n6w" Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.781665 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-qffvz" Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.787832 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.787896 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcd8j\" (UniqueName: \"kubernetes.io/projected/cf9f805e-56e2-4faf-a5ce-ead1041ef589-kube-api-access-hcd8j\") pod \"dnsmasq-dns-5fdbfbc95f-q4h9m\" (UID: \"cf9f805e-56e2-4faf-a5ce-ead1041ef589\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-q4h9m" Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.790996 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.794804 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.794948 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.816318 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.817869 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqthd\" (UniqueName: \"kubernetes.io/projected/12e11571-a021-4df2-a0da-69f56335a8c8-kube-api-access-sqthd\") pod \"ceilometer-0\" (UID: \"12e11571-a021-4df2-a0da-69f56335a8c8\") " pod="openstack/ceilometer-0" Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.817981 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12e11571-a021-4df2-a0da-69f56335a8c8-scripts\") pod \"ceilometer-0\" (UID: \"12e11571-a021-4df2-a0da-69f56335a8c8\") " pod="openstack/ceilometer-0" Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.818328 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/12e11571-a021-4df2-a0da-69f56335a8c8-log-httpd\") pod \"ceilometer-0\" (UID: \"12e11571-a021-4df2-a0da-69f56335a8c8\") " pod="openstack/ceilometer-0" Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.818407 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12e11571-a021-4df2-a0da-69f56335a8c8-config-data\") pod \"ceilometer-0\" (UID: \"12e11571-a021-4df2-a0da-69f56335a8c8\") " pod="openstack/ceilometer-0" Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.818428 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/12e11571-a021-4df2-a0da-69f56335a8c8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"12e11571-a021-4df2-a0da-69f56335a8c8\") " pod="openstack/ceilometer-0" Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.818456 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/12e11571-a021-4df2-a0da-69f56335a8c8-run-httpd\") pod \"ceilometer-0\" (UID: \"12e11571-a021-4df2-a0da-69f56335a8c8\") " pod="openstack/ceilometer-0" Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.818607 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12e11571-a021-4df2-a0da-69f56335a8c8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"12e11571-a021-4df2-a0da-69f56335a8c8\") " pod="openstack/ceilometer-0" Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.868770 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-zvqkt"] Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.873197 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6f8cb849-zvqkt" Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.883042 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-zvqkt"] Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.949658 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7f876d3-b77c-46b7-98da-23948f79fd05-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6f8cb849-zvqkt\" (UID: \"a7f876d3-b77c-46b7-98da-23948f79fd05\") " pod="openstack/dnsmasq-dns-6f6f8cb849-zvqkt" Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.953510 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7f876d3-b77c-46b7-98da-23948f79fd05-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6f8cb849-zvqkt\" (UID: \"a7f876d3-b77c-46b7-98da-23948f79fd05\") " pod="openstack/dnsmasq-dns-6f6f8cb849-zvqkt" Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.953915 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/12e11571-a021-4df2-a0da-69f56335a8c8-log-httpd\") pod \"ceilometer-0\" (UID: \"12e11571-a021-4df2-a0da-69f56335a8c8\") " pod="openstack/ceilometer-0" Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.954006 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7f876d3-b77c-46b7-98da-23948f79fd05-dns-svc\") pod \"dnsmasq-dns-6f6f8cb849-zvqkt\" (UID: \"a7f876d3-b77c-46b7-98da-23948f79fd05\") " pod="openstack/dnsmasq-dns-6f6f8cb849-zvqkt" Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.954124 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsv4g\" (UniqueName: \"kubernetes.io/projected/a7f876d3-b77c-46b7-98da-23948f79fd05-kube-api-access-hsv4g\") pod \"dnsmasq-dns-6f6f8cb849-zvqkt\" (UID: \"a7f876d3-b77c-46b7-98da-23948f79fd05\") " pod="openstack/dnsmasq-dns-6f6f8cb849-zvqkt" Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.954208 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12e11571-a021-4df2-a0da-69f56335a8c8-config-data\") pod \"ceilometer-0\" (UID: \"12e11571-a021-4df2-a0da-69f56335a8c8\") " pod="openstack/ceilometer-0" Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.954269 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/12e11571-a021-4df2-a0da-69f56335a8c8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"12e11571-a021-4df2-a0da-69f56335a8c8\") " pod="openstack/ceilometer-0" Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.954529 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/12e11571-a021-4df2-a0da-69f56335a8c8-run-httpd\") pod \"ceilometer-0\" (UID: \"12e11571-a021-4df2-a0da-69f56335a8c8\") " pod="openstack/ceilometer-0" Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.954737 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12e11571-a021-4df2-a0da-69f56335a8c8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"12e11571-a021-4df2-a0da-69f56335a8c8\") " pod="openstack/ceilometer-0" Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.954778 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqthd\" (UniqueName: \"kubernetes.io/projected/12e11571-a021-4df2-a0da-69f56335a8c8-kube-api-access-sqthd\") pod \"ceilometer-0\" (UID: \"12e11571-a021-4df2-a0da-69f56335a8c8\") " pod="openstack/ceilometer-0" Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.954814 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7f876d3-b77c-46b7-98da-23948f79fd05-config\") pod \"dnsmasq-dns-6f6f8cb849-zvqkt\" (UID: \"a7f876d3-b77c-46b7-98da-23948f79fd05\") " pod="openstack/dnsmasq-dns-6f6f8cb849-zvqkt" Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.954862 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12e11571-a021-4df2-a0da-69f56335a8c8-scripts\") pod \"ceilometer-0\" (UID: \"12e11571-a021-4df2-a0da-69f56335a8c8\") " pod="openstack/ceilometer-0" Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.954892 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a7f876d3-b77c-46b7-98da-23948f79fd05-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6f8cb849-zvqkt\" (UID: \"a7f876d3-b77c-46b7-98da-23948f79fd05\") " pod="openstack/dnsmasq-dns-6f6f8cb849-zvqkt" Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.955201 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/12e11571-a021-4df2-a0da-69f56335a8c8-log-httpd\") pod \"ceilometer-0\" (UID: \"12e11571-a021-4df2-a0da-69f56335a8c8\") " pod="openstack/ceilometer-0" Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.957165 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/12e11571-a021-4df2-a0da-69f56335a8c8-run-httpd\") pod \"ceilometer-0\" (UID: \"12e11571-a021-4df2-a0da-69f56335a8c8\") " pod="openstack/ceilometer-0" Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.959406 4893 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/12e11571-a021-4df2-a0da-69f56335a8c8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"12e11571-a021-4df2-a0da-69f56335a8c8\") " pod="openstack/ceilometer-0" Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.960864 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12e11571-a021-4df2-a0da-69f56335a8c8-scripts\") pod \"ceilometer-0\" (UID: \"12e11571-a021-4df2-a0da-69f56335a8c8\") " pod="openstack/ceilometer-0" Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.966897 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12e11571-a021-4df2-a0da-69f56335a8c8-config-data\") pod \"ceilometer-0\" (UID: \"12e11571-a021-4df2-a0da-69f56335a8c8\") " pod="openstack/ceilometer-0" Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.968709 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12e11571-a021-4df2-a0da-69f56335a8c8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"12e11571-a021-4df2-a0da-69f56335a8c8\") " pod="openstack/ceilometer-0" Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.980393 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqthd\" (UniqueName: \"kubernetes.io/projected/12e11571-a021-4df2-a0da-69f56335a8c8-kube-api-access-sqthd\") pod \"ceilometer-0\" (UID: \"12e11571-a021-4df2-a0da-69f56335a8c8\") " pod="openstack/ceilometer-0" Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.980727 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-m2n6w" Jan 21 07:18:06 crc kubenswrapper[4893]: I0121 07:18:06.996967 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-ktqbh" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.061026 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7f876d3-b77c-46b7-98da-23948f79fd05-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6f8cb849-zvqkt\" (UID: \"a7f876d3-b77c-46b7-98da-23948f79fd05\") " pod="openstack/dnsmasq-dns-6f6f8cb849-zvqkt" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.061404 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7f876d3-b77c-46b7-98da-23948f79fd05-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6f8cb849-zvqkt\" (UID: \"a7f876d3-b77c-46b7-98da-23948f79fd05\") " pod="openstack/dnsmasq-dns-6f6f8cb849-zvqkt" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.061523 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7f876d3-b77c-46b7-98da-23948f79fd05-dns-svc\") pod \"dnsmasq-dns-6f6f8cb849-zvqkt\" (UID: \"a7f876d3-b77c-46b7-98da-23948f79fd05\") " pod="openstack/dnsmasq-dns-6f6f8cb849-zvqkt" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.061588 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsv4g\" (UniqueName: \"kubernetes.io/projected/a7f876d3-b77c-46b7-98da-23948f79fd05-kube-api-access-hsv4g\") pod \"dnsmasq-dns-6f6f8cb849-zvqkt\" (UID: \"a7f876d3-b77c-46b7-98da-23948f79fd05\") " pod="openstack/dnsmasq-dns-6f6f8cb849-zvqkt" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.061836 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7f876d3-b77c-46b7-98da-23948f79fd05-config\") pod \"dnsmasq-dns-6f6f8cb849-zvqkt\" (UID: \"a7f876d3-b77c-46b7-98da-23948f79fd05\") " pod="openstack/dnsmasq-dns-6f6f8cb849-zvqkt" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.061869 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a7f876d3-b77c-46b7-98da-23948f79fd05-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6f8cb849-zvqkt\" (UID: \"a7f876d3-b77c-46b7-98da-23948f79fd05\") " pod="openstack/dnsmasq-dns-6f6f8cb849-zvqkt" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.062783 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a7f876d3-b77c-46b7-98da-23948f79fd05-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6f8cb849-zvqkt\" (UID: \"a7f876d3-b77c-46b7-98da-23948f79fd05\") " pod="openstack/dnsmasq-dns-6f6f8cb849-zvqkt" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.063291 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7f876d3-b77c-46b7-98da-23948f79fd05-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6f8cb849-zvqkt\" (UID: \"a7f876d3-b77c-46b7-98da-23948f79fd05\") " pod="openstack/dnsmasq-dns-6f6f8cb849-zvqkt" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.064088 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7f876d3-b77c-46b7-98da-23948f79fd05-config\") pod \"dnsmasq-dns-6f6f8cb849-zvqkt\" (UID: \"a7f876d3-b77c-46b7-98da-23948f79fd05\") " pod="openstack/dnsmasq-dns-6f6f8cb849-zvqkt" Jan 21 07:18:07 crc 
kubenswrapper[4893]: I0121 07:18:07.064107 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7f876d3-b77c-46b7-98da-23948f79fd05-dns-svc\") pod \"dnsmasq-dns-6f6f8cb849-zvqkt\" (UID: \"a7f876d3-b77c-46b7-98da-23948f79fd05\") " pod="openstack/dnsmasq-dns-6f6f8cb849-zvqkt" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.064746 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7f876d3-b77c-46b7-98da-23948f79fd05-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6f8cb849-zvqkt\" (UID: \"a7f876d3-b77c-46b7-98da-23948f79fd05\") " pod="openstack/dnsmasq-dns-6f6f8cb849-zvqkt" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.086166 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsv4g\" (UniqueName: \"kubernetes.io/projected/a7f876d3-b77c-46b7-98da-23948f79fd05-kube-api-access-hsv4g\") pod \"dnsmasq-dns-6f6f8cb849-zvqkt\" (UID: \"a7f876d3-b77c-46b7-98da-23948f79fd05\") " pod="openstack/dnsmasq-dns-6f6f8cb849-zvqkt" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.152566 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-767d96458c-4g2kt"] Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.254747 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.256454 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.259089 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.267660 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.267702 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.267992 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-v2k8v" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.273715 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-chp5k"] Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.275958 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f6f8cb849-zvqkt" Jan 21 07:18:07 crc kubenswrapper[4893]: W0121 07:18:07.276236 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16af4454_341e_4a5b_8203_0514725f1cbe.slice/crio-c5132566fd1e11dcf2058ec30a68d7692b454c384e94de90378ea705cbe16ecf WatchSource:0}: Error finding container c5132566fd1e11dcf2058ec30a68d7692b454c384e94de90378ea705cbe16ecf: Status 404 returned error can't find the container with id c5132566fd1e11dcf2058ec30a68d7692b454c384e94de90378ea705cbe16ecf Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.298981 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.368963 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0173c021-90bc-403d-9cc1-d21f06bf9b5c-logs\") pod \"glance-default-external-api-0\" (UID: \"0173c021-90bc-403d-9cc1-d21f06bf9b5c\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.369287 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8lnz\" (UniqueName: \"kubernetes.io/projected/0173c021-90bc-403d-9cc1-d21f06bf9b5c-kube-api-access-q8lnz\") pod \"glance-default-external-api-0\" (UID: \"0173c021-90bc-403d-9cc1-d21f06bf9b5c\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.369318 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"0173c021-90bc-403d-9cc1-d21f06bf9b5c\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.369392 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0173c021-90bc-403d-9cc1-d21f06bf9b5c-scripts\") pod \"glance-default-external-api-0\" (UID: \"0173c021-90bc-403d-9cc1-d21f06bf9b5c\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.369453 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0173c021-90bc-403d-9cc1-d21f06bf9b5c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0173c021-90bc-403d-9cc1-d21f06bf9b5c\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.369487 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0173c021-90bc-403d-9cc1-d21f06bf9b5c-config-data\") pod \"glance-default-external-api-0\" (UID: \"0173c021-90bc-403d-9cc1-d21f06bf9b5c\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.369526 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0173c021-90bc-403d-9cc1-d21f06bf9b5c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0173c021-90bc-403d-9cc1-d21f06bf9b5c\") " 
pod="openstack/glance-default-external-api-0" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.424520 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-wrnx6"] Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.466598 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.471246 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8lnz\" (UniqueName: \"kubernetes.io/projected/0173c021-90bc-403d-9cc1-d21f06bf9b5c-kube-api-access-q8lnz\") pod \"glance-default-external-api-0\" (UID: \"0173c021-90bc-403d-9cc1-d21f06bf9b5c\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.471308 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"0173c021-90bc-403d-9cc1-d21f06bf9b5c\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.471366 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0173c021-90bc-403d-9cc1-d21f06bf9b5c-scripts\") pod \"glance-default-external-api-0\" (UID: \"0173c021-90bc-403d-9cc1-d21f06bf9b5c\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.471432 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0173c021-90bc-403d-9cc1-d21f06bf9b5c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0173c021-90bc-403d-9cc1-d21f06bf9b5c\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.471472 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0173c021-90bc-403d-9cc1-d21f06bf9b5c-config-data\") pod \"glance-default-external-api-0\" (UID: \"0173c021-90bc-403d-9cc1-d21f06bf9b5c\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.471520 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0173c021-90bc-403d-9cc1-d21f06bf9b5c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0173c021-90bc-403d-9cc1-d21f06bf9b5c\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.471569 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0173c021-90bc-403d-9cc1-d21f06bf9b5c-logs\") pod \"glance-default-external-api-0\" (UID: \"0173c021-90bc-403d-9cc1-d21f06bf9b5c\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.472116 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0173c021-90bc-403d-9cc1-d21f06bf9b5c-logs\") pod \"glance-default-external-api-0\" (UID: \"0173c021-90bc-403d-9cc1-d21f06bf9b5c\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.472194 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.473019 4893 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"0173c021-90bc-403d-9cc1-d21f06bf9b5c\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.475802 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0173c021-90bc-403d-9cc1-d21f06bf9b5c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0173c021-90bc-403d-9cc1-d21f06bf9b5c\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.478523 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.485002 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0173c021-90bc-403d-9cc1-d21f06bf9b5c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0173c021-90bc-403d-9cc1-d21f06bf9b5c\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.488903 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0173c021-90bc-403d-9cc1-d21f06bf9b5c-config-data\") pod \"glance-default-external-api-0\" (UID: \"0173c021-90bc-403d-9cc1-d21f06bf9b5c\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.491981 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0173c021-90bc-403d-9cc1-d21f06bf9b5c-scripts\") pod \"glance-default-external-api-0\" (UID: \"0173c021-90bc-403d-9cc1-d21f06bf9b5c\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.497103 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-chp5k" event={"ID":"16af4454-341e-4a5b-8203-0514725f1cbe","Type":"ContainerStarted","Data":"c5132566fd1e11dcf2058ec30a68d7692b454c384e94de90378ea705cbe16ecf"} Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.506200 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-wrnx6" event={"ID":"5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619","Type":"ContainerStarted","Data":"dfc7e157f158e3ce60226ca97fc4982b9d7209482bad9a0b71d5a1ee31c86d25"} Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.515203 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8lnz\" (UniqueName: \"kubernetes.io/projected/0173c021-90bc-403d-9cc1-d21f06bf9b5c-kube-api-access-q8lnz\") pod \"glance-default-external-api-0\" (UID: \"0173c021-90bc-403d-9cc1-d21f06bf9b5c\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.531159 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"0173c021-90bc-403d-9cc1-d21f06bf9b5c\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:07 crc 
kubenswrapper[4893]: I0121 07:18:07.537147 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fdbfbc95f-q4h9m" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.537429 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-767d96458c-4g2kt" event={"ID":"697c9051-8a48-42e4-bd9d-e6cf287869a9","Type":"ContainerStarted","Data":"84a61e3b9f9b0fcae65783a6901d84e624efe7e51ff5ec0b53d281a754e8ac2b"} Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.539482 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.572965 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpchw\" (UniqueName: \"kubernetes.io/projected/f9793c37-f939-4a34-8bb0-8a0466d6f8c1-kube-api-access-zpchw\") pod \"glance-default-internal-api-0\" (UID: \"f9793c37-f939-4a34-8bb0-8a0466d6f8c1\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.573032 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9793c37-f939-4a34-8bb0-8a0466d6f8c1-logs\") pod \"glance-default-internal-api-0\" (UID: \"f9793c37-f939-4a34-8bb0-8a0466d6f8c1\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.573098 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9793c37-f939-4a34-8bb0-8a0466d6f8c1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f9793c37-f939-4a34-8bb0-8a0466d6f8c1\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.573135 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9793c37-f939-4a34-8bb0-8a0466d6f8c1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f9793c37-f939-4a34-8bb0-8a0466d6f8c1\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.573180 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f9793c37-f939-4a34-8bb0-8a0466d6f8c1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f9793c37-f939-4a34-8bb0-8a0466d6f8c1\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.573258 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9793c37-f939-4a34-8bb0-8a0466d6f8c1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f9793c37-f939-4a34-8bb0-8a0466d6f8c1\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.573391 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"f9793c37-f939-4a34-8bb0-8a0466d6f8c1\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.578782 4893 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/barbican-db-sync-qffvz"] Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.586264 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.604186 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-m2n6w"] Jan 21 07:18:07 crc kubenswrapper[4893]: W0121 07:18:07.615364 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ae6eb33_b5b8_4ed9_a227_b96f365a49a3.slice/crio-f1543e243bd8d94b0d30083f09927b91eb4393bc6be06b7ce9545fb073b2ab38 WatchSource:0}: Error finding container f1543e243bd8d94b0d30083f09927b91eb4393bc6be06b7ce9545fb073b2ab38: Status 404 returned error can't find the container with id f1543e243bd8d94b0d30083f09927b91eb4393bc6be06b7ce9545fb073b2ab38 Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.625934 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-zvqkt"] Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.627229 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fdbfbc95f-q4h9m" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.655990 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-ktqbh"] Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.674180 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf9f805e-56e2-4faf-a5ce-ead1041ef589-config\") pod \"cf9f805e-56e2-4faf-a5ce-ead1041ef589\" (UID: \"cf9f805e-56e2-4faf-a5ce-ead1041ef589\") " Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.674305 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcd8j\" (UniqueName: \"kubernetes.io/projected/cf9f805e-56e2-4faf-a5ce-ead1041ef589-kube-api-access-hcd8j\") pod \"cf9f805e-56e2-4faf-a5ce-ead1041ef589\" (UID: \"cf9f805e-56e2-4faf-a5ce-ead1041ef589\") " Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.674341 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cf9f805e-56e2-4faf-a5ce-ead1041ef589-dns-swift-storage-0\") pod \"cf9f805e-56e2-4faf-a5ce-ead1041ef589\" (UID: \"cf9f805e-56e2-4faf-a5ce-ead1041ef589\") " Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.674412 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf9f805e-56e2-4faf-a5ce-ead1041ef589-ovsdbserver-sb\") pod \"cf9f805e-56e2-4faf-a5ce-ead1041ef589\" (UID: \"cf9f805e-56e2-4faf-a5ce-ead1041ef589\") " Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.674491 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf9f805e-56e2-4faf-a5ce-ead1041ef589-dns-svc\") pod \"cf9f805e-56e2-4faf-a5ce-ead1041ef589\" (UID: \"cf9f805e-56e2-4faf-a5ce-ead1041ef589\") " Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.674796 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf9f805e-56e2-4faf-a5ce-ead1041ef589-ovsdbserver-nb\") pod \"cf9f805e-56e2-4faf-a5ce-ead1041ef589\" (UID: 
\"cf9f805e-56e2-4faf-a5ce-ead1041ef589\") " Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.675534 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"f9793c37-f939-4a34-8bb0-8a0466d6f8c1\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.675590 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpchw\" (UniqueName: \"kubernetes.io/projected/f9793c37-f939-4a34-8bb0-8a0466d6f8c1-kube-api-access-zpchw\") pod \"glance-default-internal-api-0\" (UID: \"f9793c37-f939-4a34-8bb0-8a0466d6f8c1\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.675615 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9793c37-f939-4a34-8bb0-8a0466d6f8c1-logs\") pod \"glance-default-internal-api-0\" (UID: \"f9793c37-f939-4a34-8bb0-8a0466d6f8c1\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.675658 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9793c37-f939-4a34-8bb0-8a0466d6f8c1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f9793c37-f939-4a34-8bb0-8a0466d6f8c1\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.675761 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9793c37-f939-4a34-8bb0-8a0466d6f8c1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f9793c37-f939-4a34-8bb0-8a0466d6f8c1\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.675803 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f9793c37-f939-4a34-8bb0-8a0466d6f8c1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f9793c37-f939-4a34-8bb0-8a0466d6f8c1\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.675893 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9793c37-f939-4a34-8bb0-8a0466d6f8c1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f9793c37-f939-4a34-8bb0-8a0466d6f8c1\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.678031 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf9f805e-56e2-4faf-a5ce-ead1041ef589-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cf9f805e-56e2-4faf-a5ce-ead1041ef589" (UID: "cf9f805e-56e2-4faf-a5ce-ead1041ef589"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.679386 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf9f805e-56e2-4faf-a5ce-ead1041ef589-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cf9f805e-56e2-4faf-a5ce-ead1041ef589" (UID: "cf9f805e-56e2-4faf-a5ce-ead1041ef589"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.679742 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf9f805e-56e2-4faf-a5ce-ead1041ef589-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cf9f805e-56e2-4faf-a5ce-ead1041ef589" (UID: "cf9f805e-56e2-4faf-a5ce-ead1041ef589"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.680122 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f9793c37-f939-4a34-8bb0-8a0466d6f8c1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f9793c37-f939-4a34-8bb0-8a0466d6f8c1\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.680893 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf9f805e-56e2-4faf-a5ce-ead1041ef589-config" (OuterVolumeSpecName: "config") pod "cf9f805e-56e2-4faf-a5ce-ead1041ef589" (UID: "cf9f805e-56e2-4faf-a5ce-ead1041ef589"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.681764 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf9f805e-56e2-4faf-a5ce-ead1041ef589-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cf9f805e-56e2-4faf-a5ce-ead1041ef589" (UID: "cf9f805e-56e2-4faf-a5ce-ead1041ef589"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.682527 4893 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"f9793c37-f939-4a34-8bb0-8a0466d6f8c1\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.683339 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf9f805e-56e2-4faf-a5ce-ead1041ef589-kube-api-access-hcd8j" (OuterVolumeSpecName: "kube-api-access-hcd8j") pod "cf9f805e-56e2-4faf-a5ce-ead1041ef589" (UID: "cf9f805e-56e2-4faf-a5ce-ead1041ef589"). InnerVolumeSpecName "kube-api-access-hcd8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.685994 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9793c37-f939-4a34-8bb0-8a0466d6f8c1-logs\") pod \"glance-default-internal-api-0\" (UID: \"f9793c37-f939-4a34-8bb0-8a0466d6f8c1\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.689433 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9793c37-f939-4a34-8bb0-8a0466d6f8c1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f9793c37-f939-4a34-8bb0-8a0466d6f8c1\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.690202 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9793c37-f939-4a34-8bb0-8a0466d6f8c1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f9793c37-f939-4a34-8bb0-8a0466d6f8c1\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.690552 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9793c37-f939-4a34-8bb0-8a0466d6f8c1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f9793c37-f939-4a34-8bb0-8a0466d6f8c1\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.713086 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpchw\" (UniqueName: \"kubernetes.io/projected/f9793c37-f939-4a34-8bb0-8a0466d6f8c1-kube-api-access-zpchw\") pod \"glance-default-internal-api-0\" (UID: \"f9793c37-f939-4a34-8bb0-8a0466d6f8c1\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.762735 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"f9793c37-f939-4a34-8bb0-8a0466d6f8c1\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.790755 4893 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf9f805e-56e2-4faf-a5ce-ead1041ef589-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.790787 4893 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf9f805e-56e2-4faf-a5ce-ead1041ef589-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.790795 4893 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf9f805e-56e2-4faf-a5ce-ead1041ef589-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.790807 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf9f805e-56e2-4faf-a5ce-ead1041ef589-config\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.790817 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcd8j\" (UniqueName: 
\"kubernetes.io/projected/cf9f805e-56e2-4faf-a5ce-ead1041ef589-kube-api-access-hcd8j\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.790825 4893 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cf9f805e-56e2-4faf-a5ce-ead1041ef589-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:07 crc kubenswrapper[4893]: I0121 07:18:07.821575 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 07:18:08 crc kubenswrapper[4893]: I0121 07:18:08.191813 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 07:18:08 crc kubenswrapper[4893]: I0121 07:18:08.577685 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-ktqbh" event={"ID":"9a2b7c6d-f80f-4ae0-9628-30dd29e491fe","Type":"ContainerStarted","Data":"4d0f1aea554b54a0a7ae76b8fd8f57cd0f4e5b574c54eaad767d70dfd846a432"} Jan 21 07:18:08 crc kubenswrapper[4893]: I0121 07:18:08.602297 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 07:18:08 crc kubenswrapper[4893]: I0121 07:18:08.612598 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"12e11571-a021-4df2-a0da-69f56335a8c8","Type":"ContainerStarted","Data":"cd946dadd2c547b19b1c419054a0df8bf9ac0fae659eced9d7442cabb16fe2f3"} Jan 21 07:18:08 crc kubenswrapper[4893]: I0121 07:18:08.626425 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-chp5k" event={"ID":"16af4454-341e-4a5b-8203-0514725f1cbe","Type":"ContainerStarted","Data":"7733667fd7c8cd7e55ff97256302b135ad30aee18a32d7e538c9ea12108ae3b1"} Jan 21 07:18:08 crc kubenswrapper[4893]: I0121 07:18:08.635945 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-qffvz" event={"ID":"3ae6eb33-b5b8-4ed9-a227-b96f365a49a3","Type":"ContainerStarted","Data":"f1543e243bd8d94b0d30083f09927b91eb4393bc6be06b7ce9545fb073b2ab38"} Jan 21 07:18:08 crc kubenswrapper[4893]: I0121 07:18:08.654698 4893 generic.go:334] "Generic (PLEG): container finished" podID="a7f876d3-b77c-46b7-98da-23948f79fd05" containerID="f8153be162818e61258a1cfeae6bd61fac1aadb903a09f08d8e3e896a1188ca6" exitCode=0 Jan 21 07:18:08 crc kubenswrapper[4893]: I0121 07:18:08.654770 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-zvqkt" event={"ID":"a7f876d3-b77c-46b7-98da-23948f79fd05","Type":"ContainerDied","Data":"f8153be162818e61258a1cfeae6bd61fac1aadb903a09f08d8e3e896a1188ca6"} Jan 21 07:18:08 crc kubenswrapper[4893]: I0121 07:18:08.654797 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-zvqkt" event={"ID":"a7f876d3-b77c-46b7-98da-23948f79fd05","Type":"ContainerStarted","Data":"c8959effe56217d2a7dce3d9e655e5ab095256ccf7c718c49ff3afcd9040de10"} Jan 21 07:18:08 crc kubenswrapper[4893]: I0121 07:18:08.663689 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-m2n6w" event={"ID":"15bb49fe-ded6-45cb-b094-05da46c3f9e8","Type":"ContainerStarted","Data":"0caf1583fe45f9478ec6a759fa630eecb7221966a84a7d22d76435c4e7d1fba1"} Jan 21 07:18:08 crc kubenswrapper[4893]: I0121 07:18:08.663745 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-m2n6w" 
event={"ID":"15bb49fe-ded6-45cb-b094-05da46c3f9e8","Type":"ContainerStarted","Data":"6de305cc6b456bebb453a123d4c0fd98b56145224b1c188c53ada3bf5982660d"} Jan 21 07:18:08 crc kubenswrapper[4893]: I0121 07:18:08.669173 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-chp5k" podStartSLOduration=3.669148763 podStartE2EDuration="3.669148763s" podCreationTimestamp="2026-01-21 07:18:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:18:08.646435883 +0000 UTC m=+1429.876781785" watchObservedRunningTime="2026-01-21 07:18:08.669148763 +0000 UTC m=+1429.899494665" Jan 21 07:18:08 crc kubenswrapper[4893]: I0121 07:18:08.694188 4893 generic.go:334] "Generic (PLEG): container finished" podID="697c9051-8a48-42e4-bd9d-e6cf287869a9" containerID="619c22d44ffffc6c7f872c4266634c8b6dc059356b2cb0c6592e2a0a2f78d411" exitCode=0 Jan 21 07:18:08 crc kubenswrapper[4893]: I0121 07:18:08.694271 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fdbfbc95f-q4h9m" Jan 21 07:18:08 crc kubenswrapper[4893]: I0121 07:18:08.694880 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-767d96458c-4g2kt" event={"ID":"697c9051-8a48-42e4-bd9d-e6cf287869a9","Type":"ContainerDied","Data":"619c22d44ffffc6c7f872c4266634c8b6dc059356b2cb0c6592e2a0a2f78d411"} Jan 21 07:18:08 crc kubenswrapper[4893]: I0121 07:18:08.719395 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-m2n6w" podStartSLOduration=2.719369151 podStartE2EDuration="2.719369151s" podCreationTimestamp="2026-01-21 07:18:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:18:08.716762586 +0000 UTC m=+1429.947108488" watchObservedRunningTime="2026-01-21 07:18:08.719369151 +0000 UTC m=+1429.949715053" Jan 21 07:18:08 crc kubenswrapper[4893]: I0121 07:18:08.837735 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 07:18:08 crc kubenswrapper[4893]: I0121 07:18:08.909246 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fdbfbc95f-q4h9m"] Jan 21 07:18:08 crc kubenswrapper[4893]: I0121 07:18:08.917220 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5fdbfbc95f-q4h9m"] Jan 21 07:18:09 crc kubenswrapper[4893]: I0121 07:18:09.221371 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-767d96458c-4g2kt" Jan 21 07:18:09 crc kubenswrapper[4893]: I0121 07:18:09.298504 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/697c9051-8a48-42e4-bd9d-e6cf287869a9-ovsdbserver-sb\") pod \"697c9051-8a48-42e4-bd9d-e6cf287869a9\" (UID: \"697c9051-8a48-42e4-bd9d-e6cf287869a9\") " Jan 21 07:18:09 crc kubenswrapper[4893]: I0121 07:18:09.298627 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/697c9051-8a48-42e4-bd9d-e6cf287869a9-dns-swift-storage-0\") pod \"697c9051-8a48-42e4-bd9d-e6cf287869a9\" (UID: \"697c9051-8a48-42e4-bd9d-e6cf287869a9\") " Jan 21 07:18:09 crc kubenswrapper[4893]: I0121 07:18:09.298713 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pv8wp\" (UniqueName: \"kubernetes.io/projected/697c9051-8a48-42e4-bd9d-e6cf287869a9-kube-api-access-pv8wp\") pod \"697c9051-8a48-42e4-bd9d-e6cf287869a9\" (UID: \"697c9051-8a48-42e4-bd9d-e6cf287869a9\") " Jan 21 07:18:09 crc kubenswrapper[4893]: I0121 07:18:09.298770 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/697c9051-8a48-42e4-bd9d-e6cf287869a9-ovsdbserver-nb\") pod \"697c9051-8a48-42e4-bd9d-e6cf287869a9\" (UID: \"697c9051-8a48-42e4-bd9d-e6cf287869a9\") " Jan 21 07:18:09 crc kubenswrapper[4893]: I0121 07:18:09.298820 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/697c9051-8a48-42e4-bd9d-e6cf287869a9-config\") pod \"697c9051-8a48-42e4-bd9d-e6cf287869a9\" (UID: \"697c9051-8a48-42e4-bd9d-e6cf287869a9\") " Jan 21 07:18:09 crc kubenswrapper[4893]: I0121 07:18:09.298843 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/697c9051-8a48-42e4-bd9d-e6cf287869a9-dns-svc\") pod \"697c9051-8a48-42e4-bd9d-e6cf287869a9\" (UID: \"697c9051-8a48-42e4-bd9d-e6cf287869a9\") " Jan 21 07:18:09 crc kubenswrapper[4893]: I0121 07:18:09.308756 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/697c9051-8a48-42e4-bd9d-e6cf287869a9-kube-api-access-pv8wp" (OuterVolumeSpecName: "kube-api-access-pv8wp") pod "697c9051-8a48-42e4-bd9d-e6cf287869a9" (UID: "697c9051-8a48-42e4-bd9d-e6cf287869a9"). InnerVolumeSpecName "kube-api-access-pv8wp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:18:09 crc kubenswrapper[4893]: I0121 07:18:09.328475 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/697c9051-8a48-42e4-bd9d-e6cf287869a9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "697c9051-8a48-42e4-bd9d-e6cf287869a9" (UID: "697c9051-8a48-42e4-bd9d-e6cf287869a9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:18:09 crc kubenswrapper[4893]: I0121 07:18:09.355744 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/697c9051-8a48-42e4-bd9d-e6cf287869a9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "697c9051-8a48-42e4-bd9d-e6cf287869a9" (UID: "697c9051-8a48-42e4-bd9d-e6cf287869a9"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:18:09 crc kubenswrapper[4893]: I0121 07:18:09.357878 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/697c9051-8a48-42e4-bd9d-e6cf287869a9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "697c9051-8a48-42e4-bd9d-e6cf287869a9" (UID: "697c9051-8a48-42e4-bd9d-e6cf287869a9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:18:09 crc kubenswrapper[4893]: I0121 07:18:09.358926 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/697c9051-8a48-42e4-bd9d-e6cf287869a9-config" (OuterVolumeSpecName: "config") pod "697c9051-8a48-42e4-bd9d-e6cf287869a9" (UID: "697c9051-8a48-42e4-bd9d-e6cf287869a9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:18:09 crc kubenswrapper[4893]: I0121 07:18:09.365229 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/697c9051-8a48-42e4-bd9d-e6cf287869a9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "697c9051-8a48-42e4-bd9d-e6cf287869a9" (UID: "697c9051-8a48-42e4-bd9d-e6cf287869a9"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:18:09 crc kubenswrapper[4893]: I0121 07:18:09.494884 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pv8wp\" (UniqueName: \"kubernetes.io/projected/697c9051-8a48-42e4-bd9d-e6cf287869a9-kube-api-access-pv8wp\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:09 crc kubenswrapper[4893]: I0121 07:18:09.494934 4893 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/697c9051-8a48-42e4-bd9d-e6cf287869a9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:09 crc kubenswrapper[4893]: I0121 07:18:09.494947 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/697c9051-8a48-42e4-bd9d-e6cf287869a9-config\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:09 crc kubenswrapper[4893]: I0121 07:18:09.494958 4893 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/697c9051-8a48-42e4-bd9d-e6cf287869a9-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:09 crc kubenswrapper[4893]: I0121 07:18:09.494969 4893 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/697c9051-8a48-42e4-bd9d-e6cf287869a9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:09 crc kubenswrapper[4893]: I0121 07:18:09.494978 4893 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/697c9051-8a48-42e4-bd9d-e6cf287869a9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:09 crc kubenswrapper[4893]: I0121 07:18:09.593340 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf9f805e-56e2-4faf-a5ce-ead1041ef589" path="/var/lib/kubelet/pods/cf9f805e-56e2-4faf-a5ce-ead1041ef589/volumes" Jan 21 07:18:09 crc kubenswrapper[4893]: I0121 07:18:09.718494 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f9793c37-f939-4a34-8bb0-8a0466d6f8c1","Type":"ContainerStarted","Data":"f258733740344b4f8ff62097851f4a6bdf0d25f46cc758c19c2149a8d42db050"} Jan 21 
07:18:09 crc kubenswrapper[4893]: I0121 07:18:09.733350 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-767d96458c-4g2kt" Jan 21 07:18:09 crc kubenswrapper[4893]: I0121 07:18:09.733359 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-767d96458c-4g2kt" event={"ID":"697c9051-8a48-42e4-bd9d-e6cf287869a9","Type":"ContainerDied","Data":"84a61e3b9f9b0fcae65783a6901d84e624efe7e51ff5ec0b53d281a754e8ac2b"} Jan 21 07:18:09 crc kubenswrapper[4893]: I0121 07:18:09.733498 4893 scope.go:117] "RemoveContainer" containerID="619c22d44ffffc6c7f872c4266634c8b6dc059356b2cb0c6592e2a0a2f78d411" Jan 21 07:18:09 crc kubenswrapper[4893]: I0121 07:18:09.749873 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-zvqkt" event={"ID":"a7f876d3-b77c-46b7-98da-23948f79fd05","Type":"ContainerStarted","Data":"4e66118fe3ca53fd7b00317ae3e4293c8ae81730076d34e289f44621e7ece618"} Jan 21 07:18:09 crc kubenswrapper[4893]: I0121 07:18:09.751808 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f6f8cb849-zvqkt" Jan 21 07:18:09 crc kubenswrapper[4893]: I0121 07:18:09.759958 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0173c021-90bc-403d-9cc1-d21f06bf9b5c","Type":"ContainerStarted","Data":"c97b7f97a6a141a4f3367501823ed09ff45e6a1c050169dee6b90b1628c97c33"} Jan 21 07:18:09 crc kubenswrapper[4893]: I0121 07:18:09.792416 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f6f8cb849-zvqkt" podStartSLOduration=3.792395454 podStartE2EDuration="3.792395454s" podCreationTimestamp="2026-01-21 07:18:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:18:09.778681835 +0000 UTC m=+1431.009027737" watchObservedRunningTime="2026-01-21 07:18:09.792395454 +0000 UTC m=+1431.022741356" Jan 21 07:18:09 crc kubenswrapper[4893]: I0121 07:18:09.864694 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-767d96458c-4g2kt"] Jan 21 07:18:09 crc kubenswrapper[4893]: I0121 07:18:09.871723 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-767d96458c-4g2kt"] Jan 21 07:18:10 crc kubenswrapper[4893]: I0121 07:18:10.479747 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 07:18:10 crc kubenswrapper[4893]: I0121 07:18:10.502564 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 07:18:10 crc kubenswrapper[4893]: I0121 07:18:10.595593 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 07:18:10 crc kubenswrapper[4893]: I0121 07:18:10.780031 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0173c021-90bc-403d-9cc1-d21f06bf9b5c","Type":"ContainerStarted","Data":"e4e7969f67b4d413618b35f1b0b72c0292225e31ce070f7e2d780bad9d7eb55b"} Jan 21 07:18:10 crc kubenswrapper[4893]: I0121 07:18:10.783377 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f9793c37-f939-4a34-8bb0-8a0466d6f8c1","Type":"ContainerStarted","Data":"3ce8ad622aa97f59b2421b9234cda238760c521b915e003f2148ad1e06580654"} Jan 21 07:18:11 crc kubenswrapper[4893]: I0121 07:18:11.714184 
4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="697c9051-8a48-42e4-bd9d-e6cf287869a9" path="/var/lib/kubelet/pods/697c9051-8a48-42e4-bd9d-e6cf287869a9/volumes" Jan 21 07:18:11 crc kubenswrapper[4893]: I0121 07:18:11.810478 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0173c021-90bc-403d-9cc1-d21f06bf9b5c","Type":"ContainerStarted","Data":"eb12a898ae0e946ee9080c35c4f4bde138117f24c1972357038e7286188366f4"} Jan 21 07:18:11 crc kubenswrapper[4893]: I0121 07:18:11.810631 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="0173c021-90bc-403d-9cc1-d21f06bf9b5c" containerName="glance-log" containerID="cri-o://e4e7969f67b4d413618b35f1b0b72c0292225e31ce070f7e2d780bad9d7eb55b" gracePeriod=30 Jan 21 07:18:11 crc kubenswrapper[4893]: I0121 07:18:11.810713 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="0173c021-90bc-403d-9cc1-d21f06bf9b5c" containerName="glance-httpd" containerID="cri-o://eb12a898ae0e946ee9080c35c4f4bde138117f24c1972357038e7286188366f4" gracePeriod=30 Jan 21 07:18:11 crc kubenswrapper[4893]: I0121 07:18:11.818458 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f9793c37-f939-4a34-8bb0-8a0466d6f8c1" containerName="glance-log" containerID="cri-o://3ce8ad622aa97f59b2421b9234cda238760c521b915e003f2148ad1e06580654" gracePeriod=30 Jan 21 07:18:11 crc kubenswrapper[4893]: I0121 07:18:11.818560 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f9793c37-f939-4a34-8bb0-8a0466d6f8c1","Type":"ContainerStarted","Data":"42f0c27dfd1a3a0907583ac0a46c979184952882d080079bfceebf3287e24ed7"} Jan 21 07:18:11 crc kubenswrapper[4893]: I0121 07:18:11.818630 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f9793c37-f939-4a34-8bb0-8a0466d6f8c1" containerName="glance-httpd" containerID="cri-o://42f0c27dfd1a3a0907583ac0a46c979184952882d080079bfceebf3287e24ed7" gracePeriod=30 Jan 21 07:18:11 crc kubenswrapper[4893]: I0121 07:18:11.837421 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.8373949750000005 podStartE2EDuration="5.837394975s" podCreationTimestamp="2026-01-21 07:18:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:18:11.829036163 +0000 UTC m=+1433.059382065" watchObservedRunningTime="2026-01-21 07:18:11.837394975 +0000 UTC m=+1433.067740877" Jan 21 07:18:11 crc kubenswrapper[4893]: I0121 07:18:11.858939 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.85890879 podStartE2EDuration="5.85890879s" podCreationTimestamp="2026-01-21 07:18:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:18:11.852949137 +0000 UTC m=+1433.083295059" watchObservedRunningTime="2026-01-21 07:18:11.85890879 +0000 UTC m=+1433.089254692"
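
The two "Observed pod startup duration" entries above make the tracker's arithmetic visible: with firstStartedPulling/lastFinishedPulling zero-valued (no image-pull time to subtract), podStartSLOduration reduces to watchObservedRunningTime minus podCreationTimestamp, e.g. 07:18:11.85890879 minus 07:18:06 = 5.85890879s for glance-default-internal-api-0; the external API's 5.8373949750000005 is the same subtraction showing ordinary float64 rounding. A minimal Go recomputation, assuming only the timestamp layout visible in the log text (a toy check, not kubelet code):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps copied from the glance-default-internal-api-0 entry above,
        // with the monotonic-clock suffix (m=+...) dropped before parsing.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, err := time.Parse(layout, "2026-01-21 07:18:06 +0000 UTC")
        if err != nil {
            panic(err)
        }
        observed, err := time.Parse(layout, "2026-01-21 07:18:11.85890879 +0000 UTC")
        if err != nil {
            panic(err)
        }
        // Prints podStartSLOduration=5.85890879, matching the log entry.
        fmt.Printf("podStartSLOduration=%.8f\n", observed.Sub(created).Seconds())
    }
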
Jan 21 07:18:12 crc kubenswrapper[4893]: I0121 07:18:12.725518 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 07:18:12 crc kubenswrapper[4893]: I0121 07:18:12.742179 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"0173c021-90bc-403d-9cc1-d21f06bf9b5c\" (UID: \"0173c021-90bc-403d-9cc1-d21f06bf9b5c\") " Jan 21 07:18:12 crc kubenswrapper[4893]: I0121 07:18:12.742286 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0173c021-90bc-403d-9cc1-d21f06bf9b5c-httpd-run\") pod \"0173c021-90bc-403d-9cc1-d21f06bf9b5c\" (UID: \"0173c021-90bc-403d-9cc1-d21f06bf9b5c\") " Jan 21 07:18:12 crc kubenswrapper[4893]: I0121 07:18:12.742332 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0173c021-90bc-403d-9cc1-d21f06bf9b5c-logs\") pod \"0173c021-90bc-403d-9cc1-d21f06bf9b5c\" (UID: \"0173c021-90bc-403d-9cc1-d21f06bf9b5c\") " Jan 21 07:18:12 crc kubenswrapper[4893]: I0121 07:18:12.742360 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0173c021-90bc-403d-9cc1-d21f06bf9b5c-combined-ca-bundle\") pod \"0173c021-90bc-403d-9cc1-d21f06bf9b5c\" (UID: \"0173c021-90bc-403d-9cc1-d21f06bf9b5c\") " Jan 21 07:18:12 crc kubenswrapper[4893]: I0121 07:18:12.742407 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0173c021-90bc-403d-9cc1-d21f06bf9b5c-config-data\") pod \"0173c021-90bc-403d-9cc1-d21f06bf9b5c\" (UID: \"0173c021-90bc-403d-9cc1-d21f06bf9b5c\") " Jan 21 07:18:12 crc kubenswrapper[4893]: I0121 07:18:12.742440 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8lnz\" (UniqueName: \"kubernetes.io/projected/0173c021-90bc-403d-9cc1-d21f06bf9b5c-kube-api-access-q8lnz\") pod \"0173c021-90bc-403d-9cc1-d21f06bf9b5c\" (UID: \"0173c021-90bc-403d-9cc1-d21f06bf9b5c\") " Jan 21 07:18:12 crc kubenswrapper[4893]: I0121 07:18:12.743750 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0173c021-90bc-403d-9cc1-d21f06bf9b5c-logs" (OuterVolumeSpecName: "logs") pod "0173c021-90bc-403d-9cc1-d21f06bf9b5c" (UID: "0173c021-90bc-403d-9cc1-d21f06bf9b5c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:18:12 crc kubenswrapper[4893]: I0121 07:18:12.748236 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "0173c021-90bc-403d-9cc1-d21f06bf9b5c" (UID: "0173c021-90bc-403d-9cc1-d21f06bf9b5c"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 07:18:12 crc kubenswrapper[4893]: I0121 07:18:12.749159 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0173c021-90bc-403d-9cc1-d21f06bf9b5c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0173c021-90bc-403d-9cc1-d21f06bf9b5c" (UID: "0173c021-90bc-403d-9cc1-d21f06bf9b5c"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:18:12 crc kubenswrapper[4893]: I0121 07:18:12.752409 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0173c021-90bc-403d-9cc1-d21f06bf9b5c-kube-api-access-q8lnz" (OuterVolumeSpecName: "kube-api-access-q8lnz") pod "0173c021-90bc-403d-9cc1-d21f06bf9b5c" (UID: "0173c021-90bc-403d-9cc1-d21f06bf9b5c"). InnerVolumeSpecName "kube-api-access-q8lnz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:18:12 crc kubenswrapper[4893]: I0121 07:18:12.783696 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 07:18:12 crc kubenswrapper[4893]: I0121 07:18:12.802929 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0173c021-90bc-403d-9cc1-d21f06bf9b5c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0173c021-90bc-403d-9cc1-d21f06bf9b5c" (UID: "0173c021-90bc-403d-9cc1-d21f06bf9b5c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:18:12 crc kubenswrapper[4893]: I0121 07:18:12.843787 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9793c37-f939-4a34-8bb0-8a0466d6f8c1-combined-ca-bundle\") pod \"f9793c37-f939-4a34-8bb0-8a0466d6f8c1\" (UID: \"f9793c37-f939-4a34-8bb0-8a0466d6f8c1\") " Jan 21 07:18:12 crc kubenswrapper[4893]: I0121 07:18:12.843857 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9793c37-f939-4a34-8bb0-8a0466d6f8c1-logs\") pod \"f9793c37-f939-4a34-8bb0-8a0466d6f8c1\" (UID: \"f9793c37-f939-4a34-8bb0-8a0466d6f8c1\") " Jan 21 07:18:12 crc kubenswrapper[4893]: I0121 07:18:12.844092 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f9793c37-f939-4a34-8bb0-8a0466d6f8c1-httpd-run\") pod \"f9793c37-f939-4a34-8bb0-8a0466d6f8c1\" (UID: \"f9793c37-f939-4a34-8bb0-8a0466d6f8c1\") " Jan 21 07:18:12 crc kubenswrapper[4893]: I0121 07:18:12.844177 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpchw\" (UniqueName: \"kubernetes.io/projected/f9793c37-f939-4a34-8bb0-8a0466d6f8c1-kube-api-access-zpchw\") pod \"f9793c37-f939-4a34-8bb0-8a0466d6f8c1\" (UID: \"f9793c37-f939-4a34-8bb0-8a0466d6f8c1\") " Jan 21 07:18:12 crc kubenswrapper[4893]: I0121 07:18:12.844212 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9793c37-f939-4a34-8bb0-8a0466d6f8c1-scripts\") pod \"f9793c37-f939-4a34-8bb0-8a0466d6f8c1\" (UID: \"f9793c37-f939-4a34-8bb0-8a0466d6f8c1\") " Jan 21 07:18:12 crc kubenswrapper[4893]: I0121 07:18:12.844284 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"f9793c37-f939-4a34-8bb0-8a0466d6f8c1\" (UID: \"f9793c37-f939-4a34-8bb0-8a0466d6f8c1\") " Jan 21 07:18:12 crc kubenswrapper[4893]: I0121 07:18:12.844309 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9793c37-f939-4a34-8bb0-8a0466d6f8c1-config-data\") pod \"f9793c37-f939-4a34-8bb0-8a0466d6f8c1\" (UID: 
\"f9793c37-f939-4a34-8bb0-8a0466d6f8c1\") " Jan 21 07:18:12 crc kubenswrapper[4893]: I0121 07:18:12.844330 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0173c021-90bc-403d-9cc1-d21f06bf9b5c-scripts\") pod \"0173c021-90bc-403d-9cc1-d21f06bf9b5c\" (UID: \"0173c021-90bc-403d-9cc1-d21f06bf9b5c\") " Jan 21 07:18:12 crc kubenswrapper[4893]: I0121 07:18:12.844657 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9793c37-f939-4a34-8bb0-8a0466d6f8c1-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f9793c37-f939-4a34-8bb0-8a0466d6f8c1" (UID: "f9793c37-f939-4a34-8bb0-8a0466d6f8c1"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:18:12 crc kubenswrapper[4893]: I0121 07:18:12.844691 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9793c37-f939-4a34-8bb0-8a0466d6f8c1-logs" (OuterVolumeSpecName: "logs") pod "f9793c37-f939-4a34-8bb0-8a0466d6f8c1" (UID: "f9793c37-f939-4a34-8bb0-8a0466d6f8c1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:18:12 crc kubenswrapper[4893]: I0121 07:18:12.844743 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8lnz\" (UniqueName: \"kubernetes.io/projected/0173c021-90bc-403d-9cc1-d21f06bf9b5c-kube-api-access-q8lnz\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:12 crc kubenswrapper[4893]: I0121 07:18:12.844784 4893 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 21 07:18:12 crc kubenswrapper[4893]: I0121 07:18:12.844799 4893 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0173c021-90bc-403d-9cc1-d21f06bf9b5c-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:12 crc kubenswrapper[4893]: I0121 07:18:12.844815 4893 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0173c021-90bc-403d-9cc1-d21f06bf9b5c-logs\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:12 crc kubenswrapper[4893]: I0121 07:18:12.844828 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0173c021-90bc-403d-9cc1-d21f06bf9b5c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:12 crc kubenswrapper[4893]: I0121 07:18:12.845393 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0173c021-90bc-403d-9cc1-d21f06bf9b5c-config-data" (OuterVolumeSpecName: "config-data") pod "0173c021-90bc-403d-9cc1-d21f06bf9b5c" (UID: "0173c021-90bc-403d-9cc1-d21f06bf9b5c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:18:12 crc kubenswrapper[4893]: I0121 07:18:12.863790 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9793c37-f939-4a34-8bb0-8a0466d6f8c1-kube-api-access-zpchw" (OuterVolumeSpecName: "kube-api-access-zpchw") pod "f9793c37-f939-4a34-8bb0-8a0466d6f8c1" (UID: "f9793c37-f939-4a34-8bb0-8a0466d6f8c1"). InnerVolumeSpecName "kube-api-access-zpchw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:18:12 crc kubenswrapper[4893]: I0121 07:18:12.863986 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0173c021-90bc-403d-9cc1-d21f06bf9b5c-scripts" (OuterVolumeSpecName: "scripts") pod "0173c021-90bc-403d-9cc1-d21f06bf9b5c" (UID: "0173c021-90bc-403d-9cc1-d21f06bf9b5c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:18:12 crc kubenswrapper[4893]: I0121 07:18:12.864066 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9793c37-f939-4a34-8bb0-8a0466d6f8c1-scripts" (OuterVolumeSpecName: "scripts") pod "f9793c37-f939-4a34-8bb0-8a0466d6f8c1" (UID: "f9793c37-f939-4a34-8bb0-8a0466d6f8c1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:18:12 crc kubenswrapper[4893]: I0121 07:18:12.864989 4893 generic.go:334] "Generic (PLEG): container finished" podID="0173c021-90bc-403d-9cc1-d21f06bf9b5c" containerID="eb12a898ae0e946ee9080c35c4f4bde138117f24c1972357038e7286188366f4" exitCode=143 Jan 21 07:18:12 crc kubenswrapper[4893]: I0121 07:18:12.865025 4893 generic.go:334] "Generic (PLEG): container finished" podID="0173c021-90bc-403d-9cc1-d21f06bf9b5c" containerID="e4e7969f67b4d413618b35f1b0b72c0292225e31ce070f7e2d780bad9d7eb55b" exitCode=143 Jan 21 07:18:12 crc kubenswrapper[4893]: I0121 07:18:12.865154 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "f9793c37-f939-4a34-8bb0-8a0466d6f8c1" (UID: "f9793c37-f939-4a34-8bb0-8a0466d6f8c1"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 07:18:12 crc kubenswrapper[4893]: I0121 07:18:12.865271 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 07:18:12 crc kubenswrapper[4893]: I0121 07:18:12.869717 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0173c021-90bc-403d-9cc1-d21f06bf9b5c","Type":"ContainerDied","Data":"eb12a898ae0e946ee9080c35c4f4bde138117f24c1972357038e7286188366f4"} Jan 21 07:18:12 crc kubenswrapper[4893]: I0121 07:18:12.869786 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0173c021-90bc-403d-9cc1-d21f06bf9b5c","Type":"ContainerDied","Data":"e4e7969f67b4d413618b35f1b0b72c0292225e31ce070f7e2d780bad9d7eb55b"} Jan 21 07:18:12 crc kubenswrapper[4893]: I0121 07:18:12.869801 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0173c021-90bc-403d-9cc1-d21f06bf9b5c","Type":"ContainerDied","Data":"c97b7f97a6a141a4f3367501823ed09ff45e6a1c050169dee6b90b1628c97c33"} Jan 21 07:18:12 crc kubenswrapper[4893]: I0121 07:18:12.869822 4893 scope.go:117] "RemoveContainer" containerID="eb12a898ae0e946ee9080c35c4f4bde138117f24c1972357038e7286188366f4" Jan 21 07:18:12 crc kubenswrapper[4893]: I0121 07:18:12.884572 4893 generic.go:334] "Generic (PLEG): container finished" podID="f9793c37-f939-4a34-8bb0-8a0466d6f8c1" containerID="42f0c27dfd1a3a0907583ac0a46c979184952882d080079bfceebf3287e24ed7" exitCode=143 Jan 21 07:18:12 crc kubenswrapper[4893]: I0121 07:18:12.884606 4893 generic.go:334] "Generic (PLEG): container finished" podID="f9793c37-f939-4a34-8bb0-8a0466d6f8c1" containerID="3ce8ad622aa97f59b2421b9234cda238760c521b915e003f2148ad1e06580654" exitCode=143 Jan 21 07:18:12 crc kubenswrapper[4893]: I0121 07:18:12.884628 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f9793c37-f939-4a34-8bb0-8a0466d6f8c1","Type":"ContainerDied","Data":"42f0c27dfd1a3a0907583ac0a46c979184952882d080079bfceebf3287e24ed7"} Jan 21 07:18:12 crc kubenswrapper[4893]: I0121 07:18:12.884654 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f9793c37-f939-4a34-8bb0-8a0466d6f8c1","Type":"ContainerDied","Data":"3ce8ad622aa97f59b2421b9234cda238760c521b915e003f2148ad1e06580654"} Jan 21 07:18:12 crc kubenswrapper[4893]: I0121 07:18:12.884680 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f9793c37-f939-4a34-8bb0-8a0466d6f8c1","Type":"ContainerDied","Data":"f258733740344b4f8ff62097851f4a6bdf0d25f46cc758c19c2149a8d42db050"} Jan 21 07:18:12 crc kubenswrapper[4893]: I0121 07:18:12.884751 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
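
Both glance pods' containers report exitCode=143 here, following the earlier "Killing container with a grace period" (gracePeriod=30) entries: 143 is the conventional 128+signal encoding for SIGTERM (128+15), i.e. the containers exited because they were signaled during graceful shutdown, not because they crashed. A self-contained Go sketch of that convention on Linux, using a stand-in child process rather than the CRI-O path the kubelet actually drives:

    package main

    import (
        "fmt"
        "os/exec"
        "syscall"
        "time"
    )

    func main() {
        // Stand-in long-running process; graceful pod shutdown signals the
        // container's init process the same way before escalating to SIGKILL.
        cmd := exec.Command("sleep", "60")
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        time.Sleep(100 * time.Millisecond)
        _ = cmd.Process.Signal(syscall.SIGTERM) // what the grace period starts with
        _ = cmd.Wait()                          // returns an error: process was signaled
        if ws, ok := cmd.ProcessState.Sys().(syscall.WaitStatus); ok && ws.Signaled() {
            // 128 + 15 (SIGTERM) = 143, the exitCode in the ContainerDied events.
            fmt.Printf("killed by %v; shell-style exit code: %d\n",
                ws.Signal(), 128+int(ws.Signal()))
        }
    }
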
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:18:12 crc kubenswrapper[4893]: I0121 07:18:12.913920 4893 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 21 07:18:12 crc kubenswrapper[4893]: I0121 07:18:12.946285 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpchw\" (UniqueName: \"kubernetes.io/projected/f9793c37-f939-4a34-8bb0-8a0466d6f8c1-kube-api-access-zpchw\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:12 crc kubenswrapper[4893]: I0121 07:18:12.946322 4893 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9793c37-f939-4a34-8bb0-8a0466d6f8c1-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:12 crc kubenswrapper[4893]: I0121 07:18:12.946351 4893 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 21 07:18:12 crc kubenswrapper[4893]: I0121 07:18:12.946364 4893 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0173c021-90bc-403d-9cc1-d21f06bf9b5c-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:12 crc kubenswrapper[4893]: I0121 07:18:12.946374 4893 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:12 crc kubenswrapper[4893]: I0121 07:18:12.946385 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9793c37-f939-4a34-8bb0-8a0466d6f8c1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:12 crc kubenswrapper[4893]: I0121 07:18:12.946395 4893 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9793c37-f939-4a34-8bb0-8a0466d6f8c1-logs\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:12 crc kubenswrapper[4893]: I0121 07:18:12.946404 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0173c021-90bc-403d-9cc1-d21f06bf9b5c-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:12 crc kubenswrapper[4893]: I0121 07:18:12.946415 4893 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f9793c37-f939-4a34-8bb0-8a0466d6f8c1-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.038260 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9793c37-f939-4a34-8bb0-8a0466d6f8c1-config-data" (OuterVolumeSpecName: "config-data") pod "f9793c37-f939-4a34-8bb0-8a0466d6f8c1" (UID: "f9793c37-f939-4a34-8bb0-8a0466d6f8c1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.039244 4893 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.051882 4893 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.051928 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9793c37-f939-4a34-8bb0-8a0466d6f8c1-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.062584 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.088791 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.106571 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 07:18:13 crc kubenswrapper[4893]: E0121 07:18:13.107343 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0173c021-90bc-403d-9cc1-d21f06bf9b5c" containerName="glance-httpd" Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.107401 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="0173c021-90bc-403d-9cc1-d21f06bf9b5c" containerName="glance-httpd" Jan 21 07:18:13 crc kubenswrapper[4893]: E0121 07:18:13.107428 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0173c021-90bc-403d-9cc1-d21f06bf9b5c" containerName="glance-log" Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.107436 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="0173c021-90bc-403d-9cc1-d21f06bf9b5c" containerName="glance-log" Jan 21 07:18:13 crc kubenswrapper[4893]: E0121 07:18:13.107487 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="697c9051-8a48-42e4-bd9d-e6cf287869a9" containerName="init" Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.107497 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="697c9051-8a48-42e4-bd9d-e6cf287869a9" containerName="init" Jan 21 07:18:13 crc kubenswrapper[4893]: E0121 07:18:13.107511 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9793c37-f939-4a34-8bb0-8a0466d6f8c1" containerName="glance-httpd" Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.107518 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9793c37-f939-4a34-8bb0-8a0466d6f8c1" containerName="glance-httpd" Jan 21 07:18:13 crc kubenswrapper[4893]: E0121 07:18:13.107568 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9793c37-f939-4a34-8bb0-8a0466d6f8c1" containerName="glance-log" Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.107577 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9793c37-f939-4a34-8bb0-8a0466d6f8c1" containerName="glance-log" Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.107895 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9793c37-f939-4a34-8bb0-8a0466d6f8c1" containerName="glance-httpd" Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.107950 4893 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="0173c021-90bc-403d-9cc1-d21f06bf9b5c" containerName="glance-log" Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.107964 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="697c9051-8a48-42e4-bd9d-e6cf287869a9" containerName="init" Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.107970 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9793c37-f939-4a34-8bb0-8a0466d6f8c1" containerName="glance-log" Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.107984 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="0173c021-90bc-403d-9cc1-d21f06bf9b5c" containerName="glance-httpd" Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.109520 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.116108 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.131310 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.254829 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e305da0-2d2a-44c3-9844-c2071e281918-logs\") pod \"glance-default-external-api-0\" (UID: \"1e305da0-2d2a-44c3-9844-c2071e281918\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.254887 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4ts5\" (UniqueName: \"kubernetes.io/projected/1e305da0-2d2a-44c3-9844-c2071e281918-kube-api-access-f4ts5\") pod \"glance-default-external-api-0\" (UID: \"1e305da0-2d2a-44c3-9844-c2071e281918\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.254929 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"1e305da0-2d2a-44c3-9844-c2071e281918\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.254947 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1e305da0-2d2a-44c3-9844-c2071e281918-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1e305da0-2d2a-44c3-9844-c2071e281918\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.254969 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e305da0-2d2a-44c3-9844-c2071e281918-config-data\") pod \"glance-default-external-api-0\" (UID: \"1e305da0-2d2a-44c3-9844-c2071e281918\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.255017 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e305da0-2d2a-44c3-9844-c2071e281918-scripts\") pod \"glance-default-external-api-0\" (UID: \"1e305da0-2d2a-44c3-9844-c2071e281918\") " 
pod="openstack/glance-default-external-api-0" Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.255050 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e305da0-2d2a-44c3-9844-c2071e281918-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1e305da0-2d2a-44c3-9844-c2071e281918\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.255230 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.280956 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.294105 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.295489 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.302040 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.307088 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.357160 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e305da0-2d2a-44c3-9844-c2071e281918-logs\") pod \"glance-default-external-api-0\" (UID: \"1e305da0-2d2a-44c3-9844-c2071e281918\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.357221 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4ts5\" (UniqueName: \"kubernetes.io/projected/1e305da0-2d2a-44c3-9844-c2071e281918-kube-api-access-f4ts5\") pod \"glance-default-external-api-0\" (UID: \"1e305da0-2d2a-44c3-9844-c2071e281918\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.357297 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"1e305da0-2d2a-44c3-9844-c2071e281918\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.357328 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1e305da0-2d2a-44c3-9844-c2071e281918-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1e305da0-2d2a-44c3-9844-c2071e281918\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.357358 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e305da0-2d2a-44c3-9844-c2071e281918-config-data\") pod \"glance-default-external-api-0\" (UID: \"1e305da0-2d2a-44c3-9844-c2071e281918\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.357422 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/1e305da0-2d2a-44c3-9844-c2071e281918-scripts\") pod \"glance-default-external-api-0\" (UID: \"1e305da0-2d2a-44c3-9844-c2071e281918\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.357472 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e305da0-2d2a-44c3-9844-c2071e281918-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1e305da0-2d2a-44c3-9844-c2071e281918\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.358062 4893 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"1e305da0-2d2a-44c3-9844-c2071e281918\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.359774 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1e305da0-2d2a-44c3-9844-c2071e281918-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1e305da0-2d2a-44c3-9844-c2071e281918\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.360090 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e305da0-2d2a-44c3-9844-c2071e281918-logs\") pod \"glance-default-external-api-0\" (UID: \"1e305da0-2d2a-44c3-9844-c2071e281918\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.367524 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e305da0-2d2a-44c3-9844-c2071e281918-scripts\") pod \"glance-default-external-api-0\" (UID: \"1e305da0-2d2a-44c3-9844-c2071e281918\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.374962 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e305da0-2d2a-44c3-9844-c2071e281918-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1e305da0-2d2a-44c3-9844-c2071e281918\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.376271 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e305da0-2d2a-44c3-9844-c2071e281918-config-data\") pod \"glance-default-external-api-0\" (UID: \"1e305da0-2d2a-44c3-9844-c2071e281918\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.409624 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4ts5\" (UniqueName: \"kubernetes.io/projected/1e305da0-2d2a-44c3-9844-c2071e281918-kube-api-access-f4ts5\") pod \"glance-default-external-api-0\" (UID: \"1e305da0-2d2a-44c3-9844-c2071e281918\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.447886 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: 
\"1e305da0-2d2a-44c3-9844-c2071e281918\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.460664 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bbed581-8e85-4849-82d3-ad007e53a1b0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4bbed581-8e85-4849-82d3-ad007e53a1b0\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.460986 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bbed581-8e85-4849-82d3-ad007e53a1b0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4bbed581-8e85-4849-82d3-ad007e53a1b0\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.461086 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gkww\" (UniqueName: \"kubernetes.io/projected/4bbed581-8e85-4849-82d3-ad007e53a1b0-kube-api-access-2gkww\") pod \"glance-default-internal-api-0\" (UID: \"4bbed581-8e85-4849-82d3-ad007e53a1b0\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.461208 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bbed581-8e85-4849-82d3-ad007e53a1b0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4bbed581-8e85-4849-82d3-ad007e53a1b0\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.461283 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"4bbed581-8e85-4849-82d3-ad007e53a1b0\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.461374 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4bbed581-8e85-4849-82d3-ad007e53a1b0-logs\") pod \"glance-default-internal-api-0\" (UID: \"4bbed581-8e85-4849-82d3-ad007e53a1b0\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.461456 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4bbed581-8e85-4849-82d3-ad007e53a1b0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4bbed581-8e85-4849-82d3-ad007e53a1b0\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.497406 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.562964 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bbed581-8e85-4849-82d3-ad007e53a1b0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4bbed581-8e85-4849-82d3-ad007e53a1b0\") " pod="openstack/glance-default-internal-api-0"
Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.563021 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"4bbed581-8e85-4849-82d3-ad007e53a1b0\") " pod="openstack/glance-default-internal-api-0"
Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.563065 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4bbed581-8e85-4849-82d3-ad007e53a1b0-logs\") pod \"glance-default-internal-api-0\" (UID: \"4bbed581-8e85-4849-82d3-ad007e53a1b0\") " pod="openstack/glance-default-internal-api-0"
Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.563090 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4bbed581-8e85-4849-82d3-ad007e53a1b0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4bbed581-8e85-4849-82d3-ad007e53a1b0\") " pod="openstack/glance-default-internal-api-0"
Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.563142 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bbed581-8e85-4849-82d3-ad007e53a1b0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4bbed581-8e85-4849-82d3-ad007e53a1b0\") " pod="openstack/glance-default-internal-api-0"
Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.563213 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bbed581-8e85-4849-82d3-ad007e53a1b0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4bbed581-8e85-4849-82d3-ad007e53a1b0\") " pod="openstack/glance-default-internal-api-0"
Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.563232 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gkww\" (UniqueName: \"kubernetes.io/projected/4bbed581-8e85-4849-82d3-ad007e53a1b0-kube-api-access-2gkww\") pod \"glance-default-internal-api-0\" (UID: \"4bbed581-8e85-4849-82d3-ad007e53a1b0\") " pod="openstack/glance-default-internal-api-0"
Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.563934 4893 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"4bbed581-8e85-4849-82d3-ad007e53a1b0\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0"
Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.564129 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4bbed581-8e85-4849-82d3-ad007e53a1b0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4bbed581-8e85-4849-82d3-ad007e53a1b0\") " pod="openstack/glance-default-internal-api-0"
Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.564178 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4bbed581-8e85-4849-82d3-ad007e53a1b0-logs\") pod \"glance-default-internal-api-0\" (UID: \"4bbed581-8e85-4849-82d3-ad007e53a1b0\") " pod="openstack/glance-default-internal-api-0"
Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.568324 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bbed581-8e85-4849-82d3-ad007e53a1b0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4bbed581-8e85-4849-82d3-ad007e53a1b0\") " pod="openstack/glance-default-internal-api-0"
Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.569132 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bbed581-8e85-4849-82d3-ad007e53a1b0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4bbed581-8e85-4849-82d3-ad007e53a1b0\") " pod="openstack/glance-default-internal-api-0"
Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.571851 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bbed581-8e85-4849-82d3-ad007e53a1b0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4bbed581-8e85-4849-82d3-ad007e53a1b0\") " pod="openstack/glance-default-internal-api-0"
Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.592110 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gkww\" (UniqueName: \"kubernetes.io/projected/4bbed581-8e85-4849-82d3-ad007e53a1b0-kube-api-access-2gkww\") pod \"glance-default-internal-api-0\" (UID: \"4bbed581-8e85-4849-82d3-ad007e53a1b0\") " pod="openstack/glance-default-internal-api-0"
Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.598967 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"4bbed581-8e85-4849-82d3-ad007e53a1b0\") " pod="openstack/glance-default-internal-api-0"
Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.611652 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0173c021-90bc-403d-9cc1-d21f06bf9b5c" path="/var/lib/kubelet/pods/0173c021-90bc-403d-9cc1-d21f06bf9b5c/volumes"
Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.612907 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9793c37-f939-4a34-8bb0-8a0466d6f8c1" path="/var/lib/kubelet/pods/f9793c37-f939-4a34-8bb0-8a0466d6f8c1/volumes"
Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.630552 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.898781 4893 generic.go:334] "Generic (PLEG): container finished" podID="16af4454-341e-4a5b-8203-0514725f1cbe" containerID="7733667fd7c8cd7e55ff97256302b135ad30aee18a32d7e538c9ea12108ae3b1" exitCode=0
Jan 21 07:18:13 crc kubenswrapper[4893]: I0121 07:18:13.898974 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-chp5k" event={"ID":"16af4454-341e-4a5b-8203-0514725f1cbe","Type":"ContainerDied","Data":"7733667fd7c8cd7e55ff97256302b135ad30aee18a32d7e538c9ea12108ae3b1"}
Jan 21 07:18:15 crc kubenswrapper[4893]: I0121 07:18:15.657614 4893 scope.go:117] "RemoveContainer" containerID="e4e7969f67b4d413618b35f1b0b72c0292225e31ce070f7e2d780bad9d7eb55b"
Jan 21 07:18:15 crc kubenswrapper[4893]: I0121 07:18:15.774983 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-chp5k"
Jan 21 07:18:15 crc kubenswrapper[4893]: I0121 07:18:15.923993 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16af4454-341e-4a5b-8203-0514725f1cbe-config-data\") pod \"16af4454-341e-4a5b-8203-0514725f1cbe\" (UID: \"16af4454-341e-4a5b-8203-0514725f1cbe\") "
Jan 21 07:18:15 crc kubenswrapper[4893]: I0121 07:18:15.924081 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/16af4454-341e-4a5b-8203-0514725f1cbe-fernet-keys\") pod \"16af4454-341e-4a5b-8203-0514725f1cbe\" (UID: \"16af4454-341e-4a5b-8203-0514725f1cbe\") "
Jan 21 07:18:15 crc kubenswrapper[4893]: I0121 07:18:15.924141 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/16af4454-341e-4a5b-8203-0514725f1cbe-credential-keys\") pod \"16af4454-341e-4a5b-8203-0514725f1cbe\" (UID: \"16af4454-341e-4a5b-8203-0514725f1cbe\") "
Jan 21 07:18:15 crc kubenswrapper[4893]: I0121 07:18:15.924179 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16af4454-341e-4a5b-8203-0514725f1cbe-scripts\") pod \"16af4454-341e-4a5b-8203-0514725f1cbe\" (UID: \"16af4454-341e-4a5b-8203-0514725f1cbe\") "
Jan 21 07:18:15 crc kubenswrapper[4893]: I0121 07:18:15.924269 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16af4454-341e-4a5b-8203-0514725f1cbe-combined-ca-bundle\") pod \"16af4454-341e-4a5b-8203-0514725f1cbe\" (UID: \"16af4454-341e-4a5b-8203-0514725f1cbe\") "
Jan 21 07:18:15 crc kubenswrapper[4893]: I0121 07:18:15.924289 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9fq7\" (UniqueName: \"kubernetes.io/projected/16af4454-341e-4a5b-8203-0514725f1cbe-kube-api-access-m9fq7\") pod \"16af4454-341e-4a5b-8203-0514725f1cbe\" (UID: \"16af4454-341e-4a5b-8203-0514725f1cbe\") "
Jan 21 07:18:15 crc kubenswrapper[4893]: I0121 07:18:15.930405 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16af4454-341e-4a5b-8203-0514725f1cbe-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "16af4454-341e-4a5b-8203-0514725f1cbe" (UID: "16af4454-341e-4a5b-8203-0514725f1cbe"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 07:18:15 crc kubenswrapper[4893]: I0121 07:18:15.933243 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16af4454-341e-4a5b-8203-0514725f1cbe-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "16af4454-341e-4a5b-8203-0514725f1cbe" (UID: "16af4454-341e-4a5b-8203-0514725f1cbe"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 07:18:15 crc kubenswrapper[4893]: I0121 07:18:15.937571 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-chp5k"
Jan 21 07:18:15 crc kubenswrapper[4893]: I0121 07:18:15.938133 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-chp5k" event={"ID":"16af4454-341e-4a5b-8203-0514725f1cbe","Type":"ContainerDied","Data":"c5132566fd1e11dcf2058ec30a68d7692b454c384e94de90378ea705cbe16ecf"}
Jan 21 07:18:15 crc kubenswrapper[4893]: I0121 07:18:15.938246 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5132566fd1e11dcf2058ec30a68d7692b454c384e94de90378ea705cbe16ecf"
Jan 21 07:18:15 crc kubenswrapper[4893]: I0121 07:18:15.948139 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16af4454-341e-4a5b-8203-0514725f1cbe-scripts" (OuterVolumeSpecName: "scripts") pod "16af4454-341e-4a5b-8203-0514725f1cbe" (UID: "16af4454-341e-4a5b-8203-0514725f1cbe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 07:18:15 crc kubenswrapper[4893]: I0121 07:18:15.952300 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16af4454-341e-4a5b-8203-0514725f1cbe-kube-api-access-m9fq7" (OuterVolumeSpecName: "kube-api-access-m9fq7") pod "16af4454-341e-4a5b-8203-0514725f1cbe" (UID: "16af4454-341e-4a5b-8203-0514725f1cbe"). InnerVolumeSpecName "kube-api-access-m9fq7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 07:18:15 crc kubenswrapper[4893]: I0121 07:18:15.957189 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16af4454-341e-4a5b-8203-0514725f1cbe-config-data" (OuterVolumeSpecName: "config-data") pod "16af4454-341e-4a5b-8203-0514725f1cbe" (UID: "16af4454-341e-4a5b-8203-0514725f1cbe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 07:18:16 crc kubenswrapper[4893]: I0121 07:18:16.022833 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16af4454-341e-4a5b-8203-0514725f1cbe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "16af4454-341e-4a5b-8203-0514725f1cbe" (UID: "16af4454-341e-4a5b-8203-0514725f1cbe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 07:18:16 crc kubenswrapper[4893]: I0121 07:18:16.025908 4893 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/16af4454-341e-4a5b-8203-0514725f1cbe-credential-keys\") on node \"crc\" DevicePath \"\""
Jan 21 07:18:16 crc kubenswrapper[4893]: I0121 07:18:16.025937 4893 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16af4454-341e-4a5b-8203-0514725f1cbe-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 07:18:16 crc kubenswrapper[4893]: I0121 07:18:16.025947 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16af4454-341e-4a5b-8203-0514725f1cbe-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 07:18:16 crc kubenswrapper[4893]: I0121 07:18:16.025956 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9fq7\" (UniqueName: \"kubernetes.io/projected/16af4454-341e-4a5b-8203-0514725f1cbe-kube-api-access-m9fq7\") on node \"crc\" DevicePath \"\""
Jan 21 07:18:16 crc kubenswrapper[4893]: I0121 07:18:16.025966 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16af4454-341e-4a5b-8203-0514725f1cbe-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 07:18:16 crc kubenswrapper[4893]: I0121 07:18:16.025974 4893 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/16af4454-341e-4a5b-8203-0514725f1cbe-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 21 07:18:16 crc kubenswrapper[4893]: I0121 07:18:16.036362 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-chp5k"]
Jan 21 07:18:16 crc kubenswrapper[4893]: I0121 07:18:16.044575 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-chp5k"]
Jan 21 07:18:16 crc kubenswrapper[4893]: I0121 07:18:16.133482 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-vjrdh"]
Jan 21 07:18:16 crc kubenswrapper[4893]: E0121 07:18:16.134356 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16af4454-341e-4a5b-8203-0514725f1cbe" containerName="keystone-bootstrap"
Jan 21 07:18:16 crc kubenswrapper[4893]: I0121 07:18:16.134386 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="16af4454-341e-4a5b-8203-0514725f1cbe" containerName="keystone-bootstrap"
Jan 21 07:18:16 crc kubenswrapper[4893]: I0121 07:18:16.134593 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="16af4454-341e-4a5b-8203-0514725f1cbe" containerName="keystone-bootstrap"
Jan 21 07:18:16 crc kubenswrapper[4893]: I0121 07:18:16.135340 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vjrdh"
Jan 21 07:18:16 crc kubenswrapper[4893]: I0121 07:18:16.147268 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vjrdh"]
Jan 21 07:18:16 crc kubenswrapper[4893]: I0121 07:18:16.229199 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13231972-103e-4970-845c-5aba8c59d68f-combined-ca-bundle\") pod \"keystone-bootstrap-vjrdh\" (UID: \"13231972-103e-4970-845c-5aba8c59d68f\") " pod="openstack/keystone-bootstrap-vjrdh"
Jan 21 07:18:16 crc kubenswrapper[4893]: I0121 07:18:16.229240 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13231972-103e-4970-845c-5aba8c59d68f-scripts\") pod \"keystone-bootstrap-vjrdh\" (UID: \"13231972-103e-4970-845c-5aba8c59d68f\") " pod="openstack/keystone-bootstrap-vjrdh"
Jan 21 07:18:16 crc kubenswrapper[4893]: I0121 07:18:16.229263 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fqgt\" (UniqueName: \"kubernetes.io/projected/13231972-103e-4970-845c-5aba8c59d68f-kube-api-access-7fqgt\") pod \"keystone-bootstrap-vjrdh\" (UID: \"13231972-103e-4970-845c-5aba8c59d68f\") " pod="openstack/keystone-bootstrap-vjrdh"
Jan 21 07:18:16 crc kubenswrapper[4893]: I0121 07:18:16.229299 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/13231972-103e-4970-845c-5aba8c59d68f-fernet-keys\") pod \"keystone-bootstrap-vjrdh\" (UID: \"13231972-103e-4970-845c-5aba8c59d68f\") " pod="openstack/keystone-bootstrap-vjrdh"
Jan 21 07:18:16 crc kubenswrapper[4893]: I0121 07:18:16.229318 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13231972-103e-4970-845c-5aba8c59d68f-config-data\") pod \"keystone-bootstrap-vjrdh\" (UID: \"13231972-103e-4970-845c-5aba8c59d68f\") " pod="openstack/keystone-bootstrap-vjrdh"
Jan 21 07:18:16 crc kubenswrapper[4893]: I0121 07:18:16.229449 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/13231972-103e-4970-845c-5aba8c59d68f-credential-keys\") pod \"keystone-bootstrap-vjrdh\" (UID: \"13231972-103e-4970-845c-5aba8c59d68f\") " pod="openstack/keystone-bootstrap-vjrdh"
Jan 21 07:18:16 crc kubenswrapper[4893]: I0121 07:18:16.345129 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13231972-103e-4970-845c-5aba8c59d68f-combined-ca-bundle\") pod \"keystone-bootstrap-vjrdh\" (UID: \"13231972-103e-4970-845c-5aba8c59d68f\") " pod="openstack/keystone-bootstrap-vjrdh"
Jan 21 07:18:16 crc kubenswrapper[4893]: I0121 07:18:16.345189 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fqgt\" (UniqueName: \"kubernetes.io/projected/13231972-103e-4970-845c-5aba8c59d68f-kube-api-access-7fqgt\") pod \"keystone-bootstrap-vjrdh\" (UID: \"13231972-103e-4970-845c-5aba8c59d68f\") " pod="openstack/keystone-bootstrap-vjrdh"
Jan 21 07:18:16 crc kubenswrapper[4893]: I0121 07:18:16.345220 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13231972-103e-4970-845c-5aba8c59d68f-scripts\") pod \"keystone-bootstrap-vjrdh\" (UID: \"13231972-103e-4970-845c-5aba8c59d68f\") " pod="openstack/keystone-bootstrap-vjrdh"
Jan 21 07:18:16 crc kubenswrapper[4893]: I0121 07:18:16.345285 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/13231972-103e-4970-845c-5aba8c59d68f-fernet-keys\") pod \"keystone-bootstrap-vjrdh\" (UID: \"13231972-103e-4970-845c-5aba8c59d68f\") " pod="openstack/keystone-bootstrap-vjrdh"
Jan 21 07:18:16 crc kubenswrapper[4893]: I0121 07:18:16.345311 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13231972-103e-4970-845c-5aba8c59d68f-config-data\") pod \"keystone-bootstrap-vjrdh\" (UID: \"13231972-103e-4970-845c-5aba8c59d68f\") " pod="openstack/keystone-bootstrap-vjrdh"
Jan 21 07:18:16 crc kubenswrapper[4893]: I0121 07:18:16.345359 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/13231972-103e-4970-845c-5aba8c59d68f-credential-keys\") pod \"keystone-bootstrap-vjrdh\" (UID: \"13231972-103e-4970-845c-5aba8c59d68f\") " pod="openstack/keystone-bootstrap-vjrdh"
Jan 21 07:18:16 crc kubenswrapper[4893]: I0121 07:18:16.353720 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13231972-103e-4970-845c-5aba8c59d68f-scripts\") pod \"keystone-bootstrap-vjrdh\" (UID: \"13231972-103e-4970-845c-5aba8c59d68f\") " pod="openstack/keystone-bootstrap-vjrdh"
Jan 21 07:18:16 crc kubenswrapper[4893]: I0121 07:18:16.353945 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/13231972-103e-4970-845c-5aba8c59d68f-credential-keys\") pod \"keystone-bootstrap-vjrdh\" (UID: \"13231972-103e-4970-845c-5aba8c59d68f\") " pod="openstack/keystone-bootstrap-vjrdh"
Jan 21 07:18:16 crc kubenswrapper[4893]: I0121 07:18:16.354370 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/13231972-103e-4970-845c-5aba8c59d68f-fernet-keys\") pod \"keystone-bootstrap-vjrdh\" (UID: \"13231972-103e-4970-845c-5aba8c59d68f\") " pod="openstack/keystone-bootstrap-vjrdh"
Jan 21 07:18:16 crc kubenswrapper[4893]: I0121 07:18:16.361532 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13231972-103e-4970-845c-5aba8c59d68f-combined-ca-bundle\") pod \"keystone-bootstrap-vjrdh\" (UID: \"13231972-103e-4970-845c-5aba8c59d68f\") " pod="openstack/keystone-bootstrap-vjrdh"
Jan 21 07:18:16 crc kubenswrapper[4893]: I0121 07:18:16.371538 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13231972-103e-4970-845c-5aba8c59d68f-config-data\") pod \"keystone-bootstrap-vjrdh\" (UID: \"13231972-103e-4970-845c-5aba8c59d68f\") " pod="openstack/keystone-bootstrap-vjrdh"
Jan 21 07:18:16 crc kubenswrapper[4893]: I0121 07:18:16.375258 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fqgt\" (UniqueName: \"kubernetes.io/projected/13231972-103e-4970-845c-5aba8c59d68f-kube-api-access-7fqgt\") pod \"keystone-bootstrap-vjrdh\" (UID: \"13231972-103e-4970-845c-5aba8c59d68f\") " pod="openstack/keystone-bootstrap-vjrdh"
Jan 21 07:18:16 crc kubenswrapper[4893]: I0121 07:18:16.451049 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vjrdh"
Jan 21 07:18:17 crc kubenswrapper[4893]: I0121 07:18:17.122432 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 21 07:18:17 crc kubenswrapper[4893]: I0121 07:18:17.172124 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 21 07:18:17 crc kubenswrapper[4893]: I0121 07:18:17.278542 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f6f8cb849-zvqkt"
Jan 21 07:18:17 crc kubenswrapper[4893]: I0121 07:18:17.357131 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8db84466c-kt44r"]
Jan 21 07:18:17 crc kubenswrapper[4893]: I0121 07:18:17.357575 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8db84466c-kt44r" podUID="34db2d2f-d623-4567-b27b-12b205e66587" containerName="dnsmasq-dns" containerID="cri-o://7108e349b8317054374147ccd214b5fd604f0c3792e4a548e64c7029b570f254" gracePeriod=10
Jan 21 07:18:17 crc kubenswrapper[4893]: I0121 07:18:17.595007 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16af4454-341e-4a5b-8203-0514725f1cbe" path="/var/lib/kubelet/pods/16af4454-341e-4a5b-8203-0514725f1cbe/volumes"
Jan 21 07:18:17 crc kubenswrapper[4893]: I0121 07:18:17.967398 4893 generic.go:334] "Generic (PLEG): container finished" podID="34db2d2f-d623-4567-b27b-12b205e66587" containerID="7108e349b8317054374147ccd214b5fd604f0c3792e4a548e64c7029b570f254" exitCode=0
Jan 21 07:18:17 crc kubenswrapper[4893]: I0121 07:18:17.967453 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8db84466c-kt44r" event={"ID":"34db2d2f-d623-4567-b27b-12b205e66587","Type":"ContainerDied","Data":"7108e349b8317054374147ccd214b5fd604f0c3792e4a548e64c7029b570f254"}
Jan 21 07:18:19 crc kubenswrapper[4893]: I0121 07:18:19.744200 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8db84466c-kt44r" podUID="34db2d2f-d623-4567-b27b-12b205e66587" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.130:5353: connect: connection refused"
Jan 21 07:18:28 crc kubenswrapper[4893]: I0121 07:18:28.656492 4893 patch_prober.go:28] interesting pod/machine-config-daemon-hg78p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 07:18:28 crc kubenswrapper[4893]: I0121 07:18:28.657218 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 07:18:28 crc kubenswrapper[4893]: I0121 07:18:28.657304 4893 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hg78p"
Jan 21 07:18:28 crc kubenswrapper[4893]: I0121 07:18:28.658099 4893 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"65e775d6c7fb2e1ccc5654cabb2b28ac1217a7b4dff2b28de89fd7fcc1b71b03"} pod="openshift-machine-config-operator/machine-config-daemon-hg78p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 07:18:28 crc kubenswrapper[4893]: I0121 07:18:28.658147 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" containerID="cri-o://65e775d6c7fb2e1ccc5654cabb2b28ac1217a7b4dff2b28de89fd7fcc1b71b03" gracePeriod=600
Jan 21 07:18:29 crc kubenswrapper[4893]: E0121 07:18:29.503932 4893 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:5a548c25fe3d02f7a042cb0a6d28fc8039a34c4a3b3d07aadda4aba3a926e777"
Jan 21 07:18:29 crc kubenswrapper[4893]: E0121 07:18:29.504449 4893 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:5a548c25fe3d02f7a042cb0a6d28fc8039a34c4a3b3d07aadda4aba3a926e777,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nf7hd7h55fh654hffhc7h576h54bh564hf6h574hd8h688h65fhb9h597h66dhf9hb5hcbh5dbh5dbhf4hb6h7h58ch6bhbh6h5fch5d8h6bq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sqthd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(12e11571-a021-4df2-a0da-69f56335a8c8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 21 07:18:29 crc kubenswrapper[4893]: I0121 07:18:29.515804 4893 scope.go:117] "RemoveContainer" containerID="eb12a898ae0e946ee9080c35c4f4bde138117f24c1972357038e7286188366f4"
Jan 21 07:18:29 crc kubenswrapper[4893]: E0121 07:18:29.519915 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb12a898ae0e946ee9080c35c4f4bde138117f24c1972357038e7286188366f4\": container with ID starting with eb12a898ae0e946ee9080c35c4f4bde138117f24c1972357038e7286188366f4 not found: ID does not exist" containerID="eb12a898ae0e946ee9080c35c4f4bde138117f24c1972357038e7286188366f4"
Jan 21 07:18:29 crc kubenswrapper[4893]: I0121 07:18:29.519960 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb12a898ae0e946ee9080c35c4f4bde138117f24c1972357038e7286188366f4"} err="failed to get container status \"eb12a898ae0e946ee9080c35c4f4bde138117f24c1972357038e7286188366f4\": rpc error: code = NotFound desc = could not find container \"eb12a898ae0e946ee9080c35c4f4bde138117f24c1972357038e7286188366f4\": container with ID starting with eb12a898ae0e946ee9080c35c4f4bde138117f24c1972357038e7286188366f4 not found: ID does not exist"
Jan 21 07:18:29 crc kubenswrapper[4893]: I0121 07:18:29.520015 4893 scope.go:117] "RemoveContainer" containerID="e4e7969f67b4d413618b35f1b0b72c0292225e31ce070f7e2d780bad9d7eb55b"
Jan 21 07:18:29 crc kubenswrapper[4893]: E0121 07:18:29.521122 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4e7969f67b4d413618b35f1b0b72c0292225e31ce070f7e2d780bad9d7eb55b\": container with ID starting with e4e7969f67b4d413618b35f1b0b72c0292225e31ce070f7e2d780bad9d7eb55b not found: ID does not exist" containerID="e4e7969f67b4d413618b35f1b0b72c0292225e31ce070f7e2d780bad9d7eb55b"
Jan 21 07:18:29 crc kubenswrapper[4893]: I0121 07:18:29.521203 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4e7969f67b4d413618b35f1b0b72c0292225e31ce070f7e2d780bad9d7eb55b"} err="failed to get container status \"e4e7969f67b4d413618b35f1b0b72c0292225e31ce070f7e2d780bad9d7eb55b\": rpc error: code = NotFound desc = could not find container \"e4e7969f67b4d413618b35f1b0b72c0292225e31ce070f7e2d780bad9d7eb55b\": container with ID starting with e4e7969f67b4d413618b35f1b0b72c0292225e31ce070f7e2d780bad9d7eb55b not found: ID does not exist"
Jan 21 07:18:29 crc kubenswrapper[4893]: I0121 07:18:29.521235 4893 scope.go:117] "RemoveContainer" containerID="eb12a898ae0e946ee9080c35c4f4bde138117f24c1972357038e7286188366f4"
Jan 21 07:18:29 crc kubenswrapper[4893]: I0121 07:18:29.522034 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb12a898ae0e946ee9080c35c4f4bde138117f24c1972357038e7286188366f4"} err="failed to get container status \"eb12a898ae0e946ee9080c35c4f4bde138117f24c1972357038e7286188366f4\": rpc error: code = NotFound desc = could not find container \"eb12a898ae0e946ee9080c35c4f4bde138117f24c1972357038e7286188366f4\": container with ID starting with eb12a898ae0e946ee9080c35c4f4bde138117f24c1972357038e7286188366f4 not found: ID does not exist"
Jan 21 07:18:29 crc kubenswrapper[4893]: I0121 07:18:29.522064 4893 scope.go:117] "RemoveContainer" containerID="e4e7969f67b4d413618b35f1b0b72c0292225e31ce070f7e2d780bad9d7eb55b"
Jan 21 07:18:29 crc kubenswrapper[4893]: I0121 07:18:29.522853 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4e7969f67b4d413618b35f1b0b72c0292225e31ce070f7e2d780bad9d7eb55b"} err="failed to get container status \"e4e7969f67b4d413618b35f1b0b72c0292225e31ce070f7e2d780bad9d7eb55b\": rpc error: code = NotFound desc = could not find container \"e4e7969f67b4d413618b35f1b0b72c0292225e31ce070f7e2d780bad9d7eb55b\": container with ID starting with e4e7969f67b4d413618b35f1b0b72c0292225e31ce070f7e2d780bad9d7eb55b not found: ID does not exist"
Jan 21 07:18:29 crc kubenswrapper[4893]: I0121 07:18:29.522872 4893 scope.go:117] "RemoveContainer" containerID="42f0c27dfd1a3a0907583ac0a46c979184952882d080079bfceebf3287e24ed7"
InnerVolumeSpecName "kube-api-access-w88bp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:18:29 crc kubenswrapper[4893]: I0121 07:18:29.698408 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34db2d2f-d623-4567-b27b-12b205e66587-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "34db2d2f-d623-4567-b27b-12b205e66587" (UID: "34db2d2f-d623-4567-b27b-12b205e66587"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:18:29 crc kubenswrapper[4893]: I0121 07:18:29.699336 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34db2d2f-d623-4567-b27b-12b205e66587-config" (OuterVolumeSpecName: "config") pod "34db2d2f-d623-4567-b27b-12b205e66587" (UID: "34db2d2f-d623-4567-b27b-12b205e66587"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:18:29 crc kubenswrapper[4893]: I0121 07:18:29.707254 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34db2d2f-d623-4567-b27b-12b205e66587-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "34db2d2f-d623-4567-b27b-12b205e66587" (UID: "34db2d2f-d623-4567-b27b-12b205e66587"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:18:29 crc kubenswrapper[4893]: I0121 07:18:29.729458 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34db2d2f-d623-4567-b27b-12b205e66587-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "34db2d2f-d623-4567-b27b-12b205e66587" (UID: "34db2d2f-d623-4567-b27b-12b205e66587"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:18:29 crc kubenswrapper[4893]: I0121 07:18:29.730140 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34db2d2f-d623-4567-b27b-12b205e66587-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "34db2d2f-d623-4567-b27b-12b205e66587" (UID: "34db2d2f-d623-4567-b27b-12b205e66587"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:18:29 crc kubenswrapper[4893]: I0121 07:18:29.744615 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8db84466c-kt44r" podUID="34db2d2f-d623-4567-b27b-12b205e66587" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.130:5353: i/o timeout" Jan 21 07:18:29 crc kubenswrapper[4893]: I0121 07:18:29.748720 4893 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34db2d2f-d623-4567-b27b-12b205e66587-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:29 crc kubenswrapper[4893]: I0121 07:18:29.748756 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w88bp\" (UniqueName: \"kubernetes.io/projected/34db2d2f-d623-4567-b27b-12b205e66587-kube-api-access-w88bp\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:29 crc kubenswrapper[4893]: I0121 07:18:29.748771 4893 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/34db2d2f-d623-4567-b27b-12b205e66587-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:29 crc kubenswrapper[4893]: I0121 07:18:29.748782 4893 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/34db2d2f-d623-4567-b27b-12b205e66587-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:29 crc kubenswrapper[4893]: I0121 07:18:29.748793 4893 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/34db2d2f-d623-4567-b27b-12b205e66587-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:29 crc kubenswrapper[4893]: I0121 07:18:29.748803 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34db2d2f-d623-4567-b27b-12b205e66587-config\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:30 crc kubenswrapper[4893]: I0121 07:18:30.082568 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8db84466c-kt44r" event={"ID":"34db2d2f-d623-4567-b27b-12b205e66587","Type":"ContainerDied","Data":"4d13898b9d0cafa0ab23acf1298b85df05549f0a2fea479c59bc25828a5bd49e"} Jan 21 07:18:30 crc kubenswrapper[4893]: I0121 07:18:30.082681 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8db84466c-kt44r" Jan 21 07:18:30 crc kubenswrapper[4893]: I0121 07:18:30.091825 4893 generic.go:334] "Generic (PLEG): container finished" podID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerID="65e775d6c7fb2e1ccc5654cabb2b28ac1217a7b4dff2b28de89fd7fcc1b71b03" exitCode=0 Jan 21 07:18:30 crc kubenswrapper[4893]: I0121 07:18:30.091875 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" event={"ID":"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a","Type":"ContainerDied","Data":"65e775d6c7fb2e1ccc5654cabb2b28ac1217a7b4dff2b28de89fd7fcc1b71b03"} Jan 21 07:18:30 crc kubenswrapper[4893]: I0121 07:18:30.115537 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8db84466c-kt44r"] Jan 21 07:18:30 crc kubenswrapper[4893]: I0121 07:18:30.124423 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8db84466c-kt44r"] Jan 21 07:18:30 crc kubenswrapper[4893]: E0121 07:18:30.653192 4893 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49" Jan 21 07:18:30 crc kubenswrapper[4893]: E0121 07:18:30.653628 4893 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-64976,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPri
vilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-wrnx6_openstack(5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 07:18:30 crc kubenswrapper[4893]: E0121 07:18:30.655088 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-wrnx6" podUID="5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619" Jan 21 07:18:30 crc kubenswrapper[4893]: I0121 07:18:30.698084 4893 scope.go:117] "RemoveContainer" containerID="3ce8ad622aa97f59b2421b9234cda238760c521b915e003f2148ad1e06580654" Jan 21 07:18:30 crc kubenswrapper[4893]: I0121 07:18:30.814071 4893 scope.go:117] "RemoveContainer" containerID="42f0c27dfd1a3a0907583ac0a46c979184952882d080079bfceebf3287e24ed7" Jan 21 07:18:30 crc kubenswrapper[4893]: E0121 07:18:30.815096 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42f0c27dfd1a3a0907583ac0a46c979184952882d080079bfceebf3287e24ed7\": container with ID starting with 42f0c27dfd1a3a0907583ac0a46c979184952882d080079bfceebf3287e24ed7 not found: ID does not exist" containerID="42f0c27dfd1a3a0907583ac0a46c979184952882d080079bfceebf3287e24ed7" Jan 21 07:18:30 crc kubenswrapper[4893]: I0121 07:18:30.815142 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42f0c27dfd1a3a0907583ac0a46c979184952882d080079bfceebf3287e24ed7"} err="failed to get container status \"42f0c27dfd1a3a0907583ac0a46c979184952882d080079bfceebf3287e24ed7\": rpc error: code = NotFound desc = could not find container \"42f0c27dfd1a3a0907583ac0a46c979184952882d080079bfceebf3287e24ed7\": container with ID starting with 42f0c27dfd1a3a0907583ac0a46c979184952882d080079bfceebf3287e24ed7 not found: ID does not exist" Jan 21 07:18:30 crc kubenswrapper[4893]: I0121 07:18:30.815167 4893 scope.go:117] "RemoveContainer" containerID="3ce8ad622aa97f59b2421b9234cda238760c521b915e003f2148ad1e06580654" Jan 21 07:18:30 crc kubenswrapper[4893]: E0121 07:18:30.815607 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ce8ad622aa97f59b2421b9234cda238760c521b915e003f2148ad1e06580654\": container with ID starting with 3ce8ad622aa97f59b2421b9234cda238760c521b915e003f2148ad1e06580654 not found: ID does not exist" containerID="3ce8ad622aa97f59b2421b9234cda238760c521b915e003f2148ad1e06580654" Jan 21 07:18:30 crc kubenswrapper[4893]: I0121 07:18:30.815653 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ce8ad622aa97f59b2421b9234cda238760c521b915e003f2148ad1e06580654"} err="failed to get container status \"3ce8ad622aa97f59b2421b9234cda238760c521b915e003f2148ad1e06580654\": rpc error: code = NotFound desc = could not find container \"3ce8ad622aa97f59b2421b9234cda238760c521b915e003f2148ad1e06580654\": container with ID starting with 3ce8ad622aa97f59b2421b9234cda238760c521b915e003f2148ad1e06580654 not found: ID does not exist" Jan 21 07:18:30 crc kubenswrapper[4893]: 
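The cinder-db-sync pod goes through the ErrImagePull to ImagePullBackOff transition: a failed pull is reported immediately, and the next attempt is delayed with exponential backoff. A sketch of that retry shape; the 10s base doubling to a 5m cap matches commonly documented kubelet defaults, but treat those exact numbers, and pull() standing in for the CRI image pull, as assumptions:

```go
// Sketch: exponential backoff between image pull attempts, the shape of the
// kubelet's ErrImagePull -> ImagePullBackOff cycle. Names are illustrative.
package main

import (
	"errors"
	"fmt"
	"time"
)

func pullWithBackoff(pull func() error, attempts int, base, maxDelay time.Duration) error {
	delay := base
	var err error
	for i := 1; i <= attempts; i++ {
		if err = pull(); err == nil {
			return nil
		}
		fmt.Printf("attempt %d: %v; backing off %s\n", i, err, delay)
		time.Sleep(delay)
		delay *= 2 // double the wait after each failure
		if delay > maxDelay {
			delay = maxDelay
		}
	}
	return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
}

func main() {
	pullErr := errors.New("rpc error: code = Canceled desc = copying config: context canceled")
	// Delays scaled down for the demo; kubelet-like values would be a 10s
	// base capped at 5m (treat those defaults as an assumption).
	err := pullWithBackoff(func() error { return pullErr }, 3, 10*time.Millisecond, 40*time.Millisecond)
	fmt.Println(err)
}
```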
Jan 21 07:18:30 crc kubenswrapper[4893]: I0121 07:18:30.815703 4893 scope.go:117] "RemoveContainer" containerID="42f0c27dfd1a3a0907583ac0a46c979184952882d080079bfceebf3287e24ed7"
Jan 21 07:18:30 crc kubenswrapper[4893]: I0121 07:18:30.816100 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42f0c27dfd1a3a0907583ac0a46c979184952882d080079bfceebf3287e24ed7"} err="failed to get container status \"42f0c27dfd1a3a0907583ac0a46c979184952882d080079bfceebf3287e24ed7\": rpc error: code = NotFound desc = could not find container \"42f0c27dfd1a3a0907583ac0a46c979184952882d080079bfceebf3287e24ed7\": container with ID starting with 42f0c27dfd1a3a0907583ac0a46c979184952882d080079bfceebf3287e24ed7 not found: ID does not exist"
Jan 21 07:18:30 crc kubenswrapper[4893]: I0121 07:18:30.816118 4893 scope.go:117] "RemoveContainer" containerID="3ce8ad622aa97f59b2421b9234cda238760c521b915e003f2148ad1e06580654"
Jan 21 07:18:30 crc kubenswrapper[4893]: I0121 07:18:30.816460 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ce8ad622aa97f59b2421b9234cda238760c521b915e003f2148ad1e06580654"} err="failed to get container status \"3ce8ad622aa97f59b2421b9234cda238760c521b915e003f2148ad1e06580654\": rpc error: code = NotFound desc = could not find container \"3ce8ad622aa97f59b2421b9234cda238760c521b915e003f2148ad1e06580654\": container with ID starting with 3ce8ad622aa97f59b2421b9234cda238760c521b915e003f2148ad1e06580654 not found: ID does not exist"
Jan 21 07:18:30 crc kubenswrapper[4893]: I0121 07:18:30.816484 4893 scope.go:117] "RemoveContainer" containerID="7108e349b8317054374147ccd214b5fd604f0c3792e4a548e64c7029b570f254"
Jan 21 07:18:30 crc kubenswrapper[4893]: I0121 07:18:30.887861 4893 scope.go:117] "RemoveContainer" containerID="033bb8d5f8aaa247363d6b97db925066e80a3d14612e7799b689c5b2a0a5b7a4"
Jan 21 07:18:30 crc kubenswrapper[4893]: I0121 07:18:30.919583 4893 scope.go:117] "RemoveContainer" containerID="26379b5a1ea652b4b0eaaa44c1d6ace582f5cd3b0ef70a04e9f969f2f0e8a7a2"
Jan 21 07:18:31 crc kubenswrapper[4893]: I0121 07:18:31.106101 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-ktqbh" event={"ID":"9a2b7c6d-f80f-4ae0-9628-30dd29e491fe","Type":"ContainerStarted","Data":"42fb31863a351beff5a671105b6d00013cbdcea51ca1264d353f781aeb836f3b"}
Jan 21 07:18:31 crc kubenswrapper[4893]: I0121 07:18:31.110797 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-qffvz" event={"ID":"3ae6eb33-b5b8-4ed9-a227-b96f365a49a3","Type":"ContainerStarted","Data":"0130a18b27166f791112ff30f58560d319f90531ebbeae689609a6794f09b9c7"}
Jan 21 07:18:31 crc kubenswrapper[4893]: I0121 07:18:31.206845 4893 generic.go:334] "Generic (PLEG): container finished" podID="15bb49fe-ded6-45cb-b094-05da46c3f9e8" containerID="0caf1583fe45f9478ec6a759fa630eecb7221966a84a7d22d76435c4e7d1fba1" exitCode=0
Jan 21 07:18:31 crc kubenswrapper[4893]: I0121 07:18:31.207083 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-m2n6w" event={"ID":"15bb49fe-ded6-45cb-b094-05da46c3f9e8","Type":"ContainerDied","Data":"0caf1583fe45f9478ec6a759fa630eecb7221966a84a7d22d76435c4e7d1fba1"}
Jan 21 07:18:31 crc kubenswrapper[4893]: I0121 07:18:31.247931 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-ktqbh" podStartSLOduration=3.417210377 podStartE2EDuration="25.247900697s" podCreationTimestamp="2026-01-21 07:18:06 +0000 UTC" firstStartedPulling="2026-01-21 07:18:07.661729665 +0000 UTC m=+1428.892075567" lastFinishedPulling="2026-01-21 07:18:29.492419985 +0000 UTC m=+1450.722765887" observedRunningTime="2026-01-21 07:18:31.207297378 +0000 UTC m=+1452.437643280" watchObservedRunningTime="2026-01-21 07:18:31.247900697 +0000 UTC m=+1452.478246599"
Jan 21 07:18:31 crc kubenswrapper[4893]: I0121 07:18:31.260457 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" event={"ID":"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a","Type":"ContainerStarted","Data":"325a0207fb4c2ec1fa7e041a8980c7916a769a35b943f6d62d67be9f953dbe2f"}
Jan 21 07:18:31 crc kubenswrapper[4893]: E0121 07:18:31.262453 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49\\\"\"" pod="openstack/cinder-db-sync-wrnx6" podUID="5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619"
Jan 21 07:18:31 crc kubenswrapper[4893]: I0121 07:18:31.267059 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vjrdh"]
Jan 21 07:18:31 crc kubenswrapper[4893]: I0121 07:18:31.298807 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 21 07:18:31 crc kubenswrapper[4893]: I0121 07:18:31.303437 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-qffvz" podStartSLOduration=2.293952565 podStartE2EDuration="25.303285086s" podCreationTimestamp="2026-01-21 07:18:06 +0000 UTC" firstStartedPulling="2026-01-21 07:18:07.619394725 +0000 UTC m=+1428.849740627" lastFinishedPulling="2026-01-21 07:18:30.628727236 +0000 UTC m=+1451.859073148" observedRunningTime="2026-01-21 07:18:31.257221108 +0000 UTC m=+1452.487567010" watchObservedRunningTime="2026-01-21 07:18:31.303285086 +0000 UTC m=+1452.533630988"
Jan 21 07:18:31 crc kubenswrapper[4893]: I0121 07:18:31.328463 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 21 07:18:31 crc kubenswrapper[4893]: W0121 07:18:31.528952 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13231972_103e_4970_845c_5aba8c59d68f.slice/crio-b9bd19d56f36e52d663dde0413202326b0bff8400b15f42de6928d88b4bc579d WatchSource:0}: Error finding container b9bd19d56f36e52d663dde0413202326b0bff8400b15f42de6928d88b4bc579d: Status 404 returned error can't find the container with id b9bd19d56f36e52d663dde0413202326b0bff8400b15f42de6928d88b4bc579d
Jan 21 07:18:31 crc kubenswrapper[4893]: I0121 07:18:31.625079 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34db2d2f-d623-4567-b27b-12b205e66587" path="/var/lib/kubelet/pods/34db2d2f-d623-4567-b27b-12b205e66587/volumes"
Jan 21 07:18:32 crc kubenswrapper[4893]: I0121 07:18:32.273041 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1e305da0-2d2a-44c3-9844-c2071e281918","Type":"ContainerStarted","Data":"ed40e448ab7c7447aa250c02a56baf631440a86cb5d559756ac02e43a4065091"}
Jan 21 07:18:32 crc kubenswrapper[4893]: I0121 07:18:32.276544 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vjrdh" event={"ID":"13231972-103e-4970-845c-5aba8c59d68f","Type":"ContainerStarted","Data":"fc3bc80e161dc6978eb1141fe2bc45dcd6639ddf7a96d78648df93508e6f8b96"}
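The pod_startup_latency_tracker records above carry enough data to recheck themselves: for placement-db-sync-ktqbh, podStartE2EDuration (25.247900697s) is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration (3.417210377s) is that minus the image pull window (lastFinishedPulling minus firstStartedPulling, 21.830690320s); when nothing was pulled, the pulling timestamps are the zero time and the two durations coincide, as in the keystone-bootstrap-vjrdh record. A sketch that recomputes those numbers from the logged strings; the interpretation is inferred from the arithmetic, which checks out to the nanosecond:

```go
// Sketch: recompute the placement-db-sync latency record from its timestamps.
package main

import (
	"fmt"
	"strings"
	"time"
)

// Layout for Go's default time.Time string form, e.g. "2026-01-21 07:18:06 +0000 UTC".
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func parse(ts string) time.Time {
	if i := strings.Index(ts, " m=+"); i >= 0 {
		ts = ts[:i] // drop the Go monotonic-clock suffix
	}
	t, err := time.Parse(layout, ts)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := parse("2026-01-21 07:18:06 +0000 UTC")
	firstPull := parse("2026-01-21 07:18:07.661729665 +0000 UTC m=+1428.892075567")
	lastPull := parse("2026-01-21 07:18:29.492419985 +0000 UTC m=+1450.722765887")
	running := parse("2026-01-21 07:18:31.247900697 +0000 UTC m=+1452.478246599") // watchObservedRunningTime

	e2e := running.Sub(created)
	slo := e2e - lastPull.Sub(firstPull)
	fmt.Println("podStartE2EDuration:", e2e) // 25.247900697s
	fmt.Println("podStartSLOduration:", slo) // 3.417210377s
}
```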
event={"ID":"13231972-103e-4970-845c-5aba8c59d68f","Type":"ContainerStarted","Data":"fc3bc80e161dc6978eb1141fe2bc45dcd6639ddf7a96d78648df93508e6f8b96"} Jan 21 07:18:32 crc kubenswrapper[4893]: I0121 07:18:32.276594 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vjrdh" event={"ID":"13231972-103e-4970-845c-5aba8c59d68f","Type":"ContainerStarted","Data":"b9bd19d56f36e52d663dde0413202326b0bff8400b15f42de6928d88b4bc579d"} Jan 21 07:18:32 crc kubenswrapper[4893]: I0121 07:18:32.280701 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"12e11571-a021-4df2-a0da-69f56335a8c8","Type":"ContainerStarted","Data":"b8ee096a91a103ddcd42ac9bac6ae684868e8a9bc43404e2173501e612512c0c"} Jan 21 07:18:32 crc kubenswrapper[4893]: I0121 07:18:32.282328 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4bbed581-8e85-4849-82d3-ad007e53a1b0","Type":"ContainerStarted","Data":"1fb698e59195ab1db02efd3fa80c373c659becf537bcd616d8bade0b1a11e6ed"} Jan 21 07:18:32 crc kubenswrapper[4893]: I0121 07:18:32.326579 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-vjrdh" podStartSLOduration=16.326553874 podStartE2EDuration="16.326553874s" podCreationTimestamp="2026-01-21 07:18:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:18:32.320016504 +0000 UTC m=+1453.550362426" watchObservedRunningTime="2026-01-21 07:18:32.326553874 +0000 UTC m=+1453.556899776" Jan 21 07:18:32 crc kubenswrapper[4893]: I0121 07:18:32.643533 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-m2n6w" Jan 21 07:18:32 crc kubenswrapper[4893]: I0121 07:18:32.821196 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/15bb49fe-ded6-45cb-b094-05da46c3f9e8-config\") pod \"15bb49fe-ded6-45cb-b094-05da46c3f9e8\" (UID: \"15bb49fe-ded6-45cb-b094-05da46c3f9e8\") " Jan 21 07:18:32 crc kubenswrapper[4893]: I0121 07:18:32.821302 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15bb49fe-ded6-45cb-b094-05da46c3f9e8-combined-ca-bundle\") pod \"15bb49fe-ded6-45cb-b094-05da46c3f9e8\" (UID: \"15bb49fe-ded6-45cb-b094-05da46c3f9e8\") " Jan 21 07:18:32 crc kubenswrapper[4893]: I0121 07:18:32.823178 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2lcb\" (UniqueName: \"kubernetes.io/projected/15bb49fe-ded6-45cb-b094-05da46c3f9e8-kube-api-access-g2lcb\") pod \"15bb49fe-ded6-45cb-b094-05da46c3f9e8\" (UID: \"15bb49fe-ded6-45cb-b094-05da46c3f9e8\") " Jan 21 07:18:32 crc kubenswrapper[4893]: I0121 07:18:32.827742 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15bb49fe-ded6-45cb-b094-05da46c3f9e8-kube-api-access-g2lcb" (OuterVolumeSpecName: "kube-api-access-g2lcb") pod "15bb49fe-ded6-45cb-b094-05da46c3f9e8" (UID: "15bb49fe-ded6-45cb-b094-05da46c3f9e8"). InnerVolumeSpecName "kube-api-access-g2lcb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:18:32 crc kubenswrapper[4893]: I0121 07:18:32.851607 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15bb49fe-ded6-45cb-b094-05da46c3f9e8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15bb49fe-ded6-45cb-b094-05da46c3f9e8" (UID: "15bb49fe-ded6-45cb-b094-05da46c3f9e8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:18:32 crc kubenswrapper[4893]: I0121 07:18:32.865641 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15bb49fe-ded6-45cb-b094-05da46c3f9e8-config" (OuterVolumeSpecName: "config") pod "15bb49fe-ded6-45cb-b094-05da46c3f9e8" (UID: "15bb49fe-ded6-45cb-b094-05da46c3f9e8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:18:32 crc kubenswrapper[4893]: I0121 07:18:32.928827 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15bb49fe-ded6-45cb-b094-05da46c3f9e8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:32 crc kubenswrapper[4893]: I0121 07:18:32.928872 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2lcb\" (UniqueName: \"kubernetes.io/projected/15bb49fe-ded6-45cb-b094-05da46c3f9e8-kube-api-access-g2lcb\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:32 crc kubenswrapper[4893]: I0121 07:18:32.928883 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/15bb49fe-ded6-45cb-b094-05da46c3f9e8-config\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.318991 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-m2n6w" event={"ID":"15bb49fe-ded6-45cb-b094-05da46c3f9e8","Type":"ContainerDied","Data":"6de305cc6b456bebb453a123d4c0fd98b56145224b1c188c53ada3bf5982660d"} Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.319282 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6de305cc6b456bebb453a123d4c0fd98b56145224b1c188c53ada3bf5982660d" Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.319375 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-m2n6w" Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.331131 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4bbed581-8e85-4849-82d3-ad007e53a1b0","Type":"ContainerStarted","Data":"26ff0aa2f289ea6cf7335eb443663d0785fb0d3634a04da03df4808662de7719"} Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.331178 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4bbed581-8e85-4849-82d3-ad007e53a1b0","Type":"ContainerStarted","Data":"79ed51f998cd71dfab875d8fdd10cee39d10fb41596a8a2930e7c74edc7a67f1"} Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.331309 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4bbed581-8e85-4849-82d3-ad007e53a1b0" containerName="glance-log" containerID="cri-o://79ed51f998cd71dfab875d8fdd10cee39d10fb41596a8a2930e7c74edc7a67f1" gracePeriod=30 Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.331875 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4bbed581-8e85-4849-82d3-ad007e53a1b0" containerName="glance-httpd" containerID="cri-o://26ff0aa2f289ea6cf7335eb443663d0785fb0d3634a04da03df4808662de7719" gracePeriod=30 Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.336111 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1e305da0-2d2a-44c3-9844-c2071e281918" containerName="glance-log" containerID="cri-o://67de82f900090587bdfb8d7ef28c549343e4ef9c3ab179b851da0275c0864e1b" gracePeriod=30 Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.336229 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1e305da0-2d2a-44c3-9844-c2071e281918" containerName="glance-httpd" containerID="cri-o://9ff274f89bb0e9afe789d12126ba95ea8fd780eb570f0752d37da3820c75ef49" gracePeriod=30 Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.336323 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1e305da0-2d2a-44c3-9844-c2071e281918","Type":"ContainerStarted","Data":"9ff274f89bb0e9afe789d12126ba95ea8fd780eb570f0752d37da3820c75ef49"} Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.336342 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1e305da0-2d2a-44c3-9844-c2071e281918","Type":"ContainerStarted","Data":"67de82f900090587bdfb8d7ef28c549343e4ef9c3ab179b851da0275c0864e1b"} Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.369376 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=20.36935166 podStartE2EDuration="20.36935166s" podCreationTimestamp="2026-01-21 07:18:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:18:33.361388908 +0000 UTC m=+1454.591734840" watchObservedRunningTime="2026-01-21 07:18:33.36935166 +0000 UTC m=+1454.599697562" Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.395311 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=21.395262052 
podStartE2EDuration="21.395262052s" podCreationTimestamp="2026-01-21 07:18:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:18:33.383733237 +0000 UTC m=+1454.614079149" watchObservedRunningTime="2026-01-21 07:18:33.395262052 +0000 UTC m=+1454.625607954" Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.590444 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-685444497c-vnjsn"] Jan 21 07:18:33 crc kubenswrapper[4893]: E0121 07:18:33.590907 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34db2d2f-d623-4567-b27b-12b205e66587" containerName="dnsmasq-dns" Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.590920 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="34db2d2f-d623-4567-b27b-12b205e66587" containerName="dnsmasq-dns" Jan 21 07:18:33 crc kubenswrapper[4893]: E0121 07:18:33.590944 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34db2d2f-d623-4567-b27b-12b205e66587" containerName="init" Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.590951 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="34db2d2f-d623-4567-b27b-12b205e66587" containerName="init" Jan 21 07:18:33 crc kubenswrapper[4893]: E0121 07:18:33.590983 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15bb49fe-ded6-45cb-b094-05da46c3f9e8" containerName="neutron-db-sync" Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.590990 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="15bb49fe-ded6-45cb-b094-05da46c3f9e8" containerName="neutron-db-sync" Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.591155 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="15bb49fe-ded6-45cb-b094-05da46c3f9e8" containerName="neutron-db-sync" Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.591172 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="34db2d2f-d623-4567-b27b-12b205e66587" containerName="dnsmasq-dns" Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.593617 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-685444497c-vnjsn" Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.620524 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-685444497c-vnjsn"] Jan 21 07:18:33 crc kubenswrapper[4893]: E0121 07:18:33.643245 4893 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e305da0_2d2a_44c3_9844_c2071e281918.slice/crio-67de82f900090587bdfb8d7ef28c549343e4ef9c3ab179b851da0275c0864e1b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15bb49fe_ded6_45cb_b094_05da46c3f9e8.slice/crio-6de305cc6b456bebb453a123d4c0fd98b56145224b1c188c53ada3bf5982660d\": RecentStats: unable to find data in memory cache]" Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.654450 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-c984d74d4-p75q9"] Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.656144 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-c984d74d4-p75q9" Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.664879 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c984d74d4-p75q9"] Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.669803 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-l8jtn" Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.670157 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.670413 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.670596 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.698363 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5bfr\" (UniqueName: \"kubernetes.io/projected/d7e43898-c671-442d-97dd-b93c958c550a-kube-api-access-g5bfr\") pod \"dnsmasq-dns-685444497c-vnjsn\" (UID: \"d7e43898-c671-442d-97dd-b93c958c550a\") " pod="openstack/dnsmasq-dns-685444497c-vnjsn" Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.698906 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7e43898-c671-442d-97dd-b93c958c550a-dns-svc\") pod \"dnsmasq-dns-685444497c-vnjsn\" (UID: \"d7e43898-c671-442d-97dd-b93c958c550a\") " pod="openstack/dnsmasq-dns-685444497c-vnjsn" Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.699180 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7e43898-c671-442d-97dd-b93c958c550a-ovsdbserver-nb\") pod \"dnsmasq-dns-685444497c-vnjsn\" (UID: \"d7e43898-c671-442d-97dd-b93c958c550a\") " pod="openstack/dnsmasq-dns-685444497c-vnjsn" Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.699487 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7e43898-c671-442d-97dd-b93c958c550a-ovsdbserver-sb\") pod \"dnsmasq-dns-685444497c-vnjsn\" (UID: \"d7e43898-c671-442d-97dd-b93c958c550a\") " pod="openstack/dnsmasq-dns-685444497c-vnjsn" Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.700036 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7e43898-c671-442d-97dd-b93c958c550a-config\") pod \"dnsmasq-dns-685444497c-vnjsn\" (UID: \"d7e43898-c671-442d-97dd-b93c958c550a\") " pod="openstack/dnsmasq-dns-685444497c-vnjsn" Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.700165 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d7e43898-c671-442d-97dd-b93c958c550a-dns-swift-storage-0\") pod \"dnsmasq-dns-685444497c-vnjsn\" (UID: \"d7e43898-c671-442d-97dd-b93c958c550a\") " pod="openstack/dnsmasq-dns-685444497c-vnjsn" Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.802225 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/d7e43898-c671-442d-97dd-b93c958c550a-ovsdbserver-nb\") pod \"dnsmasq-dns-685444497c-vnjsn\" (UID: \"d7e43898-c671-442d-97dd-b93c958c550a\") " pod="openstack/dnsmasq-dns-685444497c-vnjsn" Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.802334 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr8j6\" (UniqueName: \"kubernetes.io/projected/836f7dac-b3a4-4a00-bc98-868b0bbe1ebb-kube-api-access-qr8j6\") pod \"neutron-c984d74d4-p75q9\" (UID: \"836f7dac-b3a4-4a00-bc98-868b0bbe1ebb\") " pod="openstack/neutron-c984d74d4-p75q9" Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.802374 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7e43898-c671-442d-97dd-b93c958c550a-ovsdbserver-sb\") pod \"dnsmasq-dns-685444497c-vnjsn\" (UID: \"d7e43898-c671-442d-97dd-b93c958c550a\") " pod="openstack/dnsmasq-dns-685444497c-vnjsn" Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.802424 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7e43898-c671-442d-97dd-b93c958c550a-config\") pod \"dnsmasq-dns-685444497c-vnjsn\" (UID: \"d7e43898-c671-442d-97dd-b93c958c550a\") " pod="openstack/dnsmasq-dns-685444497c-vnjsn" Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.802454 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/836f7dac-b3a4-4a00-bc98-868b0bbe1ebb-ovndb-tls-certs\") pod \"neutron-c984d74d4-p75q9\" (UID: \"836f7dac-b3a4-4a00-bc98-868b0bbe1ebb\") " pod="openstack/neutron-c984d74d4-p75q9" Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.802484 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d7e43898-c671-442d-97dd-b93c958c550a-dns-swift-storage-0\") pod \"dnsmasq-dns-685444497c-vnjsn\" (UID: \"d7e43898-c671-442d-97dd-b93c958c550a\") " pod="openstack/dnsmasq-dns-685444497c-vnjsn" Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.802508 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836f7dac-b3a4-4a00-bc98-868b0bbe1ebb-combined-ca-bundle\") pod \"neutron-c984d74d4-p75q9\" (UID: \"836f7dac-b3a4-4a00-bc98-868b0bbe1ebb\") " pod="openstack/neutron-c984d74d4-p75q9" Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.802537 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/836f7dac-b3a4-4a00-bc98-868b0bbe1ebb-httpd-config\") pod \"neutron-c984d74d4-p75q9\" (UID: \"836f7dac-b3a4-4a00-bc98-868b0bbe1ebb\") " pod="openstack/neutron-c984d74d4-p75q9" Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.802564 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/836f7dac-b3a4-4a00-bc98-868b0bbe1ebb-config\") pod \"neutron-c984d74d4-p75q9\" (UID: \"836f7dac-b3a4-4a00-bc98-868b0bbe1ebb\") " pod="openstack/neutron-c984d74d4-p75q9" Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.802615 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5bfr\" (UniqueName: 
\"kubernetes.io/projected/d7e43898-c671-442d-97dd-b93c958c550a-kube-api-access-g5bfr\") pod \"dnsmasq-dns-685444497c-vnjsn\" (UID: \"d7e43898-c671-442d-97dd-b93c958c550a\") " pod="openstack/dnsmasq-dns-685444497c-vnjsn" Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.802639 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7e43898-c671-442d-97dd-b93c958c550a-dns-svc\") pod \"dnsmasq-dns-685444497c-vnjsn\" (UID: \"d7e43898-c671-442d-97dd-b93c958c550a\") " pod="openstack/dnsmasq-dns-685444497c-vnjsn" Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.804180 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7e43898-c671-442d-97dd-b93c958c550a-ovsdbserver-sb\") pod \"dnsmasq-dns-685444497c-vnjsn\" (UID: \"d7e43898-c671-442d-97dd-b93c958c550a\") " pod="openstack/dnsmasq-dns-685444497c-vnjsn" Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.804321 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7e43898-c671-442d-97dd-b93c958c550a-config\") pod \"dnsmasq-dns-685444497c-vnjsn\" (UID: \"d7e43898-c671-442d-97dd-b93c958c550a\") " pod="openstack/dnsmasq-dns-685444497c-vnjsn" Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.805739 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7e43898-c671-442d-97dd-b93c958c550a-ovsdbserver-nb\") pod \"dnsmasq-dns-685444497c-vnjsn\" (UID: \"d7e43898-c671-442d-97dd-b93c958c550a\") " pod="openstack/dnsmasq-dns-685444497c-vnjsn" Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.806638 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d7e43898-c671-442d-97dd-b93c958c550a-dns-swift-storage-0\") pod \"dnsmasq-dns-685444497c-vnjsn\" (UID: \"d7e43898-c671-442d-97dd-b93c958c550a\") " pod="openstack/dnsmasq-dns-685444497c-vnjsn" Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.806905 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7e43898-c671-442d-97dd-b93c958c550a-dns-svc\") pod \"dnsmasq-dns-685444497c-vnjsn\" (UID: \"d7e43898-c671-442d-97dd-b93c958c550a\") " pod="openstack/dnsmasq-dns-685444497c-vnjsn" Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.837264 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5bfr\" (UniqueName: \"kubernetes.io/projected/d7e43898-c671-442d-97dd-b93c958c550a-kube-api-access-g5bfr\") pod \"dnsmasq-dns-685444497c-vnjsn\" (UID: \"d7e43898-c671-442d-97dd-b93c958c550a\") " pod="openstack/dnsmasq-dns-685444497c-vnjsn" Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.899210 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-685444497c-vnjsn" Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.903551 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/836f7dac-b3a4-4a00-bc98-868b0bbe1ebb-ovndb-tls-certs\") pod \"neutron-c984d74d4-p75q9\" (UID: \"836f7dac-b3a4-4a00-bc98-868b0bbe1ebb\") " pod="openstack/neutron-c984d74d4-p75q9" Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.903615 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836f7dac-b3a4-4a00-bc98-868b0bbe1ebb-combined-ca-bundle\") pod \"neutron-c984d74d4-p75q9\" (UID: \"836f7dac-b3a4-4a00-bc98-868b0bbe1ebb\") " pod="openstack/neutron-c984d74d4-p75q9" Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.903647 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/836f7dac-b3a4-4a00-bc98-868b0bbe1ebb-httpd-config\") pod \"neutron-c984d74d4-p75q9\" (UID: \"836f7dac-b3a4-4a00-bc98-868b0bbe1ebb\") " pod="openstack/neutron-c984d74d4-p75q9" Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.908871 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/836f7dac-b3a4-4a00-bc98-868b0bbe1ebb-config\") pod \"neutron-c984d74d4-p75q9\" (UID: \"836f7dac-b3a4-4a00-bc98-868b0bbe1ebb\") " pod="openstack/neutron-c984d74d4-p75q9" Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.909301 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qr8j6\" (UniqueName: \"kubernetes.io/projected/836f7dac-b3a4-4a00-bc98-868b0bbe1ebb-kube-api-access-qr8j6\") pod \"neutron-c984d74d4-p75q9\" (UID: \"836f7dac-b3a4-4a00-bc98-868b0bbe1ebb\") " pod="openstack/neutron-c984d74d4-p75q9" Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.910690 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/836f7dac-b3a4-4a00-bc98-868b0bbe1ebb-ovndb-tls-certs\") pod \"neutron-c984d74d4-p75q9\" (UID: \"836f7dac-b3a4-4a00-bc98-868b0bbe1ebb\") " pod="openstack/neutron-c984d74d4-p75q9" Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.915640 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/836f7dac-b3a4-4a00-bc98-868b0bbe1ebb-httpd-config\") pod \"neutron-c984d74d4-p75q9\" (UID: \"836f7dac-b3a4-4a00-bc98-868b0bbe1ebb\") " pod="openstack/neutron-c984d74d4-p75q9" Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.917141 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836f7dac-b3a4-4a00-bc98-868b0bbe1ebb-combined-ca-bundle\") pod \"neutron-c984d74d4-p75q9\" (UID: \"836f7dac-b3a4-4a00-bc98-868b0bbe1ebb\") " pod="openstack/neutron-c984d74d4-p75q9" Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.921960 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/836f7dac-b3a4-4a00-bc98-868b0bbe1ebb-config\") pod \"neutron-c984d74d4-p75q9\" (UID: \"836f7dac-b3a4-4a00-bc98-868b0bbe1ebb\") " pod="openstack/neutron-c984d74d4-p75q9" Jan 21 07:18:33 crc kubenswrapper[4893]: I0121 07:18:33.935071 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-qr8j6\" (UniqueName: \"kubernetes.io/projected/836f7dac-b3a4-4a00-bc98-868b0bbe1ebb-kube-api-access-qr8j6\") pod \"neutron-c984d74d4-p75q9\" (UID: \"836f7dac-b3a4-4a00-bc98-868b0bbe1ebb\") " pod="openstack/neutron-c984d74d4-p75q9" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.200334 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.212372 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c984d74d4-p75q9" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.214712 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4ts5\" (UniqueName: \"kubernetes.io/projected/1e305da0-2d2a-44c3-9844-c2071e281918-kube-api-access-f4ts5\") pod \"1e305da0-2d2a-44c3-9844-c2071e281918\" (UID: \"1e305da0-2d2a-44c3-9844-c2071e281918\") " Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.214788 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1e305da0-2d2a-44c3-9844-c2071e281918-httpd-run\") pod \"1e305da0-2d2a-44c3-9844-c2071e281918\" (UID: \"1e305da0-2d2a-44c3-9844-c2071e281918\") " Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.214826 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e305da0-2d2a-44c3-9844-c2071e281918-combined-ca-bundle\") pod \"1e305da0-2d2a-44c3-9844-c2071e281918\" (UID: \"1e305da0-2d2a-44c3-9844-c2071e281918\") " Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.214945 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e305da0-2d2a-44c3-9844-c2071e281918-config-data\") pod \"1e305da0-2d2a-44c3-9844-c2071e281918\" (UID: \"1e305da0-2d2a-44c3-9844-c2071e281918\") " Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.214969 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e305da0-2d2a-44c3-9844-c2071e281918-logs\") pod \"1e305da0-2d2a-44c3-9844-c2071e281918\" (UID: \"1e305da0-2d2a-44c3-9844-c2071e281918\") " Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.215025 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"1e305da0-2d2a-44c3-9844-c2071e281918\" (UID: \"1e305da0-2d2a-44c3-9844-c2071e281918\") " Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.218770 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e305da0-2d2a-44c3-9844-c2071e281918-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1e305da0-2d2a-44c3-9844-c2071e281918" (UID: "1e305da0-2d2a-44c3-9844-c2071e281918"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.218946 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e305da0-2d2a-44c3-9844-c2071e281918-logs" (OuterVolumeSpecName: "logs") pod "1e305da0-2d2a-44c3-9844-c2071e281918" (UID: "1e305da0-2d2a-44c3-9844-c2071e281918"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.219035 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e305da0-2d2a-44c3-9844-c2071e281918-scripts\") pod \"1e305da0-2d2a-44c3-9844-c2071e281918\" (UID: \"1e305da0-2d2a-44c3-9844-c2071e281918\") " Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.236546 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e305da0-2d2a-44c3-9844-c2071e281918-scripts" (OuterVolumeSpecName: "scripts") pod "1e305da0-2d2a-44c3-9844-c2071e281918" (UID: "1e305da0-2d2a-44c3-9844-c2071e281918"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.236683 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e305da0-2d2a-44c3-9844-c2071e281918-kube-api-access-f4ts5" (OuterVolumeSpecName: "kube-api-access-f4ts5") pod "1e305da0-2d2a-44c3-9844-c2071e281918" (UID: "1e305da0-2d2a-44c3-9844-c2071e281918"). InnerVolumeSpecName "kube-api-access-f4ts5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.236928 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "1e305da0-2d2a-44c3-9844-c2071e281918" (UID: "1e305da0-2d2a-44c3-9844-c2071e281918"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.239404 4893 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e305da0-2d2a-44c3-9844-c2071e281918-logs\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.239446 4893 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.239456 4893 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e305da0-2d2a-44c3-9844-c2071e281918-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.239468 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4ts5\" (UniqueName: \"kubernetes.io/projected/1e305da0-2d2a-44c3-9844-c2071e281918-kube-api-access-f4ts5\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.239479 4893 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1e305da0-2d2a-44c3-9844-c2071e281918-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.280195 4893 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.291347 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e305da0-2d2a-44c3-9844-c2071e281918-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e305da0-2d2a-44c3-9844-c2071e281918" (UID: 
"1e305da0-2d2a-44c3-9844-c2071e281918"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.340940 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e305da0-2d2a-44c3-9844-c2071e281918-config-data" (OuterVolumeSpecName: "config-data") pod "1e305da0-2d2a-44c3-9844-c2071e281918" (UID: "1e305da0-2d2a-44c3-9844-c2071e281918"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.341216 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e305da0-2d2a-44c3-9844-c2071e281918-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.341236 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e305da0-2d2a-44c3-9844-c2071e281918-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.341250 4893 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.390057 4893 generic.go:334] "Generic (PLEG): container finished" podID="4bbed581-8e85-4849-82d3-ad007e53a1b0" containerID="26ff0aa2f289ea6cf7335eb443663d0785fb0d3634a04da03df4808662de7719" exitCode=0 Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.390083 4893 generic.go:334] "Generic (PLEG): container finished" podID="4bbed581-8e85-4849-82d3-ad007e53a1b0" containerID="79ed51f998cd71dfab875d8fdd10cee39d10fb41596a8a2930e7c74edc7a67f1" exitCode=143 Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.390132 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4bbed581-8e85-4849-82d3-ad007e53a1b0","Type":"ContainerDied","Data":"26ff0aa2f289ea6cf7335eb443663d0785fb0d3634a04da03df4808662de7719"} Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.390158 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4bbed581-8e85-4849-82d3-ad007e53a1b0","Type":"ContainerDied","Data":"79ed51f998cd71dfab875d8fdd10cee39d10fb41596a8a2930e7c74edc7a67f1"} Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.400396 4893 generic.go:334] "Generic (PLEG): container finished" podID="1e305da0-2d2a-44c3-9844-c2071e281918" containerID="9ff274f89bb0e9afe789d12126ba95ea8fd780eb570f0752d37da3820c75ef49" exitCode=0 Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.400429 4893 generic.go:334] "Generic (PLEG): container finished" podID="1e305da0-2d2a-44c3-9844-c2071e281918" containerID="67de82f900090587bdfb8d7ef28c549343e4ef9c3ab179b851da0275c0864e1b" exitCode=143 Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.400453 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1e305da0-2d2a-44c3-9844-c2071e281918","Type":"ContainerDied","Data":"9ff274f89bb0e9afe789d12126ba95ea8fd780eb570f0752d37da3820c75ef49"} Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.400460 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.400482 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1e305da0-2d2a-44c3-9844-c2071e281918","Type":"ContainerDied","Data":"67de82f900090587bdfb8d7ef28c549343e4ef9c3ab179b851da0275c0864e1b"} Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.400501 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1e305da0-2d2a-44c3-9844-c2071e281918","Type":"ContainerDied","Data":"ed40e448ab7c7447aa250c02a56baf631440a86cb5d559756ac02e43a4065091"} Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.400519 4893 scope.go:117] "RemoveContainer" containerID="9ff274f89bb0e9afe789d12126ba95ea8fd780eb570f0752d37da3820c75ef49" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.481104 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.497398 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.510925 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-685444497c-vnjsn"] Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.517654 4893 scope.go:117] "RemoveContainer" containerID="67de82f900090587bdfb8d7ef28c549343e4ef9c3ab179b851da0275c0864e1b" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.521439 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 07:18:34 crc kubenswrapper[4893]: E0121 07:18:34.523543 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e305da0-2d2a-44c3-9844-c2071e281918" containerName="glance-httpd" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.523578 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e305da0-2d2a-44c3-9844-c2071e281918" containerName="glance-httpd" Jan 21 07:18:34 crc kubenswrapper[4893]: E0121 07:18:34.523616 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e305da0-2d2a-44c3-9844-c2071e281918" containerName="glance-log" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.523625 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e305da0-2d2a-44c3-9844-c2071e281918" containerName="glance-log" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.523856 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e305da0-2d2a-44c3-9844-c2071e281918" containerName="glance-log" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.523882 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e305da0-2d2a-44c3-9844-c2071e281918" containerName="glance-httpd" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.525053 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.527791 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.528072 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.532624 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.568771 4893 scope.go:117] "RemoveContainer" containerID="9ff274f89bb0e9afe789d12126ba95ea8fd780eb570f0752d37da3820c75ef49" Jan 21 07:18:34 crc kubenswrapper[4893]: E0121 07:18:34.569235 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ff274f89bb0e9afe789d12126ba95ea8fd780eb570f0752d37da3820c75ef49\": container with ID starting with 9ff274f89bb0e9afe789d12126ba95ea8fd780eb570f0752d37da3820c75ef49 not found: ID does not exist" containerID="9ff274f89bb0e9afe789d12126ba95ea8fd780eb570f0752d37da3820c75ef49" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.569323 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ff274f89bb0e9afe789d12126ba95ea8fd780eb570f0752d37da3820c75ef49"} err="failed to get container status \"9ff274f89bb0e9afe789d12126ba95ea8fd780eb570f0752d37da3820c75ef49\": rpc error: code = NotFound desc = could not find container \"9ff274f89bb0e9afe789d12126ba95ea8fd780eb570f0752d37da3820c75ef49\": container with ID starting with 9ff274f89bb0e9afe789d12126ba95ea8fd780eb570f0752d37da3820c75ef49 not found: ID does not exist" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.569370 4893 scope.go:117] "RemoveContainer" containerID="67de82f900090587bdfb8d7ef28c549343e4ef9c3ab179b851da0275c0864e1b" Jan 21 07:18:34 crc kubenswrapper[4893]: E0121 07:18:34.570763 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67de82f900090587bdfb8d7ef28c549343e4ef9c3ab179b851da0275c0864e1b\": container with ID starting with 67de82f900090587bdfb8d7ef28c549343e4ef9c3ab179b851da0275c0864e1b not found: ID does not exist" containerID="67de82f900090587bdfb8d7ef28c549343e4ef9c3ab179b851da0275c0864e1b" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.570812 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67de82f900090587bdfb8d7ef28c549343e4ef9c3ab179b851da0275c0864e1b"} err="failed to get container status \"67de82f900090587bdfb8d7ef28c549343e4ef9c3ab179b851da0275c0864e1b\": rpc error: code = NotFound desc = could not find container \"67de82f900090587bdfb8d7ef28c549343e4ef9c3ab179b851da0275c0864e1b\": container with ID starting with 67de82f900090587bdfb8d7ef28c549343e4ef9c3ab179b851da0275c0864e1b not found: ID does not exist" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.570849 4893 scope.go:117] "RemoveContainer" containerID="9ff274f89bb0e9afe789d12126ba95ea8fd780eb570f0752d37da3820c75ef49" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.571195 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ff274f89bb0e9afe789d12126ba95ea8fd780eb570f0752d37da3820c75ef49"} err="failed to get container status 
\"9ff274f89bb0e9afe789d12126ba95ea8fd780eb570f0752d37da3820c75ef49\": rpc error: code = NotFound desc = could not find container \"9ff274f89bb0e9afe789d12126ba95ea8fd780eb570f0752d37da3820c75ef49\": container with ID starting with 9ff274f89bb0e9afe789d12126ba95ea8fd780eb570f0752d37da3820c75ef49 not found: ID does not exist" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.571229 4893 scope.go:117] "RemoveContainer" containerID="67de82f900090587bdfb8d7ef28c549343e4ef9c3ab179b851da0275c0864e1b" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.575204 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67de82f900090587bdfb8d7ef28c549343e4ef9c3ab179b851da0275c0864e1b"} err="failed to get container status \"67de82f900090587bdfb8d7ef28c549343e4ef9c3ab179b851da0275c0864e1b\": rpc error: code = NotFound desc = could not find container \"67de82f900090587bdfb8d7ef28c549343e4ef9c3ab179b851da0275c0864e1b\": container with ID starting with 67de82f900090587bdfb8d7ef28c549343e4ef9c3ab179b851da0275c0864e1b not found: ID does not exist" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.650131 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtt65\" (UniqueName: \"kubernetes.io/projected/5c301aea-eebc-47b8-9b2d-1feeeaf939d5-kube-api-access-jtt65\") pod \"glance-default-external-api-0\" (UID: \"5c301aea-eebc-47b8-9b2d-1feeeaf939d5\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.650477 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c301aea-eebc-47b8-9b2d-1feeeaf939d5-logs\") pod \"glance-default-external-api-0\" (UID: \"5c301aea-eebc-47b8-9b2d-1feeeaf939d5\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.650534 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c301aea-eebc-47b8-9b2d-1feeeaf939d5-config-data\") pod \"glance-default-external-api-0\" (UID: \"5c301aea-eebc-47b8-9b2d-1feeeaf939d5\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.650564 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c301aea-eebc-47b8-9b2d-1feeeaf939d5-scripts\") pod \"glance-default-external-api-0\" (UID: \"5c301aea-eebc-47b8-9b2d-1feeeaf939d5\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.650595 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c301aea-eebc-47b8-9b2d-1feeeaf939d5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5c301aea-eebc-47b8-9b2d-1feeeaf939d5\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.650864 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c301aea-eebc-47b8-9b2d-1feeeaf939d5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5c301aea-eebc-47b8-9b2d-1feeeaf939d5\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:34 crc 
kubenswrapper[4893]: I0121 07:18:34.650990 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5c301aea-eebc-47b8-9b2d-1feeeaf939d5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5c301aea-eebc-47b8-9b2d-1feeeaf939d5\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.651372 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"5c301aea-eebc-47b8-9b2d-1feeeaf939d5\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.754261 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"5c301aea-eebc-47b8-9b2d-1feeeaf939d5\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.754330 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtt65\" (UniqueName: \"kubernetes.io/projected/5c301aea-eebc-47b8-9b2d-1feeeaf939d5-kube-api-access-jtt65\") pod \"glance-default-external-api-0\" (UID: \"5c301aea-eebc-47b8-9b2d-1feeeaf939d5\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.754364 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c301aea-eebc-47b8-9b2d-1feeeaf939d5-logs\") pod \"glance-default-external-api-0\" (UID: \"5c301aea-eebc-47b8-9b2d-1feeeaf939d5\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.754401 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c301aea-eebc-47b8-9b2d-1feeeaf939d5-config-data\") pod \"glance-default-external-api-0\" (UID: \"5c301aea-eebc-47b8-9b2d-1feeeaf939d5\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.754433 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c301aea-eebc-47b8-9b2d-1feeeaf939d5-scripts\") pod \"glance-default-external-api-0\" (UID: \"5c301aea-eebc-47b8-9b2d-1feeeaf939d5\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.754453 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c301aea-eebc-47b8-9b2d-1feeeaf939d5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5c301aea-eebc-47b8-9b2d-1feeeaf939d5\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.754688 4893 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"5c301aea-eebc-47b8-9b2d-1feeeaf939d5\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.755170 4893 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c301aea-eebc-47b8-9b2d-1feeeaf939d5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5c301aea-eebc-47b8-9b2d-1feeeaf939d5\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.755302 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5c301aea-eebc-47b8-9b2d-1feeeaf939d5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5c301aea-eebc-47b8-9b2d-1feeeaf939d5\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.755988 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c301aea-eebc-47b8-9b2d-1feeeaf939d5-logs\") pod \"glance-default-external-api-0\" (UID: \"5c301aea-eebc-47b8-9b2d-1feeeaf939d5\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.756960 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5c301aea-eebc-47b8-9b2d-1feeeaf939d5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5c301aea-eebc-47b8-9b2d-1feeeaf939d5\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.760480 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c301aea-eebc-47b8-9b2d-1feeeaf939d5-scripts\") pod \"glance-default-external-api-0\" (UID: \"5c301aea-eebc-47b8-9b2d-1feeeaf939d5\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.760998 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c301aea-eebc-47b8-9b2d-1feeeaf939d5-config-data\") pod \"glance-default-external-api-0\" (UID: \"5c301aea-eebc-47b8-9b2d-1feeeaf939d5\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.762561 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c301aea-eebc-47b8-9b2d-1feeeaf939d5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5c301aea-eebc-47b8-9b2d-1feeeaf939d5\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.766388 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c301aea-eebc-47b8-9b2d-1feeeaf939d5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5c301aea-eebc-47b8-9b2d-1feeeaf939d5\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.773253 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtt65\" (UniqueName: \"kubernetes.io/projected/5c301aea-eebc-47b8-9b2d-1feeeaf939d5-kube-api-access-jtt65\") pod \"glance-default-external-api-0\" (UID: \"5c301aea-eebc-47b8-9b2d-1feeeaf939d5\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.775726 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.795698 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"5c301aea-eebc-47b8-9b2d-1feeeaf939d5\") " pod="openstack/glance-default-external-api-0" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.856790 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bbed581-8e85-4849-82d3-ad007e53a1b0-combined-ca-bundle\") pod \"4bbed581-8e85-4849-82d3-ad007e53a1b0\" (UID: \"4bbed581-8e85-4849-82d3-ad007e53a1b0\") " Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.856901 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"4bbed581-8e85-4849-82d3-ad007e53a1b0\" (UID: \"4bbed581-8e85-4849-82d3-ad007e53a1b0\") " Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.856981 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4bbed581-8e85-4849-82d3-ad007e53a1b0-logs\") pod \"4bbed581-8e85-4849-82d3-ad007e53a1b0\" (UID: \"4bbed581-8e85-4849-82d3-ad007e53a1b0\") " Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.857035 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gkww\" (UniqueName: \"kubernetes.io/projected/4bbed581-8e85-4849-82d3-ad007e53a1b0-kube-api-access-2gkww\") pod \"4bbed581-8e85-4849-82d3-ad007e53a1b0\" (UID: \"4bbed581-8e85-4849-82d3-ad007e53a1b0\") " Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.857289 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bbed581-8e85-4849-82d3-ad007e53a1b0-scripts\") pod \"4bbed581-8e85-4849-82d3-ad007e53a1b0\" (UID: \"4bbed581-8e85-4849-82d3-ad007e53a1b0\") " Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.857366 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bbed581-8e85-4849-82d3-ad007e53a1b0-config-data\") pod \"4bbed581-8e85-4849-82d3-ad007e53a1b0\" (UID: \"4bbed581-8e85-4849-82d3-ad007e53a1b0\") " Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.857399 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4bbed581-8e85-4849-82d3-ad007e53a1b0-httpd-run\") pod \"4bbed581-8e85-4849-82d3-ad007e53a1b0\" (UID: \"4bbed581-8e85-4849-82d3-ad007e53a1b0\") " Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.858271 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bbed581-8e85-4849-82d3-ad007e53a1b0-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4bbed581-8e85-4849-82d3-ad007e53a1b0" (UID: "4bbed581-8e85-4849-82d3-ad007e53a1b0"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.858562 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bbed581-8e85-4849-82d3-ad007e53a1b0-logs" (OuterVolumeSpecName: "logs") pod "4bbed581-8e85-4849-82d3-ad007e53a1b0" (UID: "4bbed581-8e85-4849-82d3-ad007e53a1b0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.859710 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.865935 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "4bbed581-8e85-4849-82d3-ad007e53a1b0" (UID: "4bbed581-8e85-4849-82d3-ad007e53a1b0"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.879909 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bbed581-8e85-4849-82d3-ad007e53a1b0-kube-api-access-2gkww" (OuterVolumeSpecName: "kube-api-access-2gkww") pod "4bbed581-8e85-4849-82d3-ad007e53a1b0" (UID: "4bbed581-8e85-4849-82d3-ad007e53a1b0"). InnerVolumeSpecName "kube-api-access-2gkww". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.880028 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bbed581-8e85-4849-82d3-ad007e53a1b0-scripts" (OuterVolumeSpecName: "scripts") pod "4bbed581-8e85-4849-82d3-ad007e53a1b0" (UID: "4bbed581-8e85-4849-82d3-ad007e53a1b0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.888328 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bbed581-8e85-4849-82d3-ad007e53a1b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4bbed581-8e85-4849-82d3-ad007e53a1b0" (UID: "4bbed581-8e85-4849-82d3-ad007e53a1b0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.918952 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bbed581-8e85-4849-82d3-ad007e53a1b0-config-data" (OuterVolumeSpecName: "config-data") pod "4bbed581-8e85-4849-82d3-ad007e53a1b0" (UID: "4bbed581-8e85-4849-82d3-ad007e53a1b0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.960359 4893 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bbed581-8e85-4849-82d3-ad007e53a1b0-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.960402 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bbed581-8e85-4849-82d3-ad007e53a1b0-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.960416 4893 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4bbed581-8e85-4849-82d3-ad007e53a1b0-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.960428 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bbed581-8e85-4849-82d3-ad007e53a1b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.960487 4893 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.960501 4893 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4bbed581-8e85-4849-82d3-ad007e53a1b0-logs\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.960513 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gkww\" (UniqueName: \"kubernetes.io/projected/4bbed581-8e85-4849-82d3-ad007e53a1b0-kube-api-access-2gkww\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:34 crc kubenswrapper[4893]: I0121 07:18:34.983596 4893 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 21 07:18:35 crc kubenswrapper[4893]: I0121 07:18:35.050445 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c984d74d4-p75q9"] Jan 21 07:18:35 crc kubenswrapper[4893]: I0121 07:18:35.063932 4893 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:35 crc kubenswrapper[4893]: I0121 07:18:35.466848 4893 generic.go:334] "Generic (PLEG): container finished" podID="9a2b7c6d-f80f-4ae0-9628-30dd29e491fe" containerID="42fb31863a351beff5a671105b6d00013cbdcea51ca1264d353f781aeb836f3b" exitCode=0 Jan 21 07:18:35 crc kubenswrapper[4893]: I0121 07:18:35.466950 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-ktqbh" event={"ID":"9a2b7c6d-f80f-4ae0-9628-30dd29e491fe","Type":"ContainerDied","Data":"42fb31863a351beff5a671105b6d00013cbdcea51ca1264d353f781aeb836f3b"} Jan 21 07:18:35 crc kubenswrapper[4893]: I0121 07:18:35.469858 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c984d74d4-p75q9" event={"ID":"836f7dac-b3a4-4a00-bc98-868b0bbe1ebb","Type":"ContainerStarted","Data":"3fcf822f8f294d637824d560799bdf30463e637e4ca6b7124d9907ee98f58e48"} Jan 21 07:18:35 crc kubenswrapper[4893]: I0121 07:18:35.495342 4893 generic.go:334] "Generic (PLEG): container finished" podID="d7e43898-c671-442d-97dd-b93c958c550a" 
containerID="60f688166eda4d1cccb657d68359251422e6de06cde57e1ef44e96fed4628bcf" exitCode=0 Jan 21 07:18:35 crc kubenswrapper[4893]: I0121 07:18:35.495434 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685444497c-vnjsn" event={"ID":"d7e43898-c671-442d-97dd-b93c958c550a","Type":"ContainerDied","Data":"60f688166eda4d1cccb657d68359251422e6de06cde57e1ef44e96fed4628bcf"} Jan 21 07:18:35 crc kubenswrapper[4893]: I0121 07:18:35.495461 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685444497c-vnjsn" event={"ID":"d7e43898-c671-442d-97dd-b93c958c550a","Type":"ContainerStarted","Data":"866d3d375fc849357ddf49df922fc9c97573cbf287f129d11aff777e2c020af1"} Jan 21 07:18:35 crc kubenswrapper[4893]: I0121 07:18:35.499173 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4bbed581-8e85-4849-82d3-ad007e53a1b0","Type":"ContainerDied","Data":"1fb698e59195ab1db02efd3fa80c373c659becf537bcd616d8bade0b1a11e6ed"} Jan 21 07:18:35 crc kubenswrapper[4893]: I0121 07:18:35.499215 4893 scope.go:117] "RemoveContainer" containerID="26ff0aa2f289ea6cf7335eb443663d0785fb0d3634a04da03df4808662de7719" Jan 21 07:18:35 crc kubenswrapper[4893]: I0121 07:18:35.499335 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 07:18:35 crc kubenswrapper[4893]: I0121 07:18:35.532579 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 07:18:35 crc kubenswrapper[4893]: I0121 07:18:35.607853 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e305da0-2d2a-44c3-9844-c2071e281918" path="/var/lib/kubelet/pods/1e305da0-2d2a-44c3-9844-c2071e281918/volumes" Jan 21 07:18:35 crc kubenswrapper[4893]: I0121 07:18:35.636828 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 07:18:35 crc kubenswrapper[4893]: I0121 07:18:35.654248 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 07:18:35 crc kubenswrapper[4893]: I0121 07:18:35.662536 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 07:18:35 crc kubenswrapper[4893]: E0121 07:18:35.662925 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bbed581-8e85-4849-82d3-ad007e53a1b0" containerName="glance-httpd" Jan 21 07:18:35 crc kubenswrapper[4893]: I0121 07:18:35.662942 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bbed581-8e85-4849-82d3-ad007e53a1b0" containerName="glance-httpd" Jan 21 07:18:35 crc kubenswrapper[4893]: E0121 07:18:35.662972 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bbed581-8e85-4849-82d3-ad007e53a1b0" containerName="glance-log" Jan 21 07:18:35 crc kubenswrapper[4893]: I0121 07:18:35.662978 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bbed581-8e85-4849-82d3-ad007e53a1b0" containerName="glance-log" Jan 21 07:18:35 crc kubenswrapper[4893]: I0121 07:18:35.663165 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bbed581-8e85-4849-82d3-ad007e53a1b0" containerName="glance-log" Jan 21 07:18:35 crc kubenswrapper[4893]: I0121 07:18:35.663185 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bbed581-8e85-4849-82d3-ad007e53a1b0" containerName="glance-httpd" Jan 21 07:18:35 crc kubenswrapper[4893]: I0121 07:18:35.664101 4893 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 07:18:35 crc kubenswrapper[4893]: I0121 07:18:35.667198 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 21 07:18:35 crc kubenswrapper[4893]: I0121 07:18:35.667489 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 21 07:18:35 crc kubenswrapper[4893]: I0121 07:18:35.671253 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 07:18:35 crc kubenswrapper[4893]: I0121 07:18:35.863592 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0722c0f6-3b88-4b57-bb19-6e63f97b5392-logs\") pod \"glance-default-internal-api-0\" (UID: \"0722c0f6-3b88-4b57-bb19-6e63f97b5392\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:18:35 crc kubenswrapper[4893]: I0121 07:18:35.863653 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0722c0f6-3b88-4b57-bb19-6e63f97b5392-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0722c0f6-3b88-4b57-bb19-6e63f97b5392\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:18:35 crc kubenswrapper[4893]: I0121 07:18:35.863744 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0722c0f6-3b88-4b57-bb19-6e63f97b5392-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0722c0f6-3b88-4b57-bb19-6e63f97b5392\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:18:35 crc kubenswrapper[4893]: I0121 07:18:35.863790 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"0722c0f6-3b88-4b57-bb19-6e63f97b5392\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:18:35 crc kubenswrapper[4893]: I0121 07:18:35.863829 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0722c0f6-3b88-4b57-bb19-6e63f97b5392-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0722c0f6-3b88-4b57-bb19-6e63f97b5392\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:18:35 crc kubenswrapper[4893]: I0121 07:18:35.863901 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0722c0f6-3b88-4b57-bb19-6e63f97b5392-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0722c0f6-3b88-4b57-bb19-6e63f97b5392\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:18:35 crc kubenswrapper[4893]: I0121 07:18:35.863924 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0722c0f6-3b88-4b57-bb19-6e63f97b5392-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0722c0f6-3b88-4b57-bb19-6e63f97b5392\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:18:35 crc kubenswrapper[4893]: I0121 07:18:35.864024 4893 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwp8p\" (UniqueName: \"kubernetes.io/projected/0722c0f6-3b88-4b57-bb19-6e63f97b5392-kube-api-access-vwp8p\") pod \"glance-default-internal-api-0\" (UID: \"0722c0f6-3b88-4b57-bb19-6e63f97b5392\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:18:35 crc kubenswrapper[4893]: I0121 07:18:35.965957 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0722c0f6-3b88-4b57-bb19-6e63f97b5392-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0722c0f6-3b88-4b57-bb19-6e63f97b5392\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:18:35 crc kubenswrapper[4893]: I0121 07:18:35.966026 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"0722c0f6-3b88-4b57-bb19-6e63f97b5392\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:18:35 crc kubenswrapper[4893]: I0121 07:18:35.966071 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0722c0f6-3b88-4b57-bb19-6e63f97b5392-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0722c0f6-3b88-4b57-bb19-6e63f97b5392\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:18:35 crc kubenswrapper[4893]: I0121 07:18:35.966126 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0722c0f6-3b88-4b57-bb19-6e63f97b5392-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0722c0f6-3b88-4b57-bb19-6e63f97b5392\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:18:35 crc kubenswrapper[4893]: I0121 07:18:35.966148 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0722c0f6-3b88-4b57-bb19-6e63f97b5392-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0722c0f6-3b88-4b57-bb19-6e63f97b5392\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:18:35 crc kubenswrapper[4893]: I0121 07:18:35.966199 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwp8p\" (UniqueName: \"kubernetes.io/projected/0722c0f6-3b88-4b57-bb19-6e63f97b5392-kube-api-access-vwp8p\") pod \"glance-default-internal-api-0\" (UID: \"0722c0f6-3b88-4b57-bb19-6e63f97b5392\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:18:35 crc kubenswrapper[4893]: I0121 07:18:35.966220 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0722c0f6-3b88-4b57-bb19-6e63f97b5392-logs\") pod \"glance-default-internal-api-0\" (UID: \"0722c0f6-3b88-4b57-bb19-6e63f97b5392\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:18:35 crc kubenswrapper[4893]: I0121 07:18:35.966246 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0722c0f6-3b88-4b57-bb19-6e63f97b5392-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0722c0f6-3b88-4b57-bb19-6e63f97b5392\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:18:35 crc kubenswrapper[4893]: I0121 07:18:35.967092 4893 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume 
\"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"0722c0f6-3b88-4b57-bb19-6e63f97b5392\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Jan 21 07:18:35 crc kubenswrapper[4893]: I0121 07:18:35.967729 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0722c0f6-3b88-4b57-bb19-6e63f97b5392-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0722c0f6-3b88-4b57-bb19-6e63f97b5392\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:18:35 crc kubenswrapper[4893]: I0121 07:18:35.967911 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0722c0f6-3b88-4b57-bb19-6e63f97b5392-logs\") pod \"glance-default-internal-api-0\" (UID: \"0722c0f6-3b88-4b57-bb19-6e63f97b5392\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:18:35 crc kubenswrapper[4893]: I0121 07:18:35.975770 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0722c0f6-3b88-4b57-bb19-6e63f97b5392-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0722c0f6-3b88-4b57-bb19-6e63f97b5392\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:18:35 crc kubenswrapper[4893]: I0121 07:18:35.984963 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0722c0f6-3b88-4b57-bb19-6e63f97b5392-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0722c0f6-3b88-4b57-bb19-6e63f97b5392\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:18:35 crc kubenswrapper[4893]: I0121 07:18:35.985434 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0722c0f6-3b88-4b57-bb19-6e63f97b5392-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0722c0f6-3b88-4b57-bb19-6e63f97b5392\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:18:35 crc kubenswrapper[4893]: I0121 07:18:35.985641 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0722c0f6-3b88-4b57-bb19-6e63f97b5392-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0722c0f6-3b88-4b57-bb19-6e63f97b5392\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:18:35 crc kubenswrapper[4893]: I0121 07:18:35.988828 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwp8p\" (UniqueName: \"kubernetes.io/projected/0722c0f6-3b88-4b57-bb19-6e63f97b5392-kube-api-access-vwp8p\") pod \"glance-default-internal-api-0\" (UID: \"0722c0f6-3b88-4b57-bb19-6e63f97b5392\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:18:36 crc kubenswrapper[4893]: I0121 07:18:36.000270 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"0722c0f6-3b88-4b57-bb19-6e63f97b5392\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:18:36 crc kubenswrapper[4893]: I0121 07:18:36.283185 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 07:18:36 crc kubenswrapper[4893]: I0121 07:18:36.510921 4893 generic.go:334] "Generic (PLEG): container finished" podID="13231972-103e-4970-845c-5aba8c59d68f" containerID="fc3bc80e161dc6978eb1141fe2bc45dcd6639ddf7a96d78648df93508e6f8b96" exitCode=0 Jan 21 07:18:36 crc kubenswrapper[4893]: I0121 07:18:36.511000 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vjrdh" event={"ID":"13231972-103e-4970-845c-5aba8c59d68f","Type":"ContainerDied","Data":"fc3bc80e161dc6978eb1141fe2bc45dcd6639ddf7a96d78648df93508e6f8b96"} Jan 21 07:18:36 crc kubenswrapper[4893]: I0121 07:18:36.515030 4893 generic.go:334] "Generic (PLEG): container finished" podID="3ae6eb33-b5b8-4ed9-a227-b96f365a49a3" containerID="0130a18b27166f791112ff30f58560d319f90531ebbeae689609a6794f09b9c7" exitCode=0 Jan 21 07:18:36 crc kubenswrapper[4893]: I0121 07:18:36.515115 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-qffvz" event={"ID":"3ae6eb33-b5b8-4ed9-a227-b96f365a49a3","Type":"ContainerDied","Data":"0130a18b27166f791112ff30f58560d319f90531ebbeae689609a6794f09b9c7"} Jan 21 07:18:36 crc kubenswrapper[4893]: I0121 07:18:36.517044 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c984d74d4-p75q9" event={"ID":"836f7dac-b3a4-4a00-bc98-868b0bbe1ebb","Type":"ContainerStarted","Data":"cf4638a0c792bb76c761ab810cdacf069ec223c31a8f47729343cde7cc604c47"} Jan 21 07:18:37 crc kubenswrapper[4893]: I0121 07:18:37.076308 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-577cb64ffc-m6fkr"] Jan 21 07:18:37 crc kubenswrapper[4893]: I0121 07:18:37.078589 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-577cb64ffc-m6fkr" Jan 21 07:18:37 crc kubenswrapper[4893]: I0121 07:18:37.081346 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 21 07:18:37 crc kubenswrapper[4893]: I0121 07:18:37.085263 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 21 07:18:37 crc kubenswrapper[4893]: I0121 07:18:37.122538 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-577cb64ffc-m6fkr"] Jan 21 07:18:37 crc kubenswrapper[4893]: I0121 07:18:37.193736 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/133bbed0-7073-43ad-881b-893cf8529bb2-combined-ca-bundle\") pod \"neutron-577cb64ffc-m6fkr\" (UID: \"133bbed0-7073-43ad-881b-893cf8529bb2\") " pod="openstack/neutron-577cb64ffc-m6fkr" Jan 21 07:18:37 crc kubenswrapper[4893]: I0121 07:18:37.194175 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/133bbed0-7073-43ad-881b-893cf8529bb2-httpd-config\") pod \"neutron-577cb64ffc-m6fkr\" (UID: \"133bbed0-7073-43ad-881b-893cf8529bb2\") " pod="openstack/neutron-577cb64ffc-m6fkr" Jan 21 07:18:37 crc kubenswrapper[4893]: I0121 07:18:37.194448 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckdzg\" (UniqueName: \"kubernetes.io/projected/133bbed0-7073-43ad-881b-893cf8529bb2-kube-api-access-ckdzg\") pod \"neutron-577cb64ffc-m6fkr\" (UID: \"133bbed0-7073-43ad-881b-893cf8529bb2\") " pod="openstack/neutron-577cb64ffc-m6fkr" 
Jan 21 07:18:37 crc kubenswrapper[4893]: I0121 07:18:37.194495 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/133bbed0-7073-43ad-881b-893cf8529bb2-config\") pod \"neutron-577cb64ffc-m6fkr\" (UID: \"133bbed0-7073-43ad-881b-893cf8529bb2\") " pod="openstack/neutron-577cb64ffc-m6fkr" Jan 21 07:18:37 crc kubenswrapper[4893]: I0121 07:18:37.194659 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/133bbed0-7073-43ad-881b-893cf8529bb2-public-tls-certs\") pod \"neutron-577cb64ffc-m6fkr\" (UID: \"133bbed0-7073-43ad-881b-893cf8529bb2\") " pod="openstack/neutron-577cb64ffc-m6fkr" Jan 21 07:18:37 crc kubenswrapper[4893]: I0121 07:18:37.194783 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/133bbed0-7073-43ad-881b-893cf8529bb2-ovndb-tls-certs\") pod \"neutron-577cb64ffc-m6fkr\" (UID: \"133bbed0-7073-43ad-881b-893cf8529bb2\") " pod="openstack/neutron-577cb64ffc-m6fkr" Jan 21 07:18:37 crc kubenswrapper[4893]: I0121 07:18:37.194889 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/133bbed0-7073-43ad-881b-893cf8529bb2-internal-tls-certs\") pod \"neutron-577cb64ffc-m6fkr\" (UID: \"133bbed0-7073-43ad-881b-893cf8529bb2\") " pod="openstack/neutron-577cb64ffc-m6fkr" Jan 21 07:18:37 crc kubenswrapper[4893]: I0121 07:18:37.296567 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/133bbed0-7073-43ad-881b-893cf8529bb2-public-tls-certs\") pod \"neutron-577cb64ffc-m6fkr\" (UID: \"133bbed0-7073-43ad-881b-893cf8529bb2\") " pod="openstack/neutron-577cb64ffc-m6fkr" Jan 21 07:18:37 crc kubenswrapper[4893]: I0121 07:18:37.296631 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/133bbed0-7073-43ad-881b-893cf8529bb2-ovndb-tls-certs\") pod \"neutron-577cb64ffc-m6fkr\" (UID: \"133bbed0-7073-43ad-881b-893cf8529bb2\") " pod="openstack/neutron-577cb64ffc-m6fkr" Jan 21 07:18:37 crc kubenswrapper[4893]: I0121 07:18:37.296688 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/133bbed0-7073-43ad-881b-893cf8529bb2-internal-tls-certs\") pod \"neutron-577cb64ffc-m6fkr\" (UID: \"133bbed0-7073-43ad-881b-893cf8529bb2\") " pod="openstack/neutron-577cb64ffc-m6fkr" Jan 21 07:18:37 crc kubenswrapper[4893]: I0121 07:18:37.296715 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/133bbed0-7073-43ad-881b-893cf8529bb2-combined-ca-bundle\") pod \"neutron-577cb64ffc-m6fkr\" (UID: \"133bbed0-7073-43ad-881b-893cf8529bb2\") " pod="openstack/neutron-577cb64ffc-m6fkr" Jan 21 07:18:37 crc kubenswrapper[4893]: I0121 07:18:37.296750 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/133bbed0-7073-43ad-881b-893cf8529bb2-httpd-config\") pod \"neutron-577cb64ffc-m6fkr\" (UID: \"133bbed0-7073-43ad-881b-893cf8529bb2\") " pod="openstack/neutron-577cb64ffc-m6fkr" Jan 21 07:18:37 crc kubenswrapper[4893]: I0121 
07:18:37.296809 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckdzg\" (UniqueName: \"kubernetes.io/projected/133bbed0-7073-43ad-881b-893cf8529bb2-kube-api-access-ckdzg\") pod \"neutron-577cb64ffc-m6fkr\" (UID: \"133bbed0-7073-43ad-881b-893cf8529bb2\") " pod="openstack/neutron-577cb64ffc-m6fkr" Jan 21 07:18:37 crc kubenswrapper[4893]: I0121 07:18:37.296832 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/133bbed0-7073-43ad-881b-893cf8529bb2-config\") pod \"neutron-577cb64ffc-m6fkr\" (UID: \"133bbed0-7073-43ad-881b-893cf8529bb2\") " pod="openstack/neutron-577cb64ffc-m6fkr" Jan 21 07:18:37 crc kubenswrapper[4893]: I0121 07:18:37.304882 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/133bbed0-7073-43ad-881b-893cf8529bb2-ovndb-tls-certs\") pod \"neutron-577cb64ffc-m6fkr\" (UID: \"133bbed0-7073-43ad-881b-893cf8529bb2\") " pod="openstack/neutron-577cb64ffc-m6fkr" Jan 21 07:18:37 crc kubenswrapper[4893]: I0121 07:18:37.304988 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/133bbed0-7073-43ad-881b-893cf8529bb2-public-tls-certs\") pod \"neutron-577cb64ffc-m6fkr\" (UID: \"133bbed0-7073-43ad-881b-893cf8529bb2\") " pod="openstack/neutron-577cb64ffc-m6fkr" Jan 21 07:18:37 crc kubenswrapper[4893]: I0121 07:18:37.305592 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/133bbed0-7073-43ad-881b-893cf8529bb2-config\") pod \"neutron-577cb64ffc-m6fkr\" (UID: \"133bbed0-7073-43ad-881b-893cf8529bb2\") " pod="openstack/neutron-577cb64ffc-m6fkr" Jan 21 07:18:37 crc kubenswrapper[4893]: I0121 07:18:37.309109 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/133bbed0-7073-43ad-881b-893cf8529bb2-internal-tls-certs\") pod \"neutron-577cb64ffc-m6fkr\" (UID: \"133bbed0-7073-43ad-881b-893cf8529bb2\") " pod="openstack/neutron-577cb64ffc-m6fkr" Jan 21 07:18:37 crc kubenswrapper[4893]: I0121 07:18:37.309582 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/133bbed0-7073-43ad-881b-893cf8529bb2-httpd-config\") pod \"neutron-577cb64ffc-m6fkr\" (UID: \"133bbed0-7073-43ad-881b-893cf8529bb2\") " pod="openstack/neutron-577cb64ffc-m6fkr" Jan 21 07:18:37 crc kubenswrapper[4893]: I0121 07:18:37.315341 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/133bbed0-7073-43ad-881b-893cf8529bb2-combined-ca-bundle\") pod \"neutron-577cb64ffc-m6fkr\" (UID: \"133bbed0-7073-43ad-881b-893cf8529bb2\") " pod="openstack/neutron-577cb64ffc-m6fkr" Jan 21 07:18:37 crc kubenswrapper[4893]: I0121 07:18:37.327769 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckdzg\" (UniqueName: \"kubernetes.io/projected/133bbed0-7073-43ad-881b-893cf8529bb2-kube-api-access-ckdzg\") pod \"neutron-577cb64ffc-m6fkr\" (UID: \"133bbed0-7073-43ad-881b-893cf8529bb2\") " pod="openstack/neutron-577cb64ffc-m6fkr" Jan 21 07:18:37 crc kubenswrapper[4893]: I0121 07:18:37.568323 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-577cb64ffc-m6fkr" Jan 21 07:18:37 crc kubenswrapper[4893]: I0121 07:18:37.641018 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bbed581-8e85-4849-82d3-ad007e53a1b0" path="/var/lib/kubelet/pods/4bbed581-8e85-4849-82d3-ad007e53a1b0/volumes" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.040529 4893 scope.go:117] "RemoveContainer" containerID="79ed51f998cd71dfab875d8fdd10cee39d10fb41596a8a2930e7c74edc7a67f1" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.148451 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vjrdh" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.154293 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-ktqbh" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.173384 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-qffvz" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.295403 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13231972-103e-4970-845c-5aba8c59d68f-config-data\") pod \"13231972-103e-4970-845c-5aba8c59d68f\" (UID: \"13231972-103e-4970-845c-5aba8c59d68f\") " Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.295707 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ae6eb33-b5b8-4ed9-a227-b96f365a49a3-combined-ca-bundle\") pod \"3ae6eb33-b5b8-4ed9-a227-b96f365a49a3\" (UID: \"3ae6eb33-b5b8-4ed9-a227-b96f365a49a3\") " Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.295727 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13231972-103e-4970-845c-5aba8c59d68f-scripts\") pod \"13231972-103e-4970-845c-5aba8c59d68f\" (UID: \"13231972-103e-4970-845c-5aba8c59d68f\") " Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.295790 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a2b7c6d-f80f-4ae0-9628-30dd29e491fe-config-data\") pod \"9a2b7c6d-f80f-4ae0-9628-30dd29e491fe\" (UID: \"9a2b7c6d-f80f-4ae0-9628-30dd29e491fe\") " Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.295814 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zdfr\" (UniqueName: \"kubernetes.io/projected/3ae6eb33-b5b8-4ed9-a227-b96f365a49a3-kube-api-access-8zdfr\") pod \"3ae6eb33-b5b8-4ed9-a227-b96f365a49a3\" (UID: \"3ae6eb33-b5b8-4ed9-a227-b96f365a49a3\") " Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.295839 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8tkf\" (UniqueName: \"kubernetes.io/projected/9a2b7c6d-f80f-4ae0-9628-30dd29e491fe-kube-api-access-c8tkf\") pod \"9a2b7c6d-f80f-4ae0-9628-30dd29e491fe\" (UID: \"9a2b7c6d-f80f-4ae0-9628-30dd29e491fe\") " Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.295861 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/13231972-103e-4970-845c-5aba8c59d68f-credential-keys\") pod \"13231972-103e-4970-845c-5aba8c59d68f\" (UID: \"13231972-103e-4970-845c-5aba8c59d68f\") " Jan 21 07:18:38 crc 
kubenswrapper[4893]: I0121 07:18:38.295888 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a2b7c6d-f80f-4ae0-9628-30dd29e491fe-combined-ca-bundle\") pod \"9a2b7c6d-f80f-4ae0-9628-30dd29e491fe\" (UID: \"9a2b7c6d-f80f-4ae0-9628-30dd29e491fe\") " Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.295913 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13231972-103e-4970-845c-5aba8c59d68f-combined-ca-bundle\") pod \"13231972-103e-4970-845c-5aba8c59d68f\" (UID: \"13231972-103e-4970-845c-5aba8c59d68f\") " Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.295965 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a2b7c6d-f80f-4ae0-9628-30dd29e491fe-logs\") pod \"9a2b7c6d-f80f-4ae0-9628-30dd29e491fe\" (UID: \"9a2b7c6d-f80f-4ae0-9628-30dd29e491fe\") " Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.295983 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fqgt\" (UniqueName: \"kubernetes.io/projected/13231972-103e-4970-845c-5aba8c59d68f-kube-api-access-7fqgt\") pod \"13231972-103e-4970-845c-5aba8c59d68f\" (UID: \"13231972-103e-4970-845c-5aba8c59d68f\") " Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.296050 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3ae6eb33-b5b8-4ed9-a227-b96f365a49a3-db-sync-config-data\") pod \"3ae6eb33-b5b8-4ed9-a227-b96f365a49a3\" (UID: \"3ae6eb33-b5b8-4ed9-a227-b96f365a49a3\") " Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.296117 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/13231972-103e-4970-845c-5aba8c59d68f-fernet-keys\") pod \"13231972-103e-4970-845c-5aba8c59d68f\" (UID: \"13231972-103e-4970-845c-5aba8c59d68f\") " Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.296157 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a2b7c6d-f80f-4ae0-9628-30dd29e491fe-scripts\") pod \"9a2b7c6d-f80f-4ae0-9628-30dd29e491fe\" (UID: \"9a2b7c6d-f80f-4ae0-9628-30dd29e491fe\") " Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.297079 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a2b7c6d-f80f-4ae0-9628-30dd29e491fe-logs" (OuterVolumeSpecName: "logs") pod "9a2b7c6d-f80f-4ae0-9628-30dd29e491fe" (UID: "9a2b7c6d-f80f-4ae0-9628-30dd29e491fe"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.301594 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13231972-103e-4970-845c-5aba8c59d68f-scripts" (OuterVolumeSpecName: "scripts") pod "13231972-103e-4970-845c-5aba8c59d68f" (UID: "13231972-103e-4970-845c-5aba8c59d68f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.302185 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13231972-103e-4970-845c-5aba8c59d68f-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "13231972-103e-4970-845c-5aba8c59d68f" (UID: "13231972-103e-4970-845c-5aba8c59d68f"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.302298 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a2b7c6d-f80f-4ae0-9628-30dd29e491fe-kube-api-access-c8tkf" (OuterVolumeSpecName: "kube-api-access-c8tkf") pod "9a2b7c6d-f80f-4ae0-9628-30dd29e491fe" (UID: "9a2b7c6d-f80f-4ae0-9628-30dd29e491fe"). InnerVolumeSpecName "kube-api-access-c8tkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.303717 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ae6eb33-b5b8-4ed9-a227-b96f365a49a3-kube-api-access-8zdfr" (OuterVolumeSpecName: "kube-api-access-8zdfr") pod "3ae6eb33-b5b8-4ed9-a227-b96f365a49a3" (UID: "3ae6eb33-b5b8-4ed9-a227-b96f365a49a3"). InnerVolumeSpecName "kube-api-access-8zdfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.307594 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a2b7c6d-f80f-4ae0-9628-30dd29e491fe-scripts" (OuterVolumeSpecName: "scripts") pod "9a2b7c6d-f80f-4ae0-9628-30dd29e491fe" (UID: "9a2b7c6d-f80f-4ae0-9628-30dd29e491fe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.309821 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13231972-103e-4970-845c-5aba8c59d68f-kube-api-access-7fqgt" (OuterVolumeSpecName: "kube-api-access-7fqgt") pod "13231972-103e-4970-845c-5aba8c59d68f" (UID: "13231972-103e-4970-845c-5aba8c59d68f"). InnerVolumeSpecName "kube-api-access-7fqgt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.312162 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13231972-103e-4970-845c-5aba8c59d68f-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "13231972-103e-4970-845c-5aba8c59d68f" (UID: "13231972-103e-4970-845c-5aba8c59d68f"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.324720 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ae6eb33-b5b8-4ed9-a227-b96f365a49a3-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "3ae6eb33-b5b8-4ed9-a227-b96f365a49a3" (UID: "3ae6eb33-b5b8-4ed9-a227-b96f365a49a3"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.328870 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ae6eb33-b5b8-4ed9-a227-b96f365a49a3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ae6eb33-b5b8-4ed9-a227-b96f365a49a3" (UID: "3ae6eb33-b5b8-4ed9-a227-b96f365a49a3"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.333719 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a2b7c6d-f80f-4ae0-9628-30dd29e491fe-config-data" (OuterVolumeSpecName: "config-data") pod "9a2b7c6d-f80f-4ae0-9628-30dd29e491fe" (UID: "9a2b7c6d-f80f-4ae0-9628-30dd29e491fe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.334907 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13231972-103e-4970-845c-5aba8c59d68f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "13231972-103e-4970-845c-5aba8c59d68f" (UID: "13231972-103e-4970-845c-5aba8c59d68f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.338993 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13231972-103e-4970-845c-5aba8c59d68f-config-data" (OuterVolumeSpecName: "config-data") pod "13231972-103e-4970-845c-5aba8c59d68f" (UID: "13231972-103e-4970-845c-5aba8c59d68f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.339761 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a2b7c6d-f80f-4ae0-9628-30dd29e491fe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9a2b7c6d-f80f-4ae0-9628-30dd29e491fe" (UID: "9a2b7c6d-f80f-4ae0-9628-30dd29e491fe"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.398607 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zdfr\" (UniqueName: \"kubernetes.io/projected/3ae6eb33-b5b8-4ed9-a227-b96f365a49a3-kube-api-access-8zdfr\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.398638 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8tkf\" (UniqueName: \"kubernetes.io/projected/9a2b7c6d-f80f-4ae0-9628-30dd29e491fe-kube-api-access-c8tkf\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.398650 4893 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/13231972-103e-4970-845c-5aba8c59d68f-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.398659 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a2b7c6d-f80f-4ae0-9628-30dd29e491fe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.398681 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13231972-103e-4970-845c-5aba8c59d68f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.398690 4893 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a2b7c6d-f80f-4ae0-9628-30dd29e491fe-logs\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.398700 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fqgt\" (UniqueName: \"kubernetes.io/projected/13231972-103e-4970-845c-5aba8c59d68f-kube-api-access-7fqgt\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.398708 4893 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3ae6eb33-b5b8-4ed9-a227-b96f365a49a3-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.398718 4893 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/13231972-103e-4970-845c-5aba8c59d68f-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.398727 4893 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a2b7c6d-f80f-4ae0-9628-30dd29e491fe-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.398734 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13231972-103e-4970-845c-5aba8c59d68f-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.398742 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ae6eb33-b5b8-4ed9-a227-b96f365a49a3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.398749 4893 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13231972-103e-4970-845c-5aba8c59d68f-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:38 crc 
kubenswrapper[4893]: I0121 07:18:38.398758 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a2b7c6d-f80f-4ae0-9628-30dd29e491fe-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.639091 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vjrdh" event={"ID":"13231972-103e-4970-845c-5aba8c59d68f","Type":"ContainerDied","Data":"b9bd19d56f36e52d663dde0413202326b0bff8400b15f42de6928d88b4bc579d"} Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.639134 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9bd19d56f36e52d663dde0413202326b0bff8400b15f42de6928d88b4bc579d" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.639211 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vjrdh" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.643458 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-ktqbh" event={"ID":"9a2b7c6d-f80f-4ae0-9628-30dd29e491fe","Type":"ContainerDied","Data":"4d0f1aea554b54a0a7ae76b8fd8f57cd0f4e5b574c54eaad767d70dfd846a432"} Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.643523 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d0f1aea554b54a0a7ae76b8fd8f57cd0f4e5b574c54eaad767d70dfd846a432" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.643542 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-ktqbh" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.647889 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5c301aea-eebc-47b8-9b2d-1feeeaf939d5","Type":"ContainerStarted","Data":"a85a7a069fa562f294b454b9296ab09d664da5a688536b6cd1874af7563b896d"} Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.655932 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-qffvz" event={"ID":"3ae6eb33-b5b8-4ed9-a227-b96f365a49a3","Type":"ContainerDied","Data":"f1543e243bd8d94b0d30083f09927b91eb4393bc6be06b7ce9545fb073b2ab38"} Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.655982 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1543e243bd8d94b0d30083f09927b91eb4393bc6be06b7ce9545fb073b2ab38" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.656071 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-qffvz" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.677184 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-9fd9c4957-2lblr"] Jan 21 07:18:38 crc kubenswrapper[4893]: E0121 07:18:38.677702 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ae6eb33-b5b8-4ed9-a227-b96f365a49a3" containerName="barbican-db-sync" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.677723 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ae6eb33-b5b8-4ed9-a227-b96f365a49a3" containerName="barbican-db-sync" Jan 21 07:18:38 crc kubenswrapper[4893]: E0121 07:18:38.677749 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a2b7c6d-f80f-4ae0-9628-30dd29e491fe" containerName="placement-db-sync" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.677758 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a2b7c6d-f80f-4ae0-9628-30dd29e491fe" containerName="placement-db-sync" Jan 21 07:18:38 crc kubenswrapper[4893]: E0121 07:18:38.677784 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13231972-103e-4970-845c-5aba8c59d68f" containerName="keystone-bootstrap" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.677792 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="13231972-103e-4970-845c-5aba8c59d68f" containerName="keystone-bootstrap" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.677997 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ae6eb33-b5b8-4ed9-a227-b96f365a49a3" containerName="barbican-db-sync" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.678135 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a2b7c6d-f80f-4ae0-9628-30dd29e491fe" containerName="placement-db-sync" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.678149 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="13231972-103e-4970-845c-5aba8c59d68f" containerName="keystone-bootstrap" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.678952 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-9fd9c4957-2lblr" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.684293 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.684477 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.686126 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.686425 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.686706 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9b7vz" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.687036 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.703093 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-9fd9c4957-2lblr"] Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.821304 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6d5c\" (UniqueName: \"kubernetes.io/projected/1cfa1d66-684f-43de-b751-1da2399d48ee-kube-api-access-n6d5c\") pod \"keystone-9fd9c4957-2lblr\" (UID: \"1cfa1d66-684f-43de-b751-1da2399d48ee\") " pod="openstack/keystone-9fd9c4957-2lblr" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.821389 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cfa1d66-684f-43de-b751-1da2399d48ee-internal-tls-certs\") pod \"keystone-9fd9c4957-2lblr\" (UID: \"1cfa1d66-684f-43de-b751-1da2399d48ee\") " pod="openstack/keystone-9fd9c4957-2lblr" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.821420 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cfa1d66-684f-43de-b751-1da2399d48ee-scripts\") pod \"keystone-9fd9c4957-2lblr\" (UID: \"1cfa1d66-684f-43de-b751-1da2399d48ee\") " pod="openstack/keystone-9fd9c4957-2lblr" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.821454 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1cfa1d66-684f-43de-b751-1da2399d48ee-fernet-keys\") pod \"keystone-9fd9c4957-2lblr\" (UID: \"1cfa1d66-684f-43de-b751-1da2399d48ee\") " pod="openstack/keystone-9fd9c4957-2lblr" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.821473 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cfa1d66-684f-43de-b751-1da2399d48ee-public-tls-certs\") pod \"keystone-9fd9c4957-2lblr\" (UID: \"1cfa1d66-684f-43de-b751-1da2399d48ee\") " pod="openstack/keystone-9fd9c4957-2lblr" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.821495 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1cfa1d66-684f-43de-b751-1da2399d48ee-credential-keys\") pod \"keystone-9fd9c4957-2lblr\" (UID: 
\"1cfa1d66-684f-43de-b751-1da2399d48ee\") " pod="openstack/keystone-9fd9c4957-2lblr" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.821516 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cfa1d66-684f-43de-b751-1da2399d48ee-config-data\") pod \"keystone-9fd9c4957-2lblr\" (UID: \"1cfa1d66-684f-43de-b751-1da2399d48ee\") " pod="openstack/keystone-9fd9c4957-2lblr" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.821549 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfa1d66-684f-43de-b751-1da2399d48ee-combined-ca-bundle\") pod \"keystone-9fd9c4957-2lblr\" (UID: \"1cfa1d66-684f-43de-b751-1da2399d48ee\") " pod="openstack/keystone-9fd9c4957-2lblr" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.895951 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-6bd56d5cbf-gkdlb"] Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.901534 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6bd56d5cbf-gkdlb" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.905928 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.906140 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.906294 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-fbjn2" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.925204 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v72d7\" (UniqueName: \"kubernetes.io/projected/4c20f882-3bde-49a2-857e-207fe47d5aae-kube-api-access-v72d7\") pod \"barbican-worker-6bd56d5cbf-gkdlb\" (UID: \"4c20f882-3bde-49a2-857e-207fe47d5aae\") " pod="openstack/barbican-worker-6bd56d5cbf-gkdlb" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.925277 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c20f882-3bde-49a2-857e-207fe47d5aae-logs\") pod \"barbican-worker-6bd56d5cbf-gkdlb\" (UID: \"4c20f882-3bde-49a2-857e-207fe47d5aae\") " pod="openstack/barbican-worker-6bd56d5cbf-gkdlb" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.925312 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c20f882-3bde-49a2-857e-207fe47d5aae-combined-ca-bundle\") pod \"barbican-worker-6bd56d5cbf-gkdlb\" (UID: \"4c20f882-3bde-49a2-857e-207fe47d5aae\") " pod="openstack/barbican-worker-6bd56d5cbf-gkdlb" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.925361 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6d5c\" (UniqueName: \"kubernetes.io/projected/1cfa1d66-684f-43de-b751-1da2399d48ee-kube-api-access-n6d5c\") pod \"keystone-9fd9c4957-2lblr\" (UID: \"1cfa1d66-684f-43de-b751-1da2399d48ee\") " pod="openstack/keystone-9fd9c4957-2lblr" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.925406 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4c20f882-3bde-49a2-857e-207fe47d5aae-config-data-custom\") pod \"barbican-worker-6bd56d5cbf-gkdlb\" (UID: \"4c20f882-3bde-49a2-857e-207fe47d5aae\") " pod="openstack/barbican-worker-6bd56d5cbf-gkdlb" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.925459 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c20f882-3bde-49a2-857e-207fe47d5aae-config-data\") pod \"barbican-worker-6bd56d5cbf-gkdlb\" (UID: \"4c20f882-3bde-49a2-857e-207fe47d5aae\") " pod="openstack/barbican-worker-6bd56d5cbf-gkdlb" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.926951 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cfa1d66-684f-43de-b751-1da2399d48ee-internal-tls-certs\") pod \"keystone-9fd9c4957-2lblr\" (UID: \"1cfa1d66-684f-43de-b751-1da2399d48ee\") " pod="openstack/keystone-9fd9c4957-2lblr" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.927047 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cfa1d66-684f-43de-b751-1da2399d48ee-scripts\") pod \"keystone-9fd9c4957-2lblr\" (UID: \"1cfa1d66-684f-43de-b751-1da2399d48ee\") " pod="openstack/keystone-9fd9c4957-2lblr" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.927130 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1cfa1d66-684f-43de-b751-1da2399d48ee-fernet-keys\") pod \"keystone-9fd9c4957-2lblr\" (UID: \"1cfa1d66-684f-43de-b751-1da2399d48ee\") " pod="openstack/keystone-9fd9c4957-2lblr" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.927164 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cfa1d66-684f-43de-b751-1da2399d48ee-public-tls-certs\") pod \"keystone-9fd9c4957-2lblr\" (UID: \"1cfa1d66-684f-43de-b751-1da2399d48ee\") " pod="openstack/keystone-9fd9c4957-2lblr" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.927198 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cfa1d66-684f-43de-b751-1da2399d48ee-config-data\") pod \"keystone-9fd9c4957-2lblr\" (UID: \"1cfa1d66-684f-43de-b751-1da2399d48ee\") " pod="openstack/keystone-9fd9c4957-2lblr" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.927219 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1cfa1d66-684f-43de-b751-1da2399d48ee-credential-keys\") pod \"keystone-9fd9c4957-2lblr\" (UID: \"1cfa1d66-684f-43de-b751-1da2399d48ee\") " pod="openstack/keystone-9fd9c4957-2lblr" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.927281 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfa1d66-684f-43de-b751-1da2399d48ee-combined-ca-bundle\") pod \"keystone-9fd9c4957-2lblr\" (UID: \"1cfa1d66-684f-43de-b751-1da2399d48ee\") " pod="openstack/keystone-9fd9c4957-2lblr" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.929941 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6bd56d5cbf-gkdlb"] Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.933850 4893 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1cfa1d66-684f-43de-b751-1da2399d48ee-credential-keys\") pod \"keystone-9fd9c4957-2lblr\" (UID: \"1cfa1d66-684f-43de-b751-1da2399d48ee\") " pod="openstack/keystone-9fd9c4957-2lblr" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.942151 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cfa1d66-684f-43de-b751-1da2399d48ee-scripts\") pod \"keystone-9fd9c4957-2lblr\" (UID: \"1cfa1d66-684f-43de-b751-1da2399d48ee\") " pod="openstack/keystone-9fd9c4957-2lblr" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.951279 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1cfa1d66-684f-43de-b751-1da2399d48ee-fernet-keys\") pod \"keystone-9fd9c4957-2lblr\" (UID: \"1cfa1d66-684f-43de-b751-1da2399d48ee\") " pod="openstack/keystone-9fd9c4957-2lblr" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.952492 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cfa1d66-684f-43de-b751-1da2399d48ee-config-data\") pod \"keystone-9fd9c4957-2lblr\" (UID: \"1cfa1d66-684f-43de-b751-1da2399d48ee\") " pod="openstack/keystone-9fd9c4957-2lblr" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.955355 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cfa1d66-684f-43de-b751-1da2399d48ee-internal-tls-certs\") pod \"keystone-9fd9c4957-2lblr\" (UID: \"1cfa1d66-684f-43de-b751-1da2399d48ee\") " pod="openstack/keystone-9fd9c4957-2lblr" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.967091 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6d5c\" (UniqueName: \"kubernetes.io/projected/1cfa1d66-684f-43de-b751-1da2399d48ee-kube-api-access-n6d5c\") pod \"keystone-9fd9c4957-2lblr\" (UID: \"1cfa1d66-684f-43de-b751-1da2399d48ee\") " pod="openstack/keystone-9fd9c4957-2lblr" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.969291 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cfa1d66-684f-43de-b751-1da2399d48ee-public-tls-certs\") pod \"keystone-9fd9c4957-2lblr\" (UID: \"1cfa1d66-684f-43de-b751-1da2399d48ee\") " pod="openstack/keystone-9fd9c4957-2lblr" Jan 21 07:18:38 crc kubenswrapper[4893]: I0121 07:18:38.969988 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfa1d66-684f-43de-b751-1da2399d48ee-combined-ca-bundle\") pod \"keystone-9fd9c4957-2lblr\" (UID: \"1cfa1d66-684f-43de-b751-1da2399d48ee\") " pod="openstack/keystone-9fd9c4957-2lblr" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.004921 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-84df8fdfdb-8dxsk"] Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.011442 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-84df8fdfdb-8dxsk" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.018644 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.029974 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-84df8fdfdb-8dxsk"] Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.031141 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v72d7\" (UniqueName: \"kubernetes.io/projected/4c20f882-3bde-49a2-857e-207fe47d5aae-kube-api-access-v72d7\") pod \"barbican-worker-6bd56d5cbf-gkdlb\" (UID: \"4c20f882-3bde-49a2-857e-207fe47d5aae\") " pod="openstack/barbican-worker-6bd56d5cbf-gkdlb" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.031199 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/96d1b606-34ad-4e36-ad61-6db3b4a7c3e1-config-data-custom\") pod \"barbican-keystone-listener-84df8fdfdb-8dxsk\" (UID: \"96d1b606-34ad-4e36-ad61-6db3b4a7c3e1\") " pod="openstack/barbican-keystone-listener-84df8fdfdb-8dxsk" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.031236 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c20f882-3bde-49a2-857e-207fe47d5aae-logs\") pod \"barbican-worker-6bd56d5cbf-gkdlb\" (UID: \"4c20f882-3bde-49a2-857e-207fe47d5aae\") " pod="openstack/barbican-worker-6bd56d5cbf-gkdlb" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.031264 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c20f882-3bde-49a2-857e-207fe47d5aae-combined-ca-bundle\") pod \"barbican-worker-6bd56d5cbf-gkdlb\" (UID: \"4c20f882-3bde-49a2-857e-207fe47d5aae\") " pod="openstack/barbican-worker-6bd56d5cbf-gkdlb" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.031298 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96d1b606-34ad-4e36-ad61-6db3b4a7c3e1-combined-ca-bundle\") pod \"barbican-keystone-listener-84df8fdfdb-8dxsk\" (UID: \"96d1b606-34ad-4e36-ad61-6db3b4a7c3e1\") " pod="openstack/barbican-keystone-listener-84df8fdfdb-8dxsk" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.031347 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4c20f882-3bde-49a2-857e-207fe47d5aae-config-data-custom\") pod \"barbican-worker-6bd56d5cbf-gkdlb\" (UID: \"4c20f882-3bde-49a2-857e-207fe47d5aae\") " pod="openstack/barbican-worker-6bd56d5cbf-gkdlb" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.031400 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c20f882-3bde-49a2-857e-207fe47d5aae-config-data\") pod \"barbican-worker-6bd56d5cbf-gkdlb\" (UID: \"4c20f882-3bde-49a2-857e-207fe47d5aae\") " pod="openstack/barbican-worker-6bd56d5cbf-gkdlb" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.031448 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/96d1b606-34ad-4e36-ad61-6db3b4a7c3e1-config-data\") pod \"barbican-keystone-listener-84df8fdfdb-8dxsk\" (UID: \"96d1b606-34ad-4e36-ad61-6db3b4a7c3e1\") " pod="openstack/barbican-keystone-listener-84df8fdfdb-8dxsk" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.031549 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96d1b606-34ad-4e36-ad61-6db3b4a7c3e1-logs\") pod \"barbican-keystone-listener-84df8fdfdb-8dxsk\" (UID: \"96d1b606-34ad-4e36-ad61-6db3b4a7c3e1\") " pod="openstack/barbican-keystone-listener-84df8fdfdb-8dxsk" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.031574 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xw5r\" (UniqueName: \"kubernetes.io/projected/96d1b606-34ad-4e36-ad61-6db3b4a7c3e1-kube-api-access-5xw5r\") pod \"barbican-keystone-listener-84df8fdfdb-8dxsk\" (UID: \"96d1b606-34ad-4e36-ad61-6db3b4a7c3e1\") " pod="openstack/barbican-keystone-listener-84df8fdfdb-8dxsk" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.033904 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c20f882-3bde-49a2-857e-207fe47d5aae-logs\") pod \"barbican-worker-6bd56d5cbf-gkdlb\" (UID: \"4c20f882-3bde-49a2-857e-207fe47d5aae\") " pod="openstack/barbican-worker-6bd56d5cbf-gkdlb" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.037807 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-9fd9c4957-2lblr" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.044309 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4c20f882-3bde-49a2-857e-207fe47d5aae-config-data-custom\") pod \"barbican-worker-6bd56d5cbf-gkdlb\" (UID: \"4c20f882-3bde-49a2-857e-207fe47d5aae\") " pod="openstack/barbican-worker-6bd56d5cbf-gkdlb" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.056545 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c20f882-3bde-49a2-857e-207fe47d5aae-combined-ca-bundle\") pod \"barbican-worker-6bd56d5cbf-gkdlb\" (UID: \"4c20f882-3bde-49a2-857e-207fe47d5aae\") " pod="openstack/barbican-worker-6bd56d5cbf-gkdlb" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.056626 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-685444497c-vnjsn"] Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.074931 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c20f882-3bde-49a2-857e-207fe47d5aae-config-data\") pod \"barbican-worker-6bd56d5cbf-gkdlb\" (UID: \"4c20f882-3bde-49a2-857e-207fe47d5aae\") " pod="openstack/barbican-worker-6bd56d5cbf-gkdlb" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.086656 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v72d7\" (UniqueName: \"kubernetes.io/projected/4c20f882-3bde-49a2-857e-207fe47d5aae-kube-api-access-v72d7\") pod \"barbican-worker-6bd56d5cbf-gkdlb\" (UID: \"4c20f882-3bde-49a2-857e-207fe47d5aae\") " pod="openstack/barbican-worker-6bd56d5cbf-gkdlb" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.117544 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-66cdd4b5b5-ts5nd"] 
Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.119070 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66cdd4b5b5-ts5nd" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.133016 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66cdd4b5b5-ts5nd"] Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.134943 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96d1b606-34ad-4e36-ad61-6db3b4a7c3e1-logs\") pod \"barbican-keystone-listener-84df8fdfdb-8dxsk\" (UID: \"96d1b606-34ad-4e36-ad61-6db3b4a7c3e1\") " pod="openstack/barbican-keystone-listener-84df8fdfdb-8dxsk" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.134992 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xw5r\" (UniqueName: \"kubernetes.io/projected/96d1b606-34ad-4e36-ad61-6db3b4a7c3e1-kube-api-access-5xw5r\") pod \"barbican-keystone-listener-84df8fdfdb-8dxsk\" (UID: \"96d1b606-34ad-4e36-ad61-6db3b4a7c3e1\") " pod="openstack/barbican-keystone-listener-84df8fdfdb-8dxsk" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.135041 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/96d1b606-34ad-4e36-ad61-6db3b4a7c3e1-config-data-custom\") pod \"barbican-keystone-listener-84df8fdfdb-8dxsk\" (UID: \"96d1b606-34ad-4e36-ad61-6db3b4a7c3e1\") " pod="openstack/barbican-keystone-listener-84df8fdfdb-8dxsk" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.135078 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96d1b606-34ad-4e36-ad61-6db3b4a7c3e1-combined-ca-bundle\") pod \"barbican-keystone-listener-84df8fdfdb-8dxsk\" (UID: \"96d1b606-34ad-4e36-ad61-6db3b4a7c3e1\") " pod="openstack/barbican-keystone-listener-84df8fdfdb-8dxsk" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.135141 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96d1b606-34ad-4e36-ad61-6db3b4a7c3e1-config-data\") pod \"barbican-keystone-listener-84df8fdfdb-8dxsk\" (UID: \"96d1b606-34ad-4e36-ad61-6db3b4a7c3e1\") " pod="openstack/barbican-keystone-listener-84df8fdfdb-8dxsk" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.135532 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96d1b606-34ad-4e36-ad61-6db3b4a7c3e1-logs\") pod \"barbican-keystone-listener-84df8fdfdb-8dxsk\" (UID: \"96d1b606-34ad-4e36-ad61-6db3b4a7c3e1\") " pod="openstack/barbican-keystone-listener-84df8fdfdb-8dxsk" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.144415 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96d1b606-34ad-4e36-ad61-6db3b4a7c3e1-config-data\") pod \"barbican-keystone-listener-84df8fdfdb-8dxsk\" (UID: \"96d1b606-34ad-4e36-ad61-6db3b4a7c3e1\") " pod="openstack/barbican-keystone-listener-84df8fdfdb-8dxsk" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.150310 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/96d1b606-34ad-4e36-ad61-6db3b4a7c3e1-config-data-custom\") pod \"barbican-keystone-listener-84df8fdfdb-8dxsk\" (UID: 
\"96d1b606-34ad-4e36-ad61-6db3b4a7c3e1\") " pod="openstack/barbican-keystone-listener-84df8fdfdb-8dxsk" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.164394 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6975c6c74b-n77t4"] Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.166649 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6975c6c74b-n77t4" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.169919 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96d1b606-34ad-4e36-ad61-6db3b4a7c3e1-combined-ca-bundle\") pod \"barbican-keystone-listener-84df8fdfdb-8dxsk\" (UID: \"96d1b606-34ad-4e36-ad61-6db3b4a7c3e1\") " pod="openstack/barbican-keystone-listener-84df8fdfdb-8dxsk" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.179984 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.191525 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xw5r\" (UniqueName: \"kubernetes.io/projected/96d1b606-34ad-4e36-ad61-6db3b4a7c3e1-kube-api-access-5xw5r\") pod \"barbican-keystone-listener-84df8fdfdb-8dxsk\" (UID: \"96d1b606-34ad-4e36-ad61-6db3b4a7c3e1\") " pod="openstack/barbican-keystone-listener-84df8fdfdb-8dxsk" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.210734 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6975c6c74b-n77t4"] Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.228965 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6bd56d5cbf-gkdlb" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.237144 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf869ba7-c70c-4a29-aab0-800fe73624c9-ovsdbserver-sb\") pod \"dnsmasq-dns-66cdd4b5b5-ts5nd\" (UID: \"cf869ba7-c70c-4a29-aab0-800fe73624c9\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-ts5nd" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.237206 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf869ba7-c70c-4a29-aab0-800fe73624c9-dns-svc\") pod \"dnsmasq-dns-66cdd4b5b5-ts5nd\" (UID: \"cf869ba7-c70c-4a29-aab0-800fe73624c9\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-ts5nd" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.237293 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf869ba7-c70c-4a29-aab0-800fe73624c9-config\") pod \"dnsmasq-dns-66cdd4b5b5-ts5nd\" (UID: \"cf869ba7-c70c-4a29-aab0-800fe73624c9\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-ts5nd" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.237379 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cf869ba7-c70c-4a29-aab0-800fe73624c9-dns-swift-storage-0\") pod \"dnsmasq-dns-66cdd4b5b5-ts5nd\" (UID: \"cf869ba7-c70c-4a29-aab0-800fe73624c9\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-ts5nd" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.237483 4893 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzs8b\" (UniqueName: \"kubernetes.io/projected/cf869ba7-c70c-4a29-aab0-800fe73624c9-kube-api-access-zzs8b\") pod \"dnsmasq-dns-66cdd4b5b5-ts5nd\" (UID: \"cf869ba7-c70c-4a29-aab0-800fe73624c9\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-ts5nd" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.237515 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf869ba7-c70c-4a29-aab0-800fe73624c9-ovsdbserver-nb\") pod \"dnsmasq-dns-66cdd4b5b5-ts5nd\" (UID: \"cf869ba7-c70c-4a29-aab0-800fe73624c9\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-ts5nd" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.335664 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-54745b6874-xnbrr"] Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.337052 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-54745b6874-xnbrr" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.338825 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/096c770d-b6c0-4e2d-85fd-a06335c5778d-config-data-custom\") pod \"barbican-api-6975c6c74b-n77t4\" (UID: \"096c770d-b6c0-4e2d-85fd-a06335c5778d\") " pod="openstack/barbican-api-6975c6c74b-n77t4" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.338968 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/096c770d-b6c0-4e2d-85fd-a06335c5778d-logs\") pod \"barbican-api-6975c6c74b-n77t4\" (UID: \"096c770d-b6c0-4e2d-85fd-a06335c5778d\") " pod="openstack/barbican-api-6975c6c74b-n77t4" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.339083 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/096c770d-b6c0-4e2d-85fd-a06335c5778d-config-data\") pod \"barbican-api-6975c6c74b-n77t4\" (UID: \"096c770d-b6c0-4e2d-85fd-a06335c5778d\") " pod="openstack/barbican-api-6975c6c74b-n77t4" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.339170 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/096c770d-b6c0-4e2d-85fd-a06335c5778d-combined-ca-bundle\") pod \"barbican-api-6975c6c74b-n77t4\" (UID: \"096c770d-b6c0-4e2d-85fd-a06335c5778d\") " pod="openstack/barbican-api-6975c6c74b-n77t4" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.339285 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzs8b\" (UniqueName: \"kubernetes.io/projected/cf869ba7-c70c-4a29-aab0-800fe73624c9-kube-api-access-zzs8b\") pod \"dnsmasq-dns-66cdd4b5b5-ts5nd\" (UID: \"cf869ba7-c70c-4a29-aab0-800fe73624c9\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-ts5nd" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.339358 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf869ba7-c70c-4a29-aab0-800fe73624c9-ovsdbserver-nb\") pod \"dnsmasq-dns-66cdd4b5b5-ts5nd\" (UID: \"cf869ba7-c70c-4a29-aab0-800fe73624c9\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-ts5nd" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.339430 
4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf869ba7-c70c-4a29-aab0-800fe73624c9-ovsdbserver-sb\") pod \"dnsmasq-dns-66cdd4b5b5-ts5nd\" (UID: \"cf869ba7-c70c-4a29-aab0-800fe73624c9\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-ts5nd" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.339506 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf869ba7-c70c-4a29-aab0-800fe73624c9-dns-svc\") pod \"dnsmasq-dns-66cdd4b5b5-ts5nd\" (UID: \"cf869ba7-c70c-4a29-aab0-800fe73624c9\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-ts5nd" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.339708 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf869ba7-c70c-4a29-aab0-800fe73624c9-config\") pod \"dnsmasq-dns-66cdd4b5b5-ts5nd\" (UID: \"cf869ba7-c70c-4a29-aab0-800fe73624c9\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-ts5nd" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.339852 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86zhf\" (UniqueName: \"kubernetes.io/projected/096c770d-b6c0-4e2d-85fd-a06335c5778d-kube-api-access-86zhf\") pod \"barbican-api-6975c6c74b-n77t4\" (UID: \"096c770d-b6c0-4e2d-85fd-a06335c5778d\") " pod="openstack/barbican-api-6975c6c74b-n77t4" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.339986 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cf869ba7-c70c-4a29-aab0-800fe73624c9-dns-swift-storage-0\") pod \"dnsmasq-dns-66cdd4b5b5-ts5nd\" (UID: \"cf869ba7-c70c-4a29-aab0-800fe73624c9\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-ts5nd" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.341354 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cf869ba7-c70c-4a29-aab0-800fe73624c9-dns-swift-storage-0\") pod \"dnsmasq-dns-66cdd4b5b5-ts5nd\" (UID: \"cf869ba7-c70c-4a29-aab0-800fe73624c9\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-ts5nd" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.341365 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf869ba7-c70c-4a29-aab0-800fe73624c9-ovsdbserver-nb\") pod \"dnsmasq-dns-66cdd4b5b5-ts5nd\" (UID: \"cf869ba7-c70c-4a29-aab0-800fe73624c9\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-ts5nd" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.343037 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf869ba7-c70c-4a29-aab0-800fe73624c9-ovsdbserver-sb\") pod \"dnsmasq-dns-66cdd4b5b5-ts5nd\" (UID: \"cf869ba7-c70c-4a29-aab0-800fe73624c9\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-ts5nd" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.343263 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf869ba7-c70c-4a29-aab0-800fe73624c9-config\") pod \"dnsmasq-dns-66cdd4b5b5-ts5nd\" (UID: \"cf869ba7-c70c-4a29-aab0-800fe73624c9\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-ts5nd" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.346706 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf869ba7-c70c-4a29-aab0-800fe73624c9-dns-svc\") pod \"dnsmasq-dns-66cdd4b5b5-ts5nd\" (UID: \"cf869ba7-c70c-4a29-aab0-800fe73624c9\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-ts5nd" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.347119 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.347599 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.347800 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.347940 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-zfbzz" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.355092 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.374827 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-54745b6874-xnbrr"] Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.377386 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzs8b\" (UniqueName: \"kubernetes.io/projected/cf869ba7-c70c-4a29-aab0-800fe73624c9-kube-api-access-zzs8b\") pod \"dnsmasq-dns-66cdd4b5b5-ts5nd\" (UID: \"cf869ba7-c70c-4a29-aab0-800fe73624c9\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-ts5nd" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.442270 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzcgb\" (UniqueName: \"kubernetes.io/projected/d547505a-34d0-4645-9690-74df58728a46-kube-api-access-qzcgb\") pod \"placement-54745b6874-xnbrr\" (UID: \"d547505a-34d0-4645-9690-74df58728a46\") " pod="openstack/placement-54745b6874-xnbrr" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.442319 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d547505a-34d0-4645-9690-74df58728a46-combined-ca-bundle\") pod \"placement-54745b6874-xnbrr\" (UID: \"d547505a-34d0-4645-9690-74df58728a46\") " pod="openstack/placement-54745b6874-xnbrr" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.442336 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d547505a-34d0-4645-9690-74df58728a46-config-data\") pod \"placement-54745b6874-xnbrr\" (UID: \"d547505a-34d0-4645-9690-74df58728a46\") " pod="openstack/placement-54745b6874-xnbrr" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.442391 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d547505a-34d0-4645-9690-74df58728a46-logs\") pod \"placement-54745b6874-xnbrr\" (UID: \"d547505a-34d0-4645-9690-74df58728a46\") " pod="openstack/placement-54745b6874-xnbrr" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.442428 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86zhf\" (UniqueName: \"kubernetes.io/projected/096c770d-b6c0-4e2d-85fd-a06335c5778d-kube-api-access-86zhf\") pod \"barbican-api-6975c6c74b-n77t4\" 
(UID: \"096c770d-b6c0-4e2d-85fd-a06335c5778d\") " pod="openstack/barbican-api-6975c6c74b-n77t4" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.442518 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/096c770d-b6c0-4e2d-85fd-a06335c5778d-config-data-custom\") pod \"barbican-api-6975c6c74b-n77t4\" (UID: \"096c770d-b6c0-4e2d-85fd-a06335c5778d\") " pod="openstack/barbican-api-6975c6c74b-n77t4" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.442558 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/096c770d-b6c0-4e2d-85fd-a06335c5778d-logs\") pod \"barbican-api-6975c6c74b-n77t4\" (UID: \"096c770d-b6c0-4e2d-85fd-a06335c5778d\") " pod="openstack/barbican-api-6975c6c74b-n77t4" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.442825 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d547505a-34d0-4645-9690-74df58728a46-public-tls-certs\") pod \"placement-54745b6874-xnbrr\" (UID: \"d547505a-34d0-4645-9690-74df58728a46\") " pod="openstack/placement-54745b6874-xnbrr" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.442852 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d547505a-34d0-4645-9690-74df58728a46-scripts\") pod \"placement-54745b6874-xnbrr\" (UID: \"d547505a-34d0-4645-9690-74df58728a46\") " pod="openstack/placement-54745b6874-xnbrr" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.442892 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d547505a-34d0-4645-9690-74df58728a46-internal-tls-certs\") pod \"placement-54745b6874-xnbrr\" (UID: \"d547505a-34d0-4645-9690-74df58728a46\") " pod="openstack/placement-54745b6874-xnbrr" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.442922 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/096c770d-b6c0-4e2d-85fd-a06335c5778d-config-data\") pod \"barbican-api-6975c6c74b-n77t4\" (UID: \"096c770d-b6c0-4e2d-85fd-a06335c5778d\") " pod="openstack/barbican-api-6975c6c74b-n77t4" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.442946 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/096c770d-b6c0-4e2d-85fd-a06335c5778d-combined-ca-bundle\") pod \"barbican-api-6975c6c74b-n77t4\" (UID: \"096c770d-b6c0-4e2d-85fd-a06335c5778d\") " pod="openstack/barbican-api-6975c6c74b-n77t4" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.447488 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/096c770d-b6c0-4e2d-85fd-a06335c5778d-logs\") pod \"barbican-api-6975c6c74b-n77t4\" (UID: \"096c770d-b6c0-4e2d-85fd-a06335c5778d\") " pod="openstack/barbican-api-6975c6c74b-n77t4" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.452235 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/096c770d-b6c0-4e2d-85fd-a06335c5778d-config-data-custom\") pod \"barbican-api-6975c6c74b-n77t4\" (UID: \"096c770d-b6c0-4e2d-85fd-a06335c5778d\") " 
pod="openstack/barbican-api-6975c6c74b-n77t4" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.452278 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/096c770d-b6c0-4e2d-85fd-a06335c5778d-combined-ca-bundle\") pod \"barbican-api-6975c6c74b-n77t4\" (UID: \"096c770d-b6c0-4e2d-85fd-a06335c5778d\") " pod="openstack/barbican-api-6975c6c74b-n77t4" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.454864 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/096c770d-b6c0-4e2d-85fd-a06335c5778d-config-data\") pod \"barbican-api-6975c6c74b-n77t4\" (UID: \"096c770d-b6c0-4e2d-85fd-a06335c5778d\") " pod="openstack/barbican-api-6975c6c74b-n77t4" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.462825 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86zhf\" (UniqueName: \"kubernetes.io/projected/096c770d-b6c0-4e2d-85fd-a06335c5778d-kube-api-access-86zhf\") pod \"barbican-api-6975c6c74b-n77t4\" (UID: \"096c770d-b6c0-4e2d-85fd-a06335c5778d\") " pod="openstack/barbican-api-6975c6c74b-n77t4" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.462962 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-84df8fdfdb-8dxsk" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.510264 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66cdd4b5b5-ts5nd" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.521968 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6975c6c74b-n77t4" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.545067 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d547505a-34d0-4645-9690-74df58728a46-internal-tls-certs\") pod \"placement-54745b6874-xnbrr\" (UID: \"d547505a-34d0-4645-9690-74df58728a46\") " pod="openstack/placement-54745b6874-xnbrr" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.545187 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzcgb\" (UniqueName: \"kubernetes.io/projected/d547505a-34d0-4645-9690-74df58728a46-kube-api-access-qzcgb\") pod \"placement-54745b6874-xnbrr\" (UID: \"d547505a-34d0-4645-9690-74df58728a46\") " pod="openstack/placement-54745b6874-xnbrr" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.545221 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d547505a-34d0-4645-9690-74df58728a46-combined-ca-bundle\") pod \"placement-54745b6874-xnbrr\" (UID: \"d547505a-34d0-4645-9690-74df58728a46\") " pod="openstack/placement-54745b6874-xnbrr" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.545246 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d547505a-34d0-4645-9690-74df58728a46-config-data\") pod \"placement-54745b6874-xnbrr\" (UID: \"d547505a-34d0-4645-9690-74df58728a46\") " pod="openstack/placement-54745b6874-xnbrr" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.545319 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/d547505a-34d0-4645-9690-74df58728a46-logs\") pod \"placement-54745b6874-xnbrr\" (UID: \"d547505a-34d0-4645-9690-74df58728a46\") " pod="openstack/placement-54745b6874-xnbrr" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.545460 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d547505a-34d0-4645-9690-74df58728a46-public-tls-certs\") pod \"placement-54745b6874-xnbrr\" (UID: \"d547505a-34d0-4645-9690-74df58728a46\") " pod="openstack/placement-54745b6874-xnbrr" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.545488 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d547505a-34d0-4645-9690-74df58728a46-scripts\") pod \"placement-54745b6874-xnbrr\" (UID: \"d547505a-34d0-4645-9690-74df58728a46\") " pod="openstack/placement-54745b6874-xnbrr" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.546530 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d547505a-34d0-4645-9690-74df58728a46-logs\") pod \"placement-54745b6874-xnbrr\" (UID: \"d547505a-34d0-4645-9690-74df58728a46\") " pod="openstack/placement-54745b6874-xnbrr" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.548727 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d547505a-34d0-4645-9690-74df58728a46-scripts\") pod \"placement-54745b6874-xnbrr\" (UID: \"d547505a-34d0-4645-9690-74df58728a46\") " pod="openstack/placement-54745b6874-xnbrr" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.549101 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d547505a-34d0-4645-9690-74df58728a46-internal-tls-certs\") pod \"placement-54745b6874-xnbrr\" (UID: \"d547505a-34d0-4645-9690-74df58728a46\") " pod="openstack/placement-54745b6874-xnbrr" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.549433 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d547505a-34d0-4645-9690-74df58728a46-public-tls-certs\") pod \"placement-54745b6874-xnbrr\" (UID: \"d547505a-34d0-4645-9690-74df58728a46\") " pod="openstack/placement-54745b6874-xnbrr" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.550849 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d547505a-34d0-4645-9690-74df58728a46-combined-ca-bundle\") pod \"placement-54745b6874-xnbrr\" (UID: \"d547505a-34d0-4645-9690-74df58728a46\") " pod="openstack/placement-54745b6874-xnbrr" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.565536 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d547505a-34d0-4645-9690-74df58728a46-config-data\") pod \"placement-54745b6874-xnbrr\" (UID: \"d547505a-34d0-4645-9690-74df58728a46\") " pod="openstack/placement-54745b6874-xnbrr" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.569779 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzcgb\" (UniqueName: \"kubernetes.io/projected/d547505a-34d0-4645-9690-74df58728a46-kube-api-access-qzcgb\") pod \"placement-54745b6874-xnbrr\" (UID: \"d547505a-34d0-4645-9690-74df58728a46\") " 
pod="openstack/placement-54745b6874-xnbrr" Jan 21 07:18:39 crc kubenswrapper[4893]: I0121 07:18:39.675880 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-54745b6874-xnbrr" Jan 21 07:18:42 crc kubenswrapper[4893]: I0121 07:18:42.129410 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7fc4c6bb88-6pfmp"] Jan 21 07:18:42 crc kubenswrapper[4893]: I0121 07:18:42.131610 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7fc4c6bb88-6pfmp" Jan 21 07:18:42 crc kubenswrapper[4893]: I0121 07:18:42.135291 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 21 07:18:42 crc kubenswrapper[4893]: I0121 07:18:42.135861 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 21 07:18:42 crc kubenswrapper[4893]: I0121 07:18:42.153041 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7fc4c6bb88-6pfmp"] Jan 21 07:18:42 crc kubenswrapper[4893]: I0121 07:18:42.302546 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b445f12-f3bf-41d9-91f9-56def2b2694b-combined-ca-bundle\") pod \"barbican-api-7fc4c6bb88-6pfmp\" (UID: \"4b445f12-f3bf-41d9-91f9-56def2b2694b\") " pod="openstack/barbican-api-7fc4c6bb88-6pfmp" Jan 21 07:18:42 crc kubenswrapper[4893]: I0121 07:18:42.302850 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gddtf\" (UniqueName: \"kubernetes.io/projected/4b445f12-f3bf-41d9-91f9-56def2b2694b-kube-api-access-gddtf\") pod \"barbican-api-7fc4c6bb88-6pfmp\" (UID: \"4b445f12-f3bf-41d9-91f9-56def2b2694b\") " pod="openstack/barbican-api-7fc4c6bb88-6pfmp" Jan 21 07:18:42 crc kubenswrapper[4893]: I0121 07:18:42.302943 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b445f12-f3bf-41d9-91f9-56def2b2694b-config-data\") pod \"barbican-api-7fc4c6bb88-6pfmp\" (UID: \"4b445f12-f3bf-41d9-91f9-56def2b2694b\") " pod="openstack/barbican-api-7fc4c6bb88-6pfmp" Jan 21 07:18:42 crc kubenswrapper[4893]: I0121 07:18:42.303166 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4b445f12-f3bf-41d9-91f9-56def2b2694b-config-data-custom\") pod \"barbican-api-7fc4c6bb88-6pfmp\" (UID: \"4b445f12-f3bf-41d9-91f9-56def2b2694b\") " pod="openstack/barbican-api-7fc4c6bb88-6pfmp" Jan 21 07:18:42 crc kubenswrapper[4893]: I0121 07:18:42.303306 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b445f12-f3bf-41d9-91f9-56def2b2694b-internal-tls-certs\") pod \"barbican-api-7fc4c6bb88-6pfmp\" (UID: \"4b445f12-f3bf-41d9-91f9-56def2b2694b\") " pod="openstack/barbican-api-7fc4c6bb88-6pfmp" Jan 21 07:18:42 crc kubenswrapper[4893]: I0121 07:18:42.303386 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b445f12-f3bf-41d9-91f9-56def2b2694b-logs\") pod \"barbican-api-7fc4c6bb88-6pfmp\" (UID: \"4b445f12-f3bf-41d9-91f9-56def2b2694b\") " pod="openstack/barbican-api-7fc4c6bb88-6pfmp" Jan 21 07:18:42 
crc kubenswrapper[4893]: I0121 07:18:42.303471 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b445f12-f3bf-41d9-91f9-56def2b2694b-public-tls-certs\") pod \"barbican-api-7fc4c6bb88-6pfmp\" (UID: \"4b445f12-f3bf-41d9-91f9-56def2b2694b\") " pod="openstack/barbican-api-7fc4c6bb88-6pfmp" Jan 21 07:18:42 crc kubenswrapper[4893]: I0121 07:18:42.404613 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b445f12-f3bf-41d9-91f9-56def2b2694b-internal-tls-certs\") pod \"barbican-api-7fc4c6bb88-6pfmp\" (UID: \"4b445f12-f3bf-41d9-91f9-56def2b2694b\") " pod="openstack/barbican-api-7fc4c6bb88-6pfmp" Jan 21 07:18:42 crc kubenswrapper[4893]: I0121 07:18:42.404714 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b445f12-f3bf-41d9-91f9-56def2b2694b-logs\") pod \"barbican-api-7fc4c6bb88-6pfmp\" (UID: \"4b445f12-f3bf-41d9-91f9-56def2b2694b\") " pod="openstack/barbican-api-7fc4c6bb88-6pfmp" Jan 21 07:18:42 crc kubenswrapper[4893]: I0121 07:18:42.404766 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b445f12-f3bf-41d9-91f9-56def2b2694b-public-tls-certs\") pod \"barbican-api-7fc4c6bb88-6pfmp\" (UID: \"4b445f12-f3bf-41d9-91f9-56def2b2694b\") " pod="openstack/barbican-api-7fc4c6bb88-6pfmp" Jan 21 07:18:42 crc kubenswrapper[4893]: I0121 07:18:42.404845 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b445f12-f3bf-41d9-91f9-56def2b2694b-combined-ca-bundle\") pod \"barbican-api-7fc4c6bb88-6pfmp\" (UID: \"4b445f12-f3bf-41d9-91f9-56def2b2694b\") " pod="openstack/barbican-api-7fc4c6bb88-6pfmp" Jan 21 07:18:42 crc kubenswrapper[4893]: I0121 07:18:42.404911 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gddtf\" (UniqueName: \"kubernetes.io/projected/4b445f12-f3bf-41d9-91f9-56def2b2694b-kube-api-access-gddtf\") pod \"barbican-api-7fc4c6bb88-6pfmp\" (UID: \"4b445f12-f3bf-41d9-91f9-56def2b2694b\") " pod="openstack/barbican-api-7fc4c6bb88-6pfmp" Jan 21 07:18:42 crc kubenswrapper[4893]: I0121 07:18:42.404946 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b445f12-f3bf-41d9-91f9-56def2b2694b-config-data\") pod \"barbican-api-7fc4c6bb88-6pfmp\" (UID: \"4b445f12-f3bf-41d9-91f9-56def2b2694b\") " pod="openstack/barbican-api-7fc4c6bb88-6pfmp" Jan 21 07:18:42 crc kubenswrapper[4893]: I0121 07:18:42.404997 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4b445f12-f3bf-41d9-91f9-56def2b2694b-config-data-custom\") pod \"barbican-api-7fc4c6bb88-6pfmp\" (UID: \"4b445f12-f3bf-41d9-91f9-56def2b2694b\") " pod="openstack/barbican-api-7fc4c6bb88-6pfmp" Jan 21 07:18:42 crc kubenswrapper[4893]: I0121 07:18:42.405292 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b445f12-f3bf-41d9-91f9-56def2b2694b-logs\") pod \"barbican-api-7fc4c6bb88-6pfmp\" (UID: \"4b445f12-f3bf-41d9-91f9-56def2b2694b\") " pod="openstack/barbican-api-7fc4c6bb88-6pfmp" Jan 21 07:18:42 crc kubenswrapper[4893]: I0121 
07:18:42.411098 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b445f12-f3bf-41d9-91f9-56def2b2694b-internal-tls-certs\") pod \"barbican-api-7fc4c6bb88-6pfmp\" (UID: \"4b445f12-f3bf-41d9-91f9-56def2b2694b\") " pod="openstack/barbican-api-7fc4c6bb88-6pfmp" Jan 21 07:18:42 crc kubenswrapper[4893]: I0121 07:18:42.411463 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b445f12-f3bf-41d9-91f9-56def2b2694b-combined-ca-bundle\") pod \"barbican-api-7fc4c6bb88-6pfmp\" (UID: \"4b445f12-f3bf-41d9-91f9-56def2b2694b\") " pod="openstack/barbican-api-7fc4c6bb88-6pfmp" Jan 21 07:18:42 crc kubenswrapper[4893]: I0121 07:18:42.411582 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b445f12-f3bf-41d9-91f9-56def2b2694b-config-data\") pod \"barbican-api-7fc4c6bb88-6pfmp\" (UID: \"4b445f12-f3bf-41d9-91f9-56def2b2694b\") " pod="openstack/barbican-api-7fc4c6bb88-6pfmp" Jan 21 07:18:42 crc kubenswrapper[4893]: I0121 07:18:42.414500 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4b445f12-f3bf-41d9-91f9-56def2b2694b-config-data-custom\") pod \"barbican-api-7fc4c6bb88-6pfmp\" (UID: \"4b445f12-f3bf-41d9-91f9-56def2b2694b\") " pod="openstack/barbican-api-7fc4c6bb88-6pfmp" Jan 21 07:18:42 crc kubenswrapper[4893]: I0121 07:18:42.415437 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b445f12-f3bf-41d9-91f9-56def2b2694b-public-tls-certs\") pod \"barbican-api-7fc4c6bb88-6pfmp\" (UID: \"4b445f12-f3bf-41d9-91f9-56def2b2694b\") " pod="openstack/barbican-api-7fc4c6bb88-6pfmp" Jan 21 07:18:42 crc kubenswrapper[4893]: I0121 07:18:42.423402 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gddtf\" (UniqueName: \"kubernetes.io/projected/4b445f12-f3bf-41d9-91f9-56def2b2694b-kube-api-access-gddtf\") pod \"barbican-api-7fc4c6bb88-6pfmp\" (UID: \"4b445f12-f3bf-41d9-91f9-56def2b2694b\") " pod="openstack/barbican-api-7fc4c6bb88-6pfmp" Jan 21 07:18:42 crc kubenswrapper[4893]: I0121 07:18:42.454261 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7fc4c6bb88-6pfmp" Jan 21 07:18:43 crc kubenswrapper[4893]: I0121 07:18:43.610080 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 07:18:43 crc kubenswrapper[4893]: W0121 07:18:43.624912 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0722c0f6_3b88_4b57_bb19_6e63f97b5392.slice/crio-6d5dd010e442bfc1b68a5c0591b8a00bad8e597e3479c615a70c04557f4b9128 WatchSource:0}: Error finding container 6d5dd010e442bfc1b68a5c0591b8a00bad8e597e3479c615a70c04557f4b9128: Status 404 returned error can't find the container with id 6d5dd010e442bfc1b68a5c0591b8a00bad8e597e3479c615a70c04557f4b9128 Jan 21 07:18:43 crc kubenswrapper[4893]: I0121 07:18:43.755465 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"12e11571-a021-4df2-a0da-69f56335a8c8","Type":"ContainerStarted","Data":"38f2063ef2f3f9534be2a583dd44bdf6e6c92f12136106182a4695d712b1d793"} Jan 21 07:18:43 crc kubenswrapper[4893]: I0121 07:18:43.779258 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c984d74d4-p75q9" event={"ID":"836f7dac-b3a4-4a00-bc98-868b0bbe1ebb","Type":"ContainerStarted","Data":"de3e8b69f6661e1df6488573ac4edf5bc69674e8e6836442a99e608fbacbdd18"} Jan 21 07:18:43 crc kubenswrapper[4893]: I0121 07:18:43.779482 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-c984d74d4-p75q9" Jan 21 07:18:43 crc kubenswrapper[4893]: I0121 07:18:43.787622 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0722c0f6-3b88-4b57-bb19-6e63f97b5392","Type":"ContainerStarted","Data":"6d5dd010e442bfc1b68a5c0591b8a00bad8e597e3479c615a70c04557f4b9128"} Jan 21 07:18:43 crc kubenswrapper[4893]: I0121 07:18:43.806956 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685444497c-vnjsn" event={"ID":"d7e43898-c671-442d-97dd-b93c958c550a","Type":"ContainerStarted","Data":"38c42abbce1014081a8dc5529864400d15af55a1a974c937c6bed58207f6722d"} Jan 21 07:18:43 crc kubenswrapper[4893]: I0121 07:18:43.807194 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-685444497c-vnjsn" podUID="d7e43898-c671-442d-97dd-b93c958c550a" containerName="dnsmasq-dns" containerID="cri-o://38c42abbce1014081a8dc5529864400d15af55a1a974c937c6bed58207f6722d" gracePeriod=10 Jan 21 07:18:43 crc kubenswrapper[4893]: I0121 07:18:43.807524 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-685444497c-vnjsn" Jan 21 07:18:43 crc kubenswrapper[4893]: I0121 07:18:43.819432 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-c984d74d4-p75q9" podStartSLOduration=10.819409002 podStartE2EDuration="10.819409002s" podCreationTimestamp="2026-01-21 07:18:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:18:43.802983995 +0000 UTC m=+1465.033329897" watchObservedRunningTime="2026-01-21 07:18:43.819409002 +0000 UTC m=+1465.049754904" Jan 21 07:18:43 crc kubenswrapper[4893]: I0121 07:18:43.840491 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-685444497c-vnjsn" podStartSLOduration=10.840448423 podStartE2EDuration="10.840448423s" 
podCreationTimestamp="2026-01-21 07:18:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:18:43.836507518 +0000 UTC m=+1465.066853430" watchObservedRunningTime="2026-01-21 07:18:43.840448423 +0000 UTC m=+1465.070794345" Jan 21 07:18:43 crc kubenswrapper[4893]: I0121 07:18:43.880133 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6975c6c74b-n77t4"] Jan 21 07:18:43 crc kubenswrapper[4893]: I0121 07:18:43.906168 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7fc4c6bb88-6pfmp"] Jan 21 07:18:43 crc kubenswrapper[4893]: I0121 07:18:43.916940 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-54745b6874-xnbrr"] Jan 21 07:18:43 crc kubenswrapper[4893]: I0121 07:18:43.939330 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-9fd9c4957-2lblr"] Jan 21 07:18:43 crc kubenswrapper[4893]: W0121 07:18:43.966281 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b445f12_f3bf_41d9_91f9_56def2b2694b.slice/crio-b437088a8c6a7a0a63a89a293630939664c63c2de3c2fe1a4391b85beb796b1c WatchSource:0}: Error finding container b437088a8c6a7a0a63a89a293630939664c63c2de3c2fe1a4391b85beb796b1c: Status 404 returned error can't find the container with id b437088a8c6a7a0a63a89a293630939664c63c2de3c2fe1a4391b85beb796b1c Jan 21 07:18:43 crc kubenswrapper[4893]: I0121 07:18:43.971284 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6bd56d5cbf-gkdlb"] Jan 21 07:18:43 crc kubenswrapper[4893]: I0121 07:18:43.985735 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-84df8fdfdb-8dxsk"] Jan 21 07:18:43 crc kubenswrapper[4893]: I0121 07:18:43.995826 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66cdd4b5b5-ts5nd"] Jan 21 07:18:43 crc kubenswrapper[4893]: W0121 07:18:43.999924 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf869ba7_c70c_4a29_aab0_800fe73624c9.slice/crio-4b01a595b9381f13603cffcb6a96238d47c0f3022b4588b1b4e01d37c31e5d5d WatchSource:0}: Error finding container 4b01a595b9381f13603cffcb6a96238d47c0f3022b4588b1b4e01d37c31e5d5d: Status 404 returned error can't find the container with id 4b01a595b9381f13603cffcb6a96238d47c0f3022b4588b1b4e01d37c31e5d5d Jan 21 07:18:44 crc kubenswrapper[4893]: I0121 07:18:44.005091 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-577cb64ffc-m6fkr"] Jan 21 07:18:44 crc kubenswrapper[4893]: W0121 07:18:44.039894 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod133bbed0_7073_43ad_881b_893cf8529bb2.slice/crio-814ccd67a132cd7486827aa51371d7f8e51e92e723bbe7051d74157c6669b4a8 WatchSource:0}: Error finding container 814ccd67a132cd7486827aa51371d7f8e51e92e723bbe7051d74157c6669b4a8: Status 404 returned error can't find the container with id 814ccd67a132cd7486827aa51371d7f8e51e92e723bbe7051d74157c6669b4a8 Jan 21 07:18:44 crc kubenswrapper[4893]: I0121 07:18:44.823430 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6975c6c74b-n77t4" 
event={"ID":"096c770d-b6c0-4e2d-85fd-a06335c5778d","Type":"ContainerStarted","Data":"11226c4822e300cb161d9dcb669f9317e41257ca9066c5fdceac4c47886332be"} Jan 21 07:18:44 crc kubenswrapper[4893]: I0121 07:18:44.825150 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-54745b6874-xnbrr" event={"ID":"d547505a-34d0-4645-9690-74df58728a46","Type":"ContainerStarted","Data":"a1846137ad41c4b0f9789c5b2681ad4dab98b9bdee0b11772723bba9628f3821"} Jan 21 07:18:44 crc kubenswrapper[4893]: I0121 07:18:44.827490 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7fc4c6bb88-6pfmp" event={"ID":"4b445f12-f3bf-41d9-91f9-56def2b2694b","Type":"ContainerStarted","Data":"af2cbd2416ff8e2a96ecf8094812868e567e247c82f334bac61e2985c9c7061b"} Jan 21 07:18:44 crc kubenswrapper[4893]: I0121 07:18:44.827560 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7fc4c6bb88-6pfmp" event={"ID":"4b445f12-f3bf-41d9-91f9-56def2b2694b","Type":"ContainerStarted","Data":"b437088a8c6a7a0a63a89a293630939664c63c2de3c2fe1a4391b85beb796b1c"} Jan 21 07:18:44 crc kubenswrapper[4893]: I0121 07:18:44.829844 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5c301aea-eebc-47b8-9b2d-1feeeaf939d5","Type":"ContainerStarted","Data":"1c87cf0b9edbf49255836fb9a7097f437e002d4ccb33505e57a761bc1b4b2e74"} Jan 21 07:18:44 crc kubenswrapper[4893]: I0121 07:18:44.831622 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-9fd9c4957-2lblr" event={"ID":"1cfa1d66-684f-43de-b751-1da2399d48ee","Type":"ContainerStarted","Data":"24ddeef2326940ce72c79871561487f16be27f8317b41bd533f41fba741bbc5b"} Jan 21 07:18:44 crc kubenswrapper[4893]: I0121 07:18:44.833887 4893 generic.go:334] "Generic (PLEG): container finished" podID="d7e43898-c671-442d-97dd-b93c958c550a" containerID="38c42abbce1014081a8dc5529864400d15af55a1a974c937c6bed58207f6722d" exitCode=0 Jan 21 07:18:44 crc kubenswrapper[4893]: I0121 07:18:44.833922 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685444497c-vnjsn" event={"ID":"d7e43898-c671-442d-97dd-b93c958c550a","Type":"ContainerDied","Data":"38c42abbce1014081a8dc5529864400d15af55a1a974c937c6bed58207f6722d"} Jan 21 07:18:44 crc kubenswrapper[4893]: I0121 07:18:44.833961 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685444497c-vnjsn" event={"ID":"d7e43898-c671-442d-97dd-b93c958c550a","Type":"ContainerDied","Data":"866d3d375fc849357ddf49df922fc9c97573cbf287f129d11aff777e2c020af1"} Jan 21 07:18:44 crc kubenswrapper[4893]: I0121 07:18:44.834000 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="866d3d375fc849357ddf49df922fc9c97573cbf287f129d11aff777e2c020af1" Jan 21 07:18:44 crc kubenswrapper[4893]: I0121 07:18:44.835370 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6bd56d5cbf-gkdlb" event={"ID":"4c20f882-3bde-49a2-857e-207fe47d5aae","Type":"ContainerStarted","Data":"a6c319ab4ff855d9c777138b1fbf10d585bf21ca01fc7489aaf2806ec4fae1c3"} Jan 21 07:18:44 crc kubenswrapper[4893]: I0121 07:18:44.837528 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-577cb64ffc-m6fkr" event={"ID":"133bbed0-7073-43ad-881b-893cf8529bb2","Type":"ContainerStarted","Data":"814ccd67a132cd7486827aa51371d7f8e51e92e723bbe7051d74157c6669b4a8"} Jan 21 07:18:44 crc kubenswrapper[4893]: I0121 07:18:44.838922 4893 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/barbican-keystone-listener-84df8fdfdb-8dxsk" event={"ID":"96d1b606-34ad-4e36-ad61-6db3b4a7c3e1","Type":"ContainerStarted","Data":"0f26655ad28569c69d734395f473a8d64f482102fd227b20cad19f7e37359bfc"} Jan 21 07:18:44 crc kubenswrapper[4893]: I0121 07:18:44.841916 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66cdd4b5b5-ts5nd" event={"ID":"cf869ba7-c70c-4a29-aab0-800fe73624c9","Type":"ContainerStarted","Data":"4b01a595b9381f13603cffcb6a96238d47c0f3022b4588b1b4e01d37c31e5d5d"} Jan 21 07:18:44 crc kubenswrapper[4893]: I0121 07:18:44.968912 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-685444497c-vnjsn" Jan 21 07:18:45 crc kubenswrapper[4893]: I0121 07:18:45.055180 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7e43898-c671-442d-97dd-b93c958c550a-dns-svc\") pod \"d7e43898-c671-442d-97dd-b93c958c550a\" (UID: \"d7e43898-c671-442d-97dd-b93c958c550a\") " Jan 21 07:18:45 crc kubenswrapper[4893]: I0121 07:18:45.055228 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7e43898-c671-442d-97dd-b93c958c550a-ovsdbserver-sb\") pod \"d7e43898-c671-442d-97dd-b93c958c550a\" (UID: \"d7e43898-c671-442d-97dd-b93c958c550a\") " Jan 21 07:18:45 crc kubenswrapper[4893]: I0121 07:18:45.055314 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7e43898-c671-442d-97dd-b93c958c550a-ovsdbserver-nb\") pod \"d7e43898-c671-442d-97dd-b93c958c550a\" (UID: \"d7e43898-c671-442d-97dd-b93c958c550a\") " Jan 21 07:18:45 crc kubenswrapper[4893]: I0121 07:18:45.055354 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d7e43898-c671-442d-97dd-b93c958c550a-dns-swift-storage-0\") pod \"d7e43898-c671-442d-97dd-b93c958c550a\" (UID: \"d7e43898-c671-442d-97dd-b93c958c550a\") " Jan 21 07:18:45 crc kubenswrapper[4893]: I0121 07:18:45.055380 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5bfr\" (UniqueName: \"kubernetes.io/projected/d7e43898-c671-442d-97dd-b93c958c550a-kube-api-access-g5bfr\") pod \"d7e43898-c671-442d-97dd-b93c958c550a\" (UID: \"d7e43898-c671-442d-97dd-b93c958c550a\") " Jan 21 07:18:45 crc kubenswrapper[4893]: I0121 07:18:45.055402 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7e43898-c671-442d-97dd-b93c958c550a-config\") pod \"d7e43898-c671-442d-97dd-b93c958c550a\" (UID: \"d7e43898-c671-442d-97dd-b93c958c550a\") " Jan 21 07:18:45 crc kubenswrapper[4893]: I0121 07:18:45.085438 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e43898-c671-442d-97dd-b93c958c550a-kube-api-access-g5bfr" (OuterVolumeSpecName: "kube-api-access-g5bfr") pod "d7e43898-c671-442d-97dd-b93c958c550a" (UID: "d7e43898-c671-442d-97dd-b93c958c550a"). InnerVolumeSpecName "kube-api-access-g5bfr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:18:45 crc kubenswrapper[4893]: I0121 07:18:45.157829 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5bfr\" (UniqueName: \"kubernetes.io/projected/d7e43898-c671-442d-97dd-b93c958c550a-kube-api-access-g5bfr\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:45 crc kubenswrapper[4893]: I0121 07:18:45.263767 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e43898-c671-442d-97dd-b93c958c550a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d7e43898-c671-442d-97dd-b93c958c550a" (UID: "d7e43898-c671-442d-97dd-b93c958c550a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:18:45 crc kubenswrapper[4893]: I0121 07:18:45.267496 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e43898-c671-442d-97dd-b93c958c550a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d7e43898-c671-442d-97dd-b93c958c550a" (UID: "d7e43898-c671-442d-97dd-b93c958c550a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:18:45 crc kubenswrapper[4893]: I0121 07:18:45.267772 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e43898-c671-442d-97dd-b93c958c550a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d7e43898-c671-442d-97dd-b93c958c550a" (UID: "d7e43898-c671-442d-97dd-b93c958c550a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:18:45 crc kubenswrapper[4893]: I0121 07:18:45.274540 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e43898-c671-442d-97dd-b93c958c550a-config" (OuterVolumeSpecName: "config") pod "d7e43898-c671-442d-97dd-b93c958c550a" (UID: "d7e43898-c671-442d-97dd-b93c958c550a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:18:45 crc kubenswrapper[4893]: I0121 07:18:45.279916 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e43898-c671-442d-97dd-b93c958c550a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d7e43898-c671-442d-97dd-b93c958c550a" (UID: "d7e43898-c671-442d-97dd-b93c958c550a"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:18:45 crc kubenswrapper[4893]: I0121 07:18:45.361958 4893 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7e43898-c671-442d-97dd-b93c958c550a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:45 crc kubenswrapper[4893]: I0121 07:18:45.361992 4893 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7e43898-c671-442d-97dd-b93c958c550a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:45 crc kubenswrapper[4893]: I0121 07:18:45.362002 4893 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d7e43898-c671-442d-97dd-b93c958c550a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:45 crc kubenswrapper[4893]: I0121 07:18:45.362016 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7e43898-c671-442d-97dd-b93c958c550a-config\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:45 crc kubenswrapper[4893]: I0121 07:18:45.362025 4893 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7e43898-c671-442d-97dd-b93c958c550a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:46 crc kubenswrapper[4893]: I0121 07:18:46.024038 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-577cb64ffc-m6fkr" event={"ID":"133bbed0-7073-43ad-881b-893cf8529bb2","Type":"ContainerStarted","Data":"2c2e4963838a51923436692bec77d73dc438b926f3c5c0edc268cb6c72480f66"} Jan 21 07:18:46 crc kubenswrapper[4893]: I0121 07:18:46.051430 4893 generic.go:334] "Generic (PLEG): container finished" podID="cf869ba7-c70c-4a29-aab0-800fe73624c9" containerID="2805931982a7f0a77e8a5e341c34d717781eb2644188defe39e4549832429f4f" exitCode=0 Jan 21 07:18:46 crc kubenswrapper[4893]: I0121 07:18:46.051542 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66cdd4b5b5-ts5nd" event={"ID":"cf869ba7-c70c-4a29-aab0-800fe73624c9","Type":"ContainerDied","Data":"2805931982a7f0a77e8a5e341c34d717781eb2644188defe39e4549832429f4f"} Jan 21 07:18:46 crc kubenswrapper[4893]: I0121 07:18:46.059823 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0722c0f6-3b88-4b57-bb19-6e63f97b5392","Type":"ContainerStarted","Data":"974c16498e433a77cf1fb9b03c4995a17ba87810adf570b8dd48cbb56941192c"} Jan 21 07:18:46 crc kubenswrapper[4893]: I0121 07:18:46.063429 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-9fd9c4957-2lblr" event={"ID":"1cfa1d66-684f-43de-b751-1da2399d48ee","Type":"ContainerStarted","Data":"fb79006d33020516a4f0e2561b74cb58a9f9a5735dfedb4b98b82f935997165d"} Jan 21 07:18:46 crc kubenswrapper[4893]: I0121 07:18:46.066398 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-9fd9c4957-2lblr" Jan 21 07:18:46 crc kubenswrapper[4893]: I0121 07:18:46.083179 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6975c6c74b-n77t4" event={"ID":"096c770d-b6c0-4e2d-85fd-a06335c5778d","Type":"ContainerStarted","Data":"4427a1db0d8bc328cc8ce5e801b5da395783cf9fe7637b09d71c5d4865aa6b5f"} Jan 21 07:18:46 crc kubenswrapper[4893]: I0121 07:18:46.091220 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-54745b6874-xnbrr" 
event={"ID":"d547505a-34d0-4645-9690-74df58728a46","Type":"ContainerStarted","Data":"b6818068f64d6e4a2339a2f96e8276b4f2df53df750ab887db8f0527fe791e58"} Jan 21 07:18:46 crc kubenswrapper[4893]: I0121 07:18:46.094005 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7fc4c6bb88-6pfmp" event={"ID":"4b445f12-f3bf-41d9-91f9-56def2b2694b","Type":"ContainerStarted","Data":"7f7d2aeb9b4cbaf2e08372f0fc88c8fdf81814a1c30309f7310a68b860cbf2b7"} Jan 21 07:18:46 crc kubenswrapper[4893]: I0121 07:18:46.094979 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7fc4c6bb88-6pfmp" Jan 21 07:18:46 crc kubenswrapper[4893]: I0121 07:18:46.095003 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7fc4c6bb88-6pfmp" Jan 21 07:18:46 crc kubenswrapper[4893]: I0121 07:18:46.098737 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-685444497c-vnjsn" Jan 21 07:18:46 crc kubenswrapper[4893]: I0121 07:18:46.098789 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5c301aea-eebc-47b8-9b2d-1feeeaf939d5","Type":"ContainerStarted","Data":"919e654e4d670ba6dd7cf0ef844b18f58a16ea771df0a60b1a7a9136cabb10a9"} Jan 21 07:18:46 crc kubenswrapper[4893]: I0121 07:18:46.103816 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-9fd9c4957-2lblr" podStartSLOduration=8.103776454 podStartE2EDuration="8.103776454s" podCreationTimestamp="2026-01-21 07:18:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:18:46.103053253 +0000 UTC m=+1467.333399165" watchObservedRunningTime="2026-01-21 07:18:46.103776454 +0000 UTC m=+1467.334122346" Jan 21 07:18:46 crc kubenswrapper[4893]: I0121 07:18:46.138181 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=12.138158753 podStartE2EDuration="12.138158753s" podCreationTimestamp="2026-01-21 07:18:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:18:46.134164617 +0000 UTC m=+1467.364510519" watchObservedRunningTime="2026-01-21 07:18:46.138158753 +0000 UTC m=+1467.368504655" Jan 21 07:18:46 crc kubenswrapper[4893]: I0121 07:18:46.164332 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7fc4c6bb88-6pfmp" podStartSLOduration=4.164310832 podStartE2EDuration="4.164310832s" podCreationTimestamp="2026-01-21 07:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:18:46.159894814 +0000 UTC m=+1467.390240726" watchObservedRunningTime="2026-01-21 07:18:46.164310832 +0000 UTC m=+1467.394656734" Jan 21 07:18:46 crc kubenswrapper[4893]: I0121 07:18:46.183354 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-685444497c-vnjsn"] Jan 21 07:18:46 crc kubenswrapper[4893]: I0121 07:18:46.189979 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-685444497c-vnjsn"] Jan 21 07:18:47 crc kubenswrapper[4893]: I0121 07:18:47.116146 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-54745b6874-xnbrr" 
event={"ID":"d547505a-34d0-4645-9690-74df58728a46","Type":"ContainerStarted","Data":"e229a469d2a8cf4280c6427a369a9b0a149d127bff46b75596626d17591050a6"} Jan 21 07:18:47 crc kubenswrapper[4893]: I0121 07:18:47.116523 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-54745b6874-xnbrr" Jan 21 07:18:47 crc kubenswrapper[4893]: I0121 07:18:47.116535 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-54745b6874-xnbrr" Jan 21 07:18:47 crc kubenswrapper[4893]: I0121 07:18:47.121488 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-577cb64ffc-m6fkr" event={"ID":"133bbed0-7073-43ad-881b-893cf8529bb2","Type":"ContainerStarted","Data":"07919e653d69657ea7b011e6891aec998b0e961f74741efc99381bb2776ca73d"} Jan 21 07:18:47 crc kubenswrapper[4893]: I0121 07:18:47.121632 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-577cb64ffc-m6fkr" Jan 21 07:18:47 crc kubenswrapper[4893]: I0121 07:18:47.124381 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-84df8fdfdb-8dxsk" event={"ID":"96d1b606-34ad-4e36-ad61-6db3b4a7c3e1","Type":"ContainerStarted","Data":"911526d6926efcd3de4bef0ee4d5862491c677b9a9b4639aa131893753ece29e"} Jan 21 07:18:47 crc kubenswrapper[4893]: I0121 07:18:47.127711 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66cdd4b5b5-ts5nd" event={"ID":"cf869ba7-c70c-4a29-aab0-800fe73624c9","Type":"ContainerStarted","Data":"6f20152ffc8fe500e29f904ba1db15e1e416bc362ac3ded96c47ec9ea8d1ded3"} Jan 21 07:18:47 crc kubenswrapper[4893]: I0121 07:18:47.127860 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-66cdd4b5b5-ts5nd" Jan 21 07:18:47 crc kubenswrapper[4893]: I0121 07:18:47.245780 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-54745b6874-xnbrr" podStartSLOduration=8.24574583 podStartE2EDuration="8.24574583s" podCreationTimestamp="2026-01-21 07:18:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:18:47.243922817 +0000 UTC m=+1468.474268719" watchObservedRunningTime="2026-01-21 07:18:47.24574583 +0000 UTC m=+1468.476091732" Jan 21 07:18:47 crc kubenswrapper[4893]: I0121 07:18:47.280762 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-66cdd4b5b5-ts5nd" podStartSLOduration=8.280732816 podStartE2EDuration="8.280732816s" podCreationTimestamp="2026-01-21 07:18:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:18:47.277831262 +0000 UTC m=+1468.508177164" watchObservedRunningTime="2026-01-21 07:18:47.280732816 +0000 UTC m=+1468.511078718" Jan 21 07:18:47 crc kubenswrapper[4893]: I0121 07:18:47.295038 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6bd56d5cbf-gkdlb" event={"ID":"4c20f882-3bde-49a2-857e-207fe47d5aae","Type":"ContainerStarted","Data":"c88d130cc82c49bf6ae1c611cdbaa9e2ce62ffa7e1d23413d4010afe63beedd5"} Jan 21 07:18:47 crc kubenswrapper[4893]: I0121 07:18:47.315124 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-577cb64ffc-m6fkr" podStartSLOduration=10.315101334 podStartE2EDuration="10.315101334s" podCreationTimestamp="2026-01-21 07:18:37 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:18:47.310708977 +0000 UTC m=+1468.541054879" watchObservedRunningTime="2026-01-21 07:18:47.315101334 +0000 UTC m=+1468.545447236" Jan 21 07:18:47 crc kubenswrapper[4893]: I0121 07:18:47.316555 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6975c6c74b-n77t4" event={"ID":"096c770d-b6c0-4e2d-85fd-a06335c5778d","Type":"ContainerStarted","Data":"90b487ab46ba83554fb2449c1a655d2268eec3beb3c58bd332660be453a2aedc"} Jan 21 07:18:47 crc kubenswrapper[4893]: I0121 07:18:47.317272 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6975c6c74b-n77t4" Jan 21 07:18:47 crc kubenswrapper[4893]: I0121 07:18:47.317331 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6975c6c74b-n77t4" Jan 21 07:18:47 crc kubenswrapper[4893]: I0121 07:18:47.339753 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6975c6c74b-n77t4" podStartSLOduration=8.339731729 podStartE2EDuration="8.339731729s" podCreationTimestamp="2026-01-21 07:18:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:18:47.335601249 +0000 UTC m=+1468.565947151" watchObservedRunningTime="2026-01-21 07:18:47.339731729 +0000 UTC m=+1468.570077631" Jan 21 07:18:47 crc kubenswrapper[4893]: I0121 07:18:47.791203 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e43898-c671-442d-97dd-b93c958c550a" path="/var/lib/kubelet/pods/d7e43898-c671-442d-97dd-b93c958c550a/volumes" Jan 21 07:18:48 crc kubenswrapper[4893]: I0121 07:18:48.330546 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0722c0f6-3b88-4b57-bb19-6e63f97b5392","Type":"ContainerStarted","Data":"a365c989b5d72dff15391ba4d4bc3984ecfd4fd14f666a8c5eb497795f13ddc7"} Jan 21 07:18:48 crc kubenswrapper[4893]: I0121 07:18:48.336738 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6bd56d5cbf-gkdlb" event={"ID":"4c20f882-3bde-49a2-857e-207fe47d5aae","Type":"ContainerStarted","Data":"87c38972a6e91adfc22b0f243c62624ce591c7a3c511e5aad78412c1db488300"} Jan 21 07:18:48 crc kubenswrapper[4893]: I0121 07:18:48.339940 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-wrnx6" event={"ID":"5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619","Type":"ContainerStarted","Data":"eb9bfa7e6d2c3f6e676c90c70098ed1b209a57c6c641ebe5b08bfb896a4460b3"} Jan 21 07:18:48 crc kubenswrapper[4893]: I0121 07:18:48.344663 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-84df8fdfdb-8dxsk" event={"ID":"96d1b606-34ad-4e36-ad61-6db3b4a7c3e1","Type":"ContainerStarted","Data":"186a1d1fbbe587858b7d65a3e3d819601c2b15e5f6afb9d61e13a1623b7c2cf4"} Jan 21 07:18:48 crc kubenswrapper[4893]: I0121 07:18:48.364314 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=13.364288415 podStartE2EDuration="13.364288415s" podCreationTimestamp="2026-01-21 07:18:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:18:48.3568841 +0000 UTC m=+1469.587230002" 
watchObservedRunningTime="2026-01-21 07:18:48.364288415 +0000 UTC m=+1469.594634317" Jan 21 07:18:48 crc kubenswrapper[4893]: I0121 07:18:48.387852 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-6bd56d5cbf-gkdlb" podStartSLOduration=7.551028692 podStartE2EDuration="10.387822029s" podCreationTimestamp="2026-01-21 07:18:38 +0000 UTC" firstStartedPulling="2026-01-21 07:18:43.970392106 +0000 UTC m=+1465.200738008" lastFinishedPulling="2026-01-21 07:18:46.807185443 +0000 UTC m=+1468.037531345" observedRunningTime="2026-01-21 07:18:48.374243794 +0000 UTC m=+1469.604589696" watchObservedRunningTime="2026-01-21 07:18:48.387822029 +0000 UTC m=+1469.618167931" Jan 21 07:18:48 crc kubenswrapper[4893]: I0121 07:18:48.407893 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-84df8fdfdb-8dxsk" podStartSLOduration=7.574287677 podStartE2EDuration="10.407861s" podCreationTimestamp="2026-01-21 07:18:38 +0000 UTC" firstStartedPulling="2026-01-21 07:18:43.984737643 +0000 UTC m=+1465.215083545" lastFinishedPulling="2026-01-21 07:18:46.818310966 +0000 UTC m=+1468.048656868" observedRunningTime="2026-01-21 07:18:48.394845672 +0000 UTC m=+1469.625191574" watchObservedRunningTime="2026-01-21 07:18:48.407861 +0000 UTC m=+1469.638206902" Jan 21 07:18:48 crc kubenswrapper[4893]: I0121 07:18:48.427650 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-wrnx6" podStartSLOduration=3.048108479 podStartE2EDuration="42.427629985s" podCreationTimestamp="2026-01-21 07:18:06 +0000 UTC" firstStartedPulling="2026-01-21 07:18:07.447028849 +0000 UTC m=+1428.677374751" lastFinishedPulling="2026-01-21 07:18:46.826550345 +0000 UTC m=+1468.056896257" observedRunningTime="2026-01-21 07:18:48.421214228 +0000 UTC m=+1469.651560130" watchObservedRunningTime="2026-01-21 07:18:48.427629985 +0000 UTC m=+1469.657975897" Jan 21 07:18:53 crc kubenswrapper[4893]: I0121 07:18:53.508804 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7fc4c6bb88-6pfmp" podUID="4b445f12-f3bf-41d9-91f9-56def2b2694b" containerName="barbican-api-log" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 07:18:53 crc kubenswrapper[4893]: I0121 07:18:53.513568 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-6975c6c74b-n77t4" podUID="096c770d-b6c0-4e2d-85fd-a06335c5778d" containerName="barbican-api-log" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 07:18:54 crc kubenswrapper[4893]: I0121 07:18:54.276180 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7fc4c6bb88-6pfmp" Jan 21 07:18:54 crc kubenswrapper[4893]: I0121 07:18:54.332274 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7fc4c6bb88-6pfmp" Jan 21 07:18:54 crc kubenswrapper[4893]: I0121 07:18:54.421728 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6975c6c74b-n77t4"] Jan 21 07:18:54 crc kubenswrapper[4893]: I0121 07:18:54.422864 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6975c6c74b-n77t4" podUID="096c770d-b6c0-4e2d-85fd-a06335c5778d" containerName="barbican-api" containerID="cri-o://90b487ab46ba83554fb2449c1a655d2268eec3beb3c58bd332660be453a2aedc" gracePeriod=30 Jan 21 07:18:54 crc kubenswrapper[4893]: I0121 07:18:54.426822 
4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6975c6c74b-n77t4" podUID="096c770d-b6c0-4e2d-85fd-a06335c5778d" containerName="barbican-api-log" containerID="cri-o://4427a1db0d8bc328cc8ce5e801b5da395783cf9fe7637b09d71c5d4865aa6b5f" gracePeriod=30 Jan 21 07:18:54 crc kubenswrapper[4893]: I0121 07:18:54.437254 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6975c6c74b-n77t4" podUID="096c770d-b6c0-4e2d-85fd-a06335c5778d" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.161:9311/healthcheck\": EOF" Jan 21 07:18:54 crc kubenswrapper[4893]: E0121 07:18:54.496399 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="12e11571-a021-4df2-a0da-69f56335a8c8" Jan 21 07:18:54 crc kubenswrapper[4893]: I0121 07:18:54.511738 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-66cdd4b5b5-ts5nd" Jan 21 07:18:54 crc kubenswrapper[4893]: I0121 07:18:54.519590 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"12e11571-a021-4df2-a0da-69f56335a8c8","Type":"ContainerStarted","Data":"b31663f1992e2af6945268e98496bc994813d8dc47da9902cf409a585e463f12"} Jan 21 07:18:54 crc kubenswrapper[4893]: I0121 07:18:54.519729 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 07:18:54 crc kubenswrapper[4893]: I0121 07:18:54.519743 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="12e11571-a021-4df2-a0da-69f56335a8c8" containerName="proxy-httpd" containerID="cri-o://b31663f1992e2af6945268e98496bc994813d8dc47da9902cf409a585e463f12" gracePeriod=30 Jan 21 07:18:54 crc kubenswrapper[4893]: I0121 07:18:54.519780 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="12e11571-a021-4df2-a0da-69f56335a8c8" containerName="sg-core" containerID="cri-o://38f2063ef2f3f9534be2a583dd44bdf6e6c92f12136106182a4695d712b1d793" gracePeriod=30 Jan 21 07:18:54 crc kubenswrapper[4893]: I0121 07:18:54.529582 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="12e11571-a021-4df2-a0da-69f56335a8c8" containerName="ceilometer-notification-agent" containerID="cri-o://b8ee096a91a103ddcd42ac9bac6ae684868e8a9bc43404e2173501e612512c0c" gracePeriod=30 Jan 21 07:18:54 crc kubenswrapper[4893]: I0121 07:18:54.539175 4893 generic.go:334] "Generic (PLEG): container finished" podID="5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619" containerID="eb9bfa7e6d2c3f6e676c90c70098ed1b209a57c6c641ebe5b08bfb896a4460b3" exitCode=0 Jan 21 07:18:54 crc kubenswrapper[4893]: I0121 07:18:54.539339 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-wrnx6" event={"ID":"5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619","Type":"ContainerDied","Data":"eb9bfa7e6d2c3f6e676c90c70098ed1b209a57c6c641ebe5b08bfb896a4460b3"} Jan 21 07:18:54 crc kubenswrapper[4893]: I0121 07:18:54.860934 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 21 07:18:54 crc kubenswrapper[4893]: I0121 07:18:54.861078 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/glance-default-external-api-0" Jan 21 07:18:54 crc kubenswrapper[4893]: I0121 07:18:54.899400 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-zvqkt"] Jan 21 07:18:54 crc kubenswrapper[4893]: I0121 07:18:54.899870 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6f6f8cb849-zvqkt" podUID="a7f876d3-b77c-46b7-98da-23948f79fd05" containerName="dnsmasq-dns" containerID="cri-o://4e66118fe3ca53fd7b00317ae3e4293c8ae81730076d34e289f44621e7ece618" gracePeriod=10 Jan 21 07:18:54 crc kubenswrapper[4893]: I0121 07:18:54.924509 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 21 07:18:54 crc kubenswrapper[4893]: I0121 07:18:54.930852 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 21 07:18:55 crc kubenswrapper[4893]: I0121 07:18:55.396237 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6f8cb849-zvqkt" Jan 21 07:18:55 crc kubenswrapper[4893]: I0121 07:18:55.550214 4893 generic.go:334] "Generic (PLEG): container finished" podID="12e11571-a021-4df2-a0da-69f56335a8c8" containerID="38f2063ef2f3f9534be2a583dd44bdf6e6c92f12136106182a4695d712b1d793" exitCode=2 Jan 21 07:18:55 crc kubenswrapper[4893]: I0121 07:18:55.550339 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"12e11571-a021-4df2-a0da-69f56335a8c8","Type":"ContainerDied","Data":"38f2063ef2f3f9534be2a583dd44bdf6e6c92f12136106182a4695d712b1d793"} Jan 21 07:18:55 crc kubenswrapper[4893]: I0121 07:18:55.553473 4893 generic.go:334] "Generic (PLEG): container finished" podID="a7f876d3-b77c-46b7-98da-23948f79fd05" containerID="4e66118fe3ca53fd7b00317ae3e4293c8ae81730076d34e289f44621e7ece618" exitCode=0 Jan 21 07:18:55 crc kubenswrapper[4893]: I0121 07:18:55.553584 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-zvqkt" event={"ID":"a7f876d3-b77c-46b7-98da-23948f79fd05","Type":"ContainerDied","Data":"4e66118fe3ca53fd7b00317ae3e4293c8ae81730076d34e289f44621e7ece618"} Jan 21 07:18:55 crc kubenswrapper[4893]: I0121 07:18:55.553728 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-zvqkt" event={"ID":"a7f876d3-b77c-46b7-98da-23948f79fd05","Type":"ContainerDied","Data":"c8959effe56217d2a7dce3d9e655e5ab095256ccf7c718c49ff3afcd9040de10"} Jan 21 07:18:55 crc kubenswrapper[4893]: I0121 07:18:55.553818 4893 scope.go:117] "RemoveContainer" containerID="4e66118fe3ca53fd7b00317ae3e4293c8ae81730076d34e289f44621e7ece618" Jan 21 07:18:55 crc kubenswrapper[4893]: I0121 07:18:55.553620 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f6f8cb849-zvqkt" Jan 21 07:18:55 crc kubenswrapper[4893]: I0121 07:18:55.562576 4893 generic.go:334] "Generic (PLEG): container finished" podID="096c770d-b6c0-4e2d-85fd-a06335c5778d" containerID="4427a1db0d8bc328cc8ce5e801b5da395783cf9fe7637b09d71c5d4865aa6b5f" exitCode=143 Jan 21 07:18:55 crc kubenswrapper[4893]: I0121 07:18:55.562710 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6975c6c74b-n77t4" event={"ID":"096c770d-b6c0-4e2d-85fd-a06335c5778d","Type":"ContainerDied","Data":"4427a1db0d8bc328cc8ce5e801b5da395783cf9fe7637b09d71c5d4865aa6b5f"} Jan 21 07:18:55 crc kubenswrapper[4893]: I0121 07:18:55.563114 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 21 07:18:55 crc kubenswrapper[4893]: I0121 07:18:55.563330 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 21 07:18:55 crc kubenswrapper[4893]: I0121 07:18:55.567314 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7f876d3-b77c-46b7-98da-23948f79fd05-ovsdbserver-sb\") pod \"a7f876d3-b77c-46b7-98da-23948f79fd05\" (UID: \"a7f876d3-b77c-46b7-98da-23948f79fd05\") " Jan 21 07:18:55 crc kubenswrapper[4893]: I0121 07:18:55.567390 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7f876d3-b77c-46b7-98da-23948f79fd05-ovsdbserver-nb\") pod \"a7f876d3-b77c-46b7-98da-23948f79fd05\" (UID: \"a7f876d3-b77c-46b7-98da-23948f79fd05\") " Jan 21 07:18:55 crc kubenswrapper[4893]: I0121 07:18:55.567436 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7f876d3-b77c-46b7-98da-23948f79fd05-config\") pod \"a7f876d3-b77c-46b7-98da-23948f79fd05\" (UID: \"a7f876d3-b77c-46b7-98da-23948f79fd05\") " Jan 21 07:18:55 crc kubenswrapper[4893]: I0121 07:18:55.567473 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsv4g\" (UniqueName: \"kubernetes.io/projected/a7f876d3-b77c-46b7-98da-23948f79fd05-kube-api-access-hsv4g\") pod \"a7f876d3-b77c-46b7-98da-23948f79fd05\" (UID: \"a7f876d3-b77c-46b7-98da-23948f79fd05\") " Jan 21 07:18:55 crc kubenswrapper[4893]: I0121 07:18:55.567574 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a7f876d3-b77c-46b7-98da-23948f79fd05-dns-swift-storage-0\") pod \"a7f876d3-b77c-46b7-98da-23948f79fd05\" (UID: \"a7f876d3-b77c-46b7-98da-23948f79fd05\") " Jan 21 07:18:55 crc kubenswrapper[4893]: I0121 07:18:55.567638 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7f876d3-b77c-46b7-98da-23948f79fd05-dns-svc\") pod \"a7f876d3-b77c-46b7-98da-23948f79fd05\" (UID: \"a7f876d3-b77c-46b7-98da-23948f79fd05\") " Jan 21 07:18:55 crc kubenswrapper[4893]: I0121 07:18:55.580173 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7f876d3-b77c-46b7-98da-23948f79fd05-kube-api-access-hsv4g" (OuterVolumeSpecName: "kube-api-access-hsv4g") pod "a7f876d3-b77c-46b7-98da-23948f79fd05" (UID: "a7f876d3-b77c-46b7-98da-23948f79fd05"). InnerVolumeSpecName "kube-api-access-hsv4g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:18:55 crc kubenswrapper[4893]: I0121 07:18:55.640950 4893 scope.go:117] "RemoveContainer" containerID="f8153be162818e61258a1cfeae6bd61fac1aadb903a09f08d8e3e896a1188ca6" Jan 21 07:18:55 crc kubenswrapper[4893]: I0121 07:18:55.674395 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsv4g\" (UniqueName: \"kubernetes.io/projected/a7f876d3-b77c-46b7-98da-23948f79fd05-kube-api-access-hsv4g\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:55 crc kubenswrapper[4893]: I0121 07:18:55.688296 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7f876d3-b77c-46b7-98da-23948f79fd05-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a7f876d3-b77c-46b7-98da-23948f79fd05" (UID: "a7f876d3-b77c-46b7-98da-23948f79fd05"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:18:55 crc kubenswrapper[4893]: I0121 07:18:55.700571 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7f876d3-b77c-46b7-98da-23948f79fd05-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a7f876d3-b77c-46b7-98da-23948f79fd05" (UID: "a7f876d3-b77c-46b7-98da-23948f79fd05"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:18:55 crc kubenswrapper[4893]: I0121 07:18:55.704571 4893 scope.go:117] "RemoveContainer" containerID="4e66118fe3ca53fd7b00317ae3e4293c8ae81730076d34e289f44621e7ece618" Jan 21 07:18:55 crc kubenswrapper[4893]: E0121 07:18:55.705434 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e66118fe3ca53fd7b00317ae3e4293c8ae81730076d34e289f44621e7ece618\": container with ID starting with 4e66118fe3ca53fd7b00317ae3e4293c8ae81730076d34e289f44621e7ece618 not found: ID does not exist" containerID="4e66118fe3ca53fd7b00317ae3e4293c8ae81730076d34e289f44621e7ece618" Jan 21 07:18:55 crc kubenswrapper[4893]: I0121 07:18:55.705482 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e66118fe3ca53fd7b00317ae3e4293c8ae81730076d34e289f44621e7ece618"} err="failed to get container status \"4e66118fe3ca53fd7b00317ae3e4293c8ae81730076d34e289f44621e7ece618\": rpc error: code = NotFound desc = could not find container \"4e66118fe3ca53fd7b00317ae3e4293c8ae81730076d34e289f44621e7ece618\": container with ID starting with 4e66118fe3ca53fd7b00317ae3e4293c8ae81730076d34e289f44621e7ece618 not found: ID does not exist" Jan 21 07:18:55 crc kubenswrapper[4893]: I0121 07:18:55.705545 4893 scope.go:117] "RemoveContainer" containerID="f8153be162818e61258a1cfeae6bd61fac1aadb903a09f08d8e3e896a1188ca6" Jan 21 07:18:55 crc kubenswrapper[4893]: E0121 07:18:55.705802 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8153be162818e61258a1cfeae6bd61fac1aadb903a09f08d8e3e896a1188ca6\": container with ID starting with f8153be162818e61258a1cfeae6bd61fac1aadb903a09f08d8e3e896a1188ca6 not found: ID does not exist" containerID="f8153be162818e61258a1cfeae6bd61fac1aadb903a09f08d8e3e896a1188ca6" Jan 21 07:18:55 crc kubenswrapper[4893]: I0121 07:18:55.705825 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8153be162818e61258a1cfeae6bd61fac1aadb903a09f08d8e3e896a1188ca6"} err="failed to get 
container status \"f8153be162818e61258a1cfeae6bd61fac1aadb903a09f08d8e3e896a1188ca6\": rpc error: code = NotFound desc = could not find container \"f8153be162818e61258a1cfeae6bd61fac1aadb903a09f08d8e3e896a1188ca6\": container with ID starting with f8153be162818e61258a1cfeae6bd61fac1aadb903a09f08d8e3e896a1188ca6 not found: ID does not exist" Jan 21 07:18:55 crc kubenswrapper[4893]: I0121 07:18:55.712030 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7f876d3-b77c-46b7-98da-23948f79fd05-config" (OuterVolumeSpecName: "config") pod "a7f876d3-b77c-46b7-98da-23948f79fd05" (UID: "a7f876d3-b77c-46b7-98da-23948f79fd05"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:18:55 crc kubenswrapper[4893]: I0121 07:18:55.721332 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7f876d3-b77c-46b7-98da-23948f79fd05-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a7f876d3-b77c-46b7-98da-23948f79fd05" (UID: "a7f876d3-b77c-46b7-98da-23948f79fd05"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:18:55 crc kubenswrapper[4893]: I0121 07:18:55.729257 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7f876d3-b77c-46b7-98da-23948f79fd05-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a7f876d3-b77c-46b7-98da-23948f79fd05" (UID: "a7f876d3-b77c-46b7-98da-23948f79fd05"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:18:55 crc kubenswrapper[4893]: I0121 07:18:55.776016 4893 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7f876d3-b77c-46b7-98da-23948f79fd05-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:55 crc kubenswrapper[4893]: I0121 07:18:55.776053 4893 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7f876d3-b77c-46b7-98da-23948f79fd05-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:55 crc kubenswrapper[4893]: I0121 07:18:55.776069 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7f876d3-b77c-46b7-98da-23948f79fd05-config\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:55 crc kubenswrapper[4893]: I0121 07:18:55.776084 4893 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a7f876d3-b77c-46b7-98da-23948f79fd05-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:55 crc kubenswrapper[4893]: I0121 07:18:55.776098 4893 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7f876d3-b77c-46b7-98da-23948f79fd05-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:55 crc kubenswrapper[4893]: I0121 07:18:55.944533 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-zvqkt"] Jan 21 07:18:55 crc kubenswrapper[4893]: I0121 07:18:55.956633 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-zvqkt"] Jan 21 07:18:55 crc kubenswrapper[4893]: I0121 07:18:55.968005 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-wrnx6" Jan 21 07:18:56 crc kubenswrapper[4893]: I0121 07:18:56.088891 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619-db-sync-config-data\") pod \"5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619\" (UID: \"5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619\") " Jan 21 07:18:56 crc kubenswrapper[4893]: I0121 07:18:56.088973 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619-etc-machine-id\") pod \"5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619\" (UID: \"5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619\") " Jan 21 07:18:56 crc kubenswrapper[4893]: I0121 07:18:56.089058 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619-scripts\") pod \"5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619\" (UID: \"5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619\") " Jan 21 07:18:56 crc kubenswrapper[4893]: I0121 07:18:56.089188 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619-config-data\") pod \"5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619\" (UID: \"5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619\") " Jan 21 07:18:56 crc kubenswrapper[4893]: I0121 07:18:56.089204 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619-combined-ca-bundle\") pod \"5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619\" (UID: \"5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619\") " Jan 21 07:18:56 crc kubenswrapper[4893]: I0121 07:18:56.089254 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64976\" (UniqueName: \"kubernetes.io/projected/5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619-kube-api-access-64976\") pod \"5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619\" (UID: \"5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619\") " Jan 21 07:18:56 crc kubenswrapper[4893]: I0121 07:18:56.090214 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619" (UID: "5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 07:18:56 crc kubenswrapper[4893]: I0121 07:18:56.095799 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619" (UID: "5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:18:56 crc kubenswrapper[4893]: I0121 07:18:56.095834 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619-scripts" (OuterVolumeSpecName: "scripts") pod "5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619" (UID: "5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:18:56 crc kubenswrapper[4893]: I0121 07:18:56.096003 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619-kube-api-access-64976" (OuterVolumeSpecName: "kube-api-access-64976") pod "5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619" (UID: "5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619"). InnerVolumeSpecName "kube-api-access-64976". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:18:56 crc kubenswrapper[4893]: I0121 07:18:56.120018 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619" (UID: "5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:18:56 crc kubenswrapper[4893]: I0121 07:18:56.137863 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619-config-data" (OuterVolumeSpecName: "config-data") pod "5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619" (UID: "5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:18:56 crc kubenswrapper[4893]: I0121 07:18:56.191649 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:56 crc kubenswrapper[4893]: I0121 07:18:56.191715 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:56 crc kubenswrapper[4893]: I0121 07:18:56.191730 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64976\" (UniqueName: \"kubernetes.io/projected/5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619-kube-api-access-64976\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:56 crc kubenswrapper[4893]: I0121 07:18:56.191741 4893 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:56 crc kubenswrapper[4893]: I0121 07:18:56.191752 4893 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:56 crc kubenswrapper[4893]: I0121 07:18:56.191786 4893 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:18:56 crc kubenswrapper[4893]: I0121 07:18:56.402622 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 21 07:18:56 crc kubenswrapper[4893]: I0121 07:18:56.402686 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 21 07:18:56 crc kubenswrapper[4893]: I0121 07:18:56.467394 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" 
Jan 21 07:18:56 crc kubenswrapper[4893]: I0121 07:18:56.492990 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 21 07:18:56 crc kubenswrapper[4893]: I0121 07:18:56.579184 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-wrnx6" Jan 21 07:18:56 crc kubenswrapper[4893]: I0121 07:18:56.580132 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-wrnx6" event={"ID":"5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619","Type":"ContainerDied","Data":"dfc7e157f158e3ce60226ca97fc4982b9d7209482bad9a0b71d5a1ee31c86d25"} Jan 21 07:18:56 crc kubenswrapper[4893]: I0121 07:18:56.580176 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfc7e157f158e3ce60226ca97fc4982b9d7209482bad9a0b71d5a1ee31c86d25" Jan 21 07:18:56 crc kubenswrapper[4893]: I0121 07:18:56.580204 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 07:18:56 crc kubenswrapper[4893]: I0121 07:18:56.581258 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 07:18:57 crc kubenswrapper[4893]: I0121 07:18:57.338979 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-6d8wf"] Jan 21 07:18:57 crc kubenswrapper[4893]: E0121 07:18:57.348293 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7f876d3-b77c-46b7-98da-23948f79fd05" containerName="init" Jan 21 07:18:57 crc kubenswrapper[4893]: I0121 07:18:57.348332 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7f876d3-b77c-46b7-98da-23948f79fd05" containerName="init" Jan 21 07:18:57 crc kubenswrapper[4893]: E0121 07:18:57.348362 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7e43898-c671-442d-97dd-b93c958c550a" containerName="dnsmasq-dns" Jan 21 07:18:57 crc kubenswrapper[4893]: I0121 07:18:57.348384 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7e43898-c671-442d-97dd-b93c958c550a" containerName="dnsmasq-dns" Jan 21 07:18:57 crc kubenswrapper[4893]: E0121 07:18:57.348423 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619" containerName="cinder-db-sync" Jan 21 07:18:57 crc kubenswrapper[4893]: I0121 07:18:57.348433 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619" containerName="cinder-db-sync" Jan 21 07:18:57 crc kubenswrapper[4893]: E0121 07:18:57.348474 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7f876d3-b77c-46b7-98da-23948f79fd05" containerName="dnsmasq-dns" Jan 21 07:18:57 crc kubenswrapper[4893]: I0121 07:18:57.348486 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7f876d3-b77c-46b7-98da-23948f79fd05" containerName="dnsmasq-dns" Jan 21 07:18:57 crc kubenswrapper[4893]: E0121 07:18:57.348504 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7e43898-c671-442d-97dd-b93c958c550a" containerName="init" Jan 21 07:18:57 crc kubenswrapper[4893]: I0121 07:18:57.348513 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7e43898-c671-442d-97dd-b93c958c550a" containerName="init" Jan 21 07:18:57 crc kubenswrapper[4893]: I0121 07:18:57.348910 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7e43898-c671-442d-97dd-b93c958c550a" containerName="dnsmasq-dns" Jan 21 07:18:57 crc 
kubenswrapper[4893]: I0121 07:18:57.348959 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7f876d3-b77c-46b7-98da-23948f79fd05" containerName="dnsmasq-dns" Jan 21 07:18:57 crc kubenswrapper[4893]: I0121 07:18:57.348990 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619" containerName="cinder-db-sync" Jan 21 07:18:57 crc kubenswrapper[4893]: I0121 07:18:57.350443 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75dbb546bf-6d8wf" Jan 21 07:18:57 crc kubenswrapper[4893]: I0121 07:18:57.649442 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7f876d3-b77c-46b7-98da-23948f79fd05" path="/var/lib/kubelet/pods/a7f876d3-b77c-46b7-98da-23948f79fd05/volumes" Jan 21 07:18:57 crc kubenswrapper[4893]: I0121 07:18:57.716075 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-6d8wf"] Jan 21 07:18:57 crc kubenswrapper[4893]: I0121 07:18:57.722172 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/123f1844-92a5-418f-a3df-b9f44943a91d-dns-swift-storage-0\") pod \"dnsmasq-dns-75dbb546bf-6d8wf\" (UID: \"123f1844-92a5-418f-a3df-b9f44943a91d\") " pod="openstack/dnsmasq-dns-75dbb546bf-6d8wf" Jan 21 07:18:57 crc kubenswrapper[4893]: I0121 07:18:57.722221 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/123f1844-92a5-418f-a3df-b9f44943a91d-ovsdbserver-sb\") pod \"dnsmasq-dns-75dbb546bf-6d8wf\" (UID: \"123f1844-92a5-418f-a3df-b9f44943a91d\") " pod="openstack/dnsmasq-dns-75dbb546bf-6d8wf" Jan 21 07:18:57 crc kubenswrapper[4893]: I0121 07:18:57.722251 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/123f1844-92a5-418f-a3df-b9f44943a91d-dns-svc\") pod \"dnsmasq-dns-75dbb546bf-6d8wf\" (UID: \"123f1844-92a5-418f-a3df-b9f44943a91d\") " pod="openstack/dnsmasq-dns-75dbb546bf-6d8wf" Jan 21 07:18:57 crc kubenswrapper[4893]: I0121 07:18:57.722310 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/123f1844-92a5-418f-a3df-b9f44943a91d-config\") pod \"dnsmasq-dns-75dbb546bf-6d8wf\" (UID: \"123f1844-92a5-418f-a3df-b9f44943a91d\") " pod="openstack/dnsmasq-dns-75dbb546bf-6d8wf" Jan 21 07:18:57 crc kubenswrapper[4893]: I0121 07:18:57.722343 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/123f1844-92a5-418f-a3df-b9f44943a91d-ovsdbserver-nb\") pod \"dnsmasq-dns-75dbb546bf-6d8wf\" (UID: \"123f1844-92a5-418f-a3df-b9f44943a91d\") " pod="openstack/dnsmasq-dns-75dbb546bf-6d8wf" Jan 21 07:18:57 crc kubenswrapper[4893]: I0121 07:18:57.722363 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9wq2\" (UniqueName: \"kubernetes.io/projected/123f1844-92a5-418f-a3df-b9f44943a91d-kube-api-access-m9wq2\") pod \"dnsmasq-dns-75dbb546bf-6d8wf\" (UID: \"123f1844-92a5-418f-a3df-b9f44943a91d\") " pod="openstack/dnsmasq-dns-75dbb546bf-6d8wf" Jan 21 07:18:57 crc kubenswrapper[4893]: I0121 07:18:57.736018 4893 generic.go:334] "Generic (PLEG): container finished" 
podID="12e11571-a021-4df2-a0da-69f56335a8c8" containerID="b8ee096a91a103ddcd42ac9bac6ae684868e8a9bc43404e2173501e612512c0c" exitCode=0 Jan 21 07:18:57 crc kubenswrapper[4893]: I0121 07:18:57.736906 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"12e11571-a021-4df2-a0da-69f56335a8c8","Type":"ContainerDied","Data":"b8ee096a91a103ddcd42ac9bac6ae684868e8a9bc43404e2173501e612512c0c"} Jan 21 07:18:57 crc kubenswrapper[4893]: I0121 07:18:57.751893 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 07:18:57 crc kubenswrapper[4893]: I0121 07:18:57.753558 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 07:18:57 crc kubenswrapper[4893]: I0121 07:18:57.842617 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/123f1844-92a5-418f-a3df-b9f44943a91d-dns-swift-storage-0\") pod \"dnsmasq-dns-75dbb546bf-6d8wf\" (UID: \"123f1844-92a5-418f-a3df-b9f44943a91d\") " pod="openstack/dnsmasq-dns-75dbb546bf-6d8wf" Jan 21 07:18:57 crc kubenswrapper[4893]: I0121 07:18:57.842700 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/123f1844-92a5-418f-a3df-b9f44943a91d-ovsdbserver-sb\") pod \"dnsmasq-dns-75dbb546bf-6d8wf\" (UID: \"123f1844-92a5-418f-a3df-b9f44943a91d\") " pod="openstack/dnsmasq-dns-75dbb546bf-6d8wf" Jan 21 07:18:57 crc kubenswrapper[4893]: I0121 07:18:57.842733 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/123f1844-92a5-418f-a3df-b9f44943a91d-dns-svc\") pod \"dnsmasq-dns-75dbb546bf-6d8wf\" (UID: \"123f1844-92a5-418f-a3df-b9f44943a91d\") " pod="openstack/dnsmasq-dns-75dbb546bf-6d8wf" Jan 21 07:18:57 crc kubenswrapper[4893]: I0121 07:18:57.842784 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/123f1844-92a5-418f-a3df-b9f44943a91d-config\") pod \"dnsmasq-dns-75dbb546bf-6d8wf\" (UID: \"123f1844-92a5-418f-a3df-b9f44943a91d\") " pod="openstack/dnsmasq-dns-75dbb546bf-6d8wf" Jan 21 07:18:57 crc kubenswrapper[4893]: I0121 07:18:57.842825 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/123f1844-92a5-418f-a3df-b9f44943a91d-ovsdbserver-nb\") pod \"dnsmasq-dns-75dbb546bf-6d8wf\" (UID: \"123f1844-92a5-418f-a3df-b9f44943a91d\") " pod="openstack/dnsmasq-dns-75dbb546bf-6d8wf" Jan 21 07:18:57 crc kubenswrapper[4893]: I0121 07:18:57.842847 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9wq2\" (UniqueName: \"kubernetes.io/projected/123f1844-92a5-418f-a3df-b9f44943a91d-kube-api-access-m9wq2\") pod \"dnsmasq-dns-75dbb546bf-6d8wf\" (UID: \"123f1844-92a5-418f-a3df-b9f44943a91d\") " pod="openstack/dnsmasq-dns-75dbb546bf-6d8wf" Jan 21 07:18:57 crc kubenswrapper[4893]: I0121 07:18:57.843918 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/123f1844-92a5-418f-a3df-b9f44943a91d-ovsdbserver-sb\") pod \"dnsmasq-dns-75dbb546bf-6d8wf\" (UID: \"123f1844-92a5-418f-a3df-b9f44943a91d\") " pod="openstack/dnsmasq-dns-75dbb546bf-6d8wf" Jan 21 07:18:57 crc kubenswrapper[4893]: I0121 07:18:57.846742 4893 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/123f1844-92a5-418f-a3df-b9f44943a91d-dns-svc\") pod \"dnsmasq-dns-75dbb546bf-6d8wf\" (UID: \"123f1844-92a5-418f-a3df-b9f44943a91d\") " pod="openstack/dnsmasq-dns-75dbb546bf-6d8wf" Jan 21 07:18:57 crc kubenswrapper[4893]: I0121 07:18:57.847285 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/123f1844-92a5-418f-a3df-b9f44943a91d-config\") pod \"dnsmasq-dns-75dbb546bf-6d8wf\" (UID: \"123f1844-92a5-418f-a3df-b9f44943a91d\") " pod="openstack/dnsmasq-dns-75dbb546bf-6d8wf" Jan 21 07:18:57 crc kubenswrapper[4893]: I0121 07:18:57.849588 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/123f1844-92a5-418f-a3df-b9f44943a91d-ovsdbserver-nb\") pod \"dnsmasq-dns-75dbb546bf-6d8wf\" (UID: \"123f1844-92a5-418f-a3df-b9f44943a91d\") " pod="openstack/dnsmasq-dns-75dbb546bf-6d8wf" Jan 21 07:18:57 crc kubenswrapper[4893]: I0121 07:18:57.849606 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/123f1844-92a5-418f-a3df-b9f44943a91d-dns-swift-storage-0\") pod \"dnsmasq-dns-75dbb546bf-6d8wf\" (UID: \"123f1844-92a5-418f-a3df-b9f44943a91d\") " pod="openstack/dnsmasq-dns-75dbb546bf-6d8wf" Jan 21 07:18:57 crc kubenswrapper[4893]: I0121 07:18:57.856302 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-nzqck" Jan 21 07:18:57 crc kubenswrapper[4893]: I0121 07:18:57.856634 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 21 07:18:57 crc kubenswrapper[4893]: I0121 07:18:57.856811 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 21 07:18:57 crc kubenswrapper[4893]: I0121 07:18:57.856974 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 21 07:18:57 crc kubenswrapper[4893]: I0121 07:18:57.880009 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 07:18:57 crc kubenswrapper[4893]: I0121 07:18:57.893815 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9wq2\" (UniqueName: \"kubernetes.io/projected/123f1844-92a5-418f-a3df-b9f44943a91d-kube-api-access-m9wq2\") pod \"dnsmasq-dns-75dbb546bf-6d8wf\" (UID: \"123f1844-92a5-418f-a3df-b9f44943a91d\") " pod="openstack/dnsmasq-dns-75dbb546bf-6d8wf" Jan 21 07:18:57 crc kubenswrapper[4893]: I0121 07:18:57.925853 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 21 07:18:57 crc kubenswrapper[4893]: I0121 07:18:57.927730 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 21 07:18:57 crc kubenswrapper[4893]: I0121 07:18:57.932744 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 21 07:18:57 crc kubenswrapper[4893]: I0121 07:18:57.933405 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75dbb546bf-6d8wf" Jan 21 07:18:57 crc kubenswrapper[4893]: I0121 07:18:57.944163 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qs4p\" (UniqueName: \"kubernetes.io/projected/2c02b475-9db8-48a0-926c-d3e1e31be7e6-kube-api-access-9qs4p\") pod \"cinder-scheduler-0\" (UID: \"2c02b475-9db8-48a0-926c-d3e1e31be7e6\") " pod="openstack/cinder-scheduler-0" Jan 21 07:18:57 crc kubenswrapper[4893]: I0121 07:18:57.944232 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c02b475-9db8-48a0-926c-d3e1e31be7e6-scripts\") pod \"cinder-scheduler-0\" (UID: \"2c02b475-9db8-48a0-926c-d3e1e31be7e6\") " pod="openstack/cinder-scheduler-0" Jan 21 07:18:57 crc kubenswrapper[4893]: I0121 07:18:57.944307 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c02b475-9db8-48a0-926c-d3e1e31be7e6-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2c02b475-9db8-48a0-926c-d3e1e31be7e6\") " pod="openstack/cinder-scheduler-0" Jan 21 07:18:57 crc kubenswrapper[4893]: I0121 07:18:57.944323 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c02b475-9db8-48a0-926c-d3e1e31be7e6-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2c02b475-9db8-48a0-926c-d3e1e31be7e6\") " pod="openstack/cinder-scheduler-0" Jan 21 07:18:57 crc kubenswrapper[4893]: I0121 07:18:57.944338 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2c02b475-9db8-48a0-926c-d3e1e31be7e6-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2c02b475-9db8-48a0-926c-d3e1e31be7e6\") " pod="openstack/cinder-scheduler-0" Jan 21 07:18:57 crc kubenswrapper[4893]: I0121 07:18:57.944369 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c02b475-9db8-48a0-926c-d3e1e31be7e6-config-data\") pod \"cinder-scheduler-0\" (UID: \"2c02b475-9db8-48a0-926c-d3e1e31be7e6\") " pod="openstack/cinder-scheduler-0" Jan 21 07:18:57 crc kubenswrapper[4893]: I0121 07:18:57.959325 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 07:18:58 crc kubenswrapper[4893]: I0121 07:18:58.046591 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3-scripts\") pod \"cinder-api-0\" (UID: \"7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3\") " pod="openstack/cinder-api-0" Jan 21 07:18:58 crc kubenswrapper[4893]: I0121 07:18:58.046959 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ct6l\" (UniqueName: \"kubernetes.io/projected/7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3-kube-api-access-4ct6l\") pod \"cinder-api-0\" (UID: \"7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3\") " pod="openstack/cinder-api-0" Jan 21 07:18:58 crc kubenswrapper[4893]: I0121 07:18:58.046989 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c02b475-9db8-48a0-926c-d3e1e31be7e6-config-data\") pod 
\"cinder-scheduler-0\" (UID: \"2c02b475-9db8-48a0-926c-d3e1e31be7e6\") " pod="openstack/cinder-scheduler-0" Jan 21 07:18:58 crc kubenswrapper[4893]: I0121 07:18:58.047012 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3\") " pod="openstack/cinder-api-0" Jan 21 07:18:58 crc kubenswrapper[4893]: I0121 07:18:58.047031 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3-logs\") pod \"cinder-api-0\" (UID: \"7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3\") " pod="openstack/cinder-api-0" Jan 21 07:18:58 crc kubenswrapper[4893]: I0121 07:18:58.047482 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qs4p\" (UniqueName: \"kubernetes.io/projected/2c02b475-9db8-48a0-926c-d3e1e31be7e6-kube-api-access-9qs4p\") pod \"cinder-scheduler-0\" (UID: \"2c02b475-9db8-48a0-926c-d3e1e31be7e6\") " pod="openstack/cinder-scheduler-0" Jan 21 07:18:58 crc kubenswrapper[4893]: I0121 07:18:58.047565 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3-config-data\") pod \"cinder-api-0\" (UID: \"7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3\") " pod="openstack/cinder-api-0" Jan 21 07:18:58 crc kubenswrapper[4893]: I0121 07:18:58.047691 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c02b475-9db8-48a0-926c-d3e1e31be7e6-scripts\") pod \"cinder-scheduler-0\" (UID: \"2c02b475-9db8-48a0-926c-d3e1e31be7e6\") " pod="openstack/cinder-scheduler-0" Jan 21 07:18:58 crc kubenswrapper[4893]: I0121 07:18:58.047734 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3\") " pod="openstack/cinder-api-0" Jan 21 07:18:58 crc kubenswrapper[4893]: I0121 07:18:58.047833 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3-config-data-custom\") pod \"cinder-api-0\" (UID: \"7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3\") " pod="openstack/cinder-api-0" Jan 21 07:18:58 crc kubenswrapper[4893]: I0121 07:18:58.048159 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c02b475-9db8-48a0-926c-d3e1e31be7e6-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2c02b475-9db8-48a0-926c-d3e1e31be7e6\") " pod="openstack/cinder-scheduler-0" Jan 21 07:18:58 crc kubenswrapper[4893]: I0121 07:18:58.048200 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c02b475-9db8-48a0-926c-d3e1e31be7e6-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2c02b475-9db8-48a0-926c-d3e1e31be7e6\") " pod="openstack/cinder-scheduler-0" Jan 21 07:18:58 crc kubenswrapper[4893]: I0121 07:18:58.048221 4893 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2c02b475-9db8-48a0-926c-d3e1e31be7e6-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2c02b475-9db8-48a0-926c-d3e1e31be7e6\") " pod="openstack/cinder-scheduler-0" Jan 21 07:18:58 crc kubenswrapper[4893]: I0121 07:18:58.048419 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2c02b475-9db8-48a0-926c-d3e1e31be7e6-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2c02b475-9db8-48a0-926c-d3e1e31be7e6\") " pod="openstack/cinder-scheduler-0" Jan 21 07:18:58 crc kubenswrapper[4893]: I0121 07:18:58.052422 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c02b475-9db8-48a0-926c-d3e1e31be7e6-scripts\") pod \"cinder-scheduler-0\" (UID: \"2c02b475-9db8-48a0-926c-d3e1e31be7e6\") " pod="openstack/cinder-scheduler-0" Jan 21 07:18:58 crc kubenswrapper[4893]: I0121 07:18:58.054196 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c02b475-9db8-48a0-926c-d3e1e31be7e6-config-data\") pod \"cinder-scheduler-0\" (UID: \"2c02b475-9db8-48a0-926c-d3e1e31be7e6\") " pod="openstack/cinder-scheduler-0" Jan 21 07:18:58 crc kubenswrapper[4893]: I0121 07:18:58.054321 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c02b475-9db8-48a0-926c-d3e1e31be7e6-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2c02b475-9db8-48a0-926c-d3e1e31be7e6\") " pod="openstack/cinder-scheduler-0" Jan 21 07:18:58 crc kubenswrapper[4893]: I0121 07:18:58.065886 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c02b475-9db8-48a0-926c-d3e1e31be7e6-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2c02b475-9db8-48a0-926c-d3e1e31be7e6\") " pod="openstack/cinder-scheduler-0" Jan 21 07:18:58 crc kubenswrapper[4893]: I0121 07:18:58.066699 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qs4p\" (UniqueName: \"kubernetes.io/projected/2c02b475-9db8-48a0-926c-d3e1e31be7e6-kube-api-access-9qs4p\") pod \"cinder-scheduler-0\" (UID: \"2c02b475-9db8-48a0-926c-d3e1e31be7e6\") " pod="openstack/cinder-scheduler-0" Jan 21 07:18:58 crc kubenswrapper[4893]: I0121 07:18:58.154988 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3-config-data-custom\") pod \"cinder-api-0\" (UID: \"7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3\") " pod="openstack/cinder-api-0" Jan 21 07:18:58 crc kubenswrapper[4893]: I0121 07:18:58.155105 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3-scripts\") pod \"cinder-api-0\" (UID: \"7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3\") " pod="openstack/cinder-api-0" Jan 21 07:18:58 crc kubenswrapper[4893]: I0121 07:18:58.155182 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ct6l\" (UniqueName: \"kubernetes.io/projected/7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3-kube-api-access-4ct6l\") pod \"cinder-api-0\" (UID: \"7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3\") " pod="openstack/cinder-api-0" Jan 21 07:18:58 crc kubenswrapper[4893]: 
I0121 07:18:58.155245 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3\") " pod="openstack/cinder-api-0" Jan 21 07:18:58 crc kubenswrapper[4893]: I0121 07:18:58.155275 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3-logs\") pod \"cinder-api-0\" (UID: \"7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3\") " pod="openstack/cinder-api-0" Jan 21 07:18:58 crc kubenswrapper[4893]: I0121 07:18:58.155419 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3-config-data\") pod \"cinder-api-0\" (UID: \"7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3\") " pod="openstack/cinder-api-0" Jan 21 07:18:58 crc kubenswrapper[4893]: I0121 07:18:58.155501 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3\") " pod="openstack/cinder-api-0" Jan 21 07:18:58 crc kubenswrapper[4893]: I0121 07:18:58.363001 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3\") " pod="openstack/cinder-api-0" Jan 21 07:18:58 crc kubenswrapper[4893]: I0121 07:18:58.366501 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3\") " pod="openstack/cinder-api-0" Jan 21 07:18:58 crc kubenswrapper[4893]: I0121 07:18:58.367443 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3-logs\") pod \"cinder-api-0\" (UID: \"7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3\") " pod="openstack/cinder-api-0" Jan 21 07:18:58 crc kubenswrapper[4893]: I0121 07:18:58.372071 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3-config-data\") pod \"cinder-api-0\" (UID: \"7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3\") " pod="openstack/cinder-api-0" Jan 21 07:18:58 crc kubenswrapper[4893]: I0121 07:18:58.373856 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3-config-data-custom\") pod \"cinder-api-0\" (UID: \"7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3\") " pod="openstack/cinder-api-0" Jan 21 07:18:58 crc kubenswrapper[4893]: I0121 07:18:58.373857 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 07:18:58 crc kubenswrapper[4893]: I0121 07:18:58.375215 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3-scripts\") pod \"cinder-api-0\" (UID: \"7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3\") " pod="openstack/cinder-api-0" Jan 21 07:18:58 crc kubenswrapper[4893]: I0121 07:18:58.393773 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ct6l\" (UniqueName: \"kubernetes.io/projected/7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3-kube-api-access-4ct6l\") pod \"cinder-api-0\" (UID: \"7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3\") " pod="openstack/cinder-api-0" Jan 21 07:18:58 crc kubenswrapper[4893]: I0121 07:18:58.635621 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 21 07:18:58 crc kubenswrapper[4893]: I0121 07:18:58.684318 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-6d8wf"] Jan 21 07:18:58 crc kubenswrapper[4893]: I0121 07:18:58.779577 4893 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 07:18:58 crc kubenswrapper[4893]: I0121 07:18:58.779839 4893 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 07:18:58 crc kubenswrapper[4893]: I0121 07:18:58.780016 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-6d8wf" event={"ID":"123f1844-92a5-418f-a3df-b9f44943a91d","Type":"ContainerStarted","Data":"de66e951dd122f8d9fccafe5cd702a040146481d8af1fda0f512de4ef942e9b4"} Jan 21 07:18:59 crc kubenswrapper[4893]: I0121 07:18:59.210529 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 07:18:59 crc kubenswrapper[4893]: I0121 07:18:59.664731 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6975c6c74b-n77t4" podUID="096c770d-b6c0-4e2d-85fd-a06335c5778d" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.161:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 07:18:59 crc kubenswrapper[4893]: I0121 07:18:59.719288 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6975c6c74b-n77t4" podUID="096c770d-b6c0-4e2d-85fd-a06335c5778d" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.161:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 07:18:59 crc kubenswrapper[4893]: I0121 07:18:59.877818 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2c02b475-9db8-48a0-926c-d3e1e31be7e6","Type":"ContainerStarted","Data":"1c51f3ff0ec725adba1b9d8a2ea8740e4fb58a7231453e1a830ddacef966ec3b"} Jan 21 07:18:59 crc kubenswrapper[4893]: I0121 07:18:59.885591 4893 generic.go:334] "Generic (PLEG): container finished" podID="123f1844-92a5-418f-a3df-b9f44943a91d" containerID="7a828cf64d9ab9610f2ae717d35628bfdb7449ce65a200216400aab16bb473ab" exitCode=0 Jan 21 07:18:59 crc kubenswrapper[4893]: I0121 07:18:59.885716 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-6d8wf" event={"ID":"123f1844-92a5-418f-a3df-b9f44943a91d","Type":"ContainerDied","Data":"7a828cf64d9ab9610f2ae717d35628bfdb7449ce65a200216400aab16bb473ab"} Jan 21 07:18:59 crc kubenswrapper[4893]: I0121 
07:18:59.891473 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 07:19:00 crc kubenswrapper[4893]: I0121 07:19:00.045253 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 21 07:19:00 crc kubenswrapper[4893]: I0121 07:19:00.045356 4893 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 07:19:00 crc kubenswrapper[4893]: I0121 07:19:00.227536 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 21 07:19:00 crc kubenswrapper[4893]: I0121 07:19:00.904098 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3","Type":"ContainerStarted","Data":"0d31d39963ef6b1e376cb536297ce22ed8b29df071bc5ae345a677a5aaf86871"} Jan 21 07:19:00 crc kubenswrapper[4893]: I0121 07:19:00.919169 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-6d8wf" event={"ID":"123f1844-92a5-418f-a3df-b9f44943a91d","Type":"ContainerStarted","Data":"248da601aa8cfe444b4d2752f6c50dd6c452812be254edffbc0ef15e8da7c9ca"} Jan 21 07:19:00 crc kubenswrapper[4893]: I0121 07:19:00.919289 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75dbb546bf-6d8wf" Jan 21 07:19:00 crc kubenswrapper[4893]: I0121 07:19:00.954329 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75dbb546bf-6d8wf" podStartSLOduration=3.954307595 podStartE2EDuration="3.954307595s" podCreationTimestamp="2026-01-21 07:18:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:19:00.951785662 +0000 UTC m=+1482.182131564" watchObservedRunningTime="2026-01-21 07:19:00.954307595 +0000 UTC m=+1482.184653487" Jan 21 07:19:01 crc kubenswrapper[4893]: I0121 07:19:01.141849 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6975c6c74b-n77t4" podUID="096c770d-b6c0-4e2d-85fd-a06335c5778d" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.161:9311/healthcheck\": read tcp 10.217.0.2:41050->10.217.0.161:9311: read: connection reset by peer" Jan 21 07:19:01 crc kubenswrapper[4893]: I0121 07:19:01.145101 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6975c6c74b-n77t4" podUID="096c770d-b6c0-4e2d-85fd-a06335c5778d" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.161:9311/healthcheck\": read tcp 10.217.0.2:41066->10.217.0.161:9311: read: connection reset by peer" Jan 21 07:19:01 crc kubenswrapper[4893]: I0121 07:19:01.411265 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 21 07:19:01 crc kubenswrapper[4893]: I0121 07:19:01.411680 4893 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 07:19:01 crc kubenswrapper[4893]: I0121 07:19:01.949217 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 21 07:19:01 crc kubenswrapper[4893]: I0121 07:19:01.960066 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3","Type":"ContainerStarted","Data":"2daad0eba03eaddbe35ee5c5e835e28e9df0af4d33d51ec50197aac18ec2855d"} Jan 21 07:19:01 crc kubenswrapper[4893]: I0121 07:19:01.966352 4893 generic.go:334] "Generic (PLEG): container finished" podID="096c770d-b6c0-4e2d-85fd-a06335c5778d" containerID="90b487ab46ba83554fb2449c1a655d2268eec3beb3c58bd332660be453a2aedc" exitCode=0 Jan 21 07:19:01 crc kubenswrapper[4893]: I0121 07:19:01.967999 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6975c6c74b-n77t4" event={"ID":"096c770d-b6c0-4e2d-85fd-a06335c5778d","Type":"ContainerDied","Data":"90b487ab46ba83554fb2449c1a655d2268eec3beb3c58bd332660be453a2aedc"} Jan 21 07:19:02 crc kubenswrapper[4893]: I0121 07:19:02.135153 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 21 07:19:02 crc kubenswrapper[4893]: I0121 07:19:02.395957 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6975c6c74b-n77t4" Jan 21 07:19:02 crc kubenswrapper[4893]: I0121 07:19:02.556827 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/096c770d-b6c0-4e2d-85fd-a06335c5778d-logs\") pod \"096c770d-b6c0-4e2d-85fd-a06335c5778d\" (UID: \"096c770d-b6c0-4e2d-85fd-a06335c5778d\") " Jan 21 07:19:02 crc kubenswrapper[4893]: I0121 07:19:02.556869 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/096c770d-b6c0-4e2d-85fd-a06335c5778d-combined-ca-bundle\") pod \"096c770d-b6c0-4e2d-85fd-a06335c5778d\" (UID: \"096c770d-b6c0-4e2d-85fd-a06335c5778d\") " Jan 21 07:19:02 crc kubenswrapper[4893]: I0121 07:19:02.556916 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/096c770d-b6c0-4e2d-85fd-a06335c5778d-config-data-custom\") pod \"096c770d-b6c0-4e2d-85fd-a06335c5778d\" (UID: \"096c770d-b6c0-4e2d-85fd-a06335c5778d\") " Jan 21 07:19:02 crc kubenswrapper[4893]: I0121 07:19:02.557048 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86zhf\" (UniqueName: \"kubernetes.io/projected/096c770d-b6c0-4e2d-85fd-a06335c5778d-kube-api-access-86zhf\") pod \"096c770d-b6c0-4e2d-85fd-a06335c5778d\" (UID: \"096c770d-b6c0-4e2d-85fd-a06335c5778d\") " Jan 21 07:19:02 crc kubenswrapper[4893]: I0121 07:19:02.557094 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/096c770d-b6c0-4e2d-85fd-a06335c5778d-config-data\") pod \"096c770d-b6c0-4e2d-85fd-a06335c5778d\" (UID: \"096c770d-b6c0-4e2d-85fd-a06335c5778d\") " Jan 21 07:19:02 crc kubenswrapper[4893]: I0121 07:19:02.557516 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/096c770d-b6c0-4e2d-85fd-a06335c5778d-logs" (OuterVolumeSpecName: "logs") pod "096c770d-b6c0-4e2d-85fd-a06335c5778d" (UID: "096c770d-b6c0-4e2d-85fd-a06335c5778d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:19:02 crc kubenswrapper[4893]: I0121 07:19:02.561334 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/096c770d-b6c0-4e2d-85fd-a06335c5778d-kube-api-access-86zhf" (OuterVolumeSpecName: "kube-api-access-86zhf") pod "096c770d-b6c0-4e2d-85fd-a06335c5778d" (UID: "096c770d-b6c0-4e2d-85fd-a06335c5778d"). InnerVolumeSpecName "kube-api-access-86zhf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:19:02 crc kubenswrapper[4893]: I0121 07:19:02.563777 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/096c770d-b6c0-4e2d-85fd-a06335c5778d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "096c770d-b6c0-4e2d-85fd-a06335c5778d" (UID: "096c770d-b6c0-4e2d-85fd-a06335c5778d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:19:02 crc kubenswrapper[4893]: I0121 07:19:02.601989 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/096c770d-b6c0-4e2d-85fd-a06335c5778d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "096c770d-b6c0-4e2d-85fd-a06335c5778d" (UID: "096c770d-b6c0-4e2d-85fd-a06335c5778d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:19:02 crc kubenswrapper[4893]: I0121 07:19:02.641320 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/096c770d-b6c0-4e2d-85fd-a06335c5778d-config-data" (OuterVolumeSpecName: "config-data") pod "096c770d-b6c0-4e2d-85fd-a06335c5778d" (UID: "096c770d-b6c0-4e2d-85fd-a06335c5778d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:19:02 crc kubenswrapper[4893]: I0121 07:19:02.659138 4893 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/096c770d-b6c0-4e2d-85fd-a06335c5778d-logs\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:02 crc kubenswrapper[4893]: I0121 07:19:02.659178 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/096c770d-b6c0-4e2d-85fd-a06335c5778d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:02 crc kubenswrapper[4893]: I0121 07:19:02.659191 4893 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/096c770d-b6c0-4e2d-85fd-a06335c5778d-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:02 crc kubenswrapper[4893]: I0121 07:19:02.659200 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86zhf\" (UniqueName: \"kubernetes.io/projected/096c770d-b6c0-4e2d-85fd-a06335c5778d-kube-api-access-86zhf\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:02 crc kubenswrapper[4893]: I0121 07:19:02.659209 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/096c770d-b6c0-4e2d-85fd-a06335c5778d-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:02 crc kubenswrapper[4893]: I0121 07:19:02.985409 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2c02b475-9db8-48a0-926c-d3e1e31be7e6","Type":"ContainerStarted","Data":"1e024db17fe6ceb0fa55e561a7be66afecf28b3c89555643bcedb6b74569112f"} Jan 21 07:19:02 crc kubenswrapper[4893]: I0121 07:19:02.986526 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2c02b475-9db8-48a0-926c-d3e1e31be7e6","Type":"ContainerStarted","Data":"f5f72a2bafc280250bb6deccd9b4f8012801b21bd285c7c7da837477941c93b8"} Jan 21 07:19:02 crc kubenswrapper[4893]: I0121 07:19:02.993604 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3","Type":"ContainerStarted","Data":"7c470b57abbf57cce04f2040e4a5bd292fe1b94c7101a6131f7ae3727a8c7986"} Jan 21 07:19:02 crc kubenswrapper[4893]: I0121 07:19:02.993728 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3" containerName="cinder-api-log" containerID="cri-o://2daad0eba03eaddbe35ee5c5e835e28e9df0af4d33d51ec50197aac18ec2855d" gracePeriod=30 Jan 21 07:19:02 crc kubenswrapper[4893]: I0121 07:19:02.993837 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 21 07:19:02 crc kubenswrapper[4893]: I0121 07:19:02.993879 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3" containerName="cinder-api" containerID="cri-o://7c470b57abbf57cce04f2040e4a5bd292fe1b94c7101a6131f7ae3727a8c7986" gracePeriod=30 Jan 21 07:19:03 crc kubenswrapper[4893]: I0121 07:19:03.002944 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6975c6c74b-n77t4" Jan 21 07:19:03 crc kubenswrapper[4893]: I0121 07:19:03.003021 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6975c6c74b-n77t4" event={"ID":"096c770d-b6c0-4e2d-85fd-a06335c5778d","Type":"ContainerDied","Data":"11226c4822e300cb161d9dcb669f9317e41257ca9066c5fdceac4c47886332be"} Jan 21 07:19:03 crc kubenswrapper[4893]: I0121 07:19:03.003070 4893 scope.go:117] "RemoveContainer" containerID="90b487ab46ba83554fb2449c1a655d2268eec3beb3c58bd332660be453a2aedc" Jan 21 07:19:03 crc kubenswrapper[4893]: I0121 07:19:03.010225 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.321855589 podStartE2EDuration="6.010198682s" podCreationTimestamp="2026-01-21 07:18:57 +0000 UTC" firstStartedPulling="2026-01-21 07:18:59.228922516 +0000 UTC m=+1480.459268418" lastFinishedPulling="2026-01-21 07:19:00.917265609 +0000 UTC m=+1482.147611511" observedRunningTime="2026-01-21 07:19:03.006222466 +0000 UTC m=+1484.236568368" watchObservedRunningTime="2026-01-21 07:19:03.010198682 +0000 UTC m=+1484.240544584" Jan 21 07:19:03 crc kubenswrapper[4893]: I0121 07:19:03.044381 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.044355714 podStartE2EDuration="6.044355714s" podCreationTimestamp="2026-01-21 07:18:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:19:03.035543928 +0000 UTC m=+1484.265889820" watchObservedRunningTime="2026-01-21 07:19:03.044355714 +0000 UTC m=+1484.274701626" Jan 21 07:19:03 crc kubenswrapper[4893]: I0121 07:19:03.053822 4893 scope.go:117] "RemoveContainer" containerID="4427a1db0d8bc328cc8ce5e801b5da395783cf9fe7637b09d71c5d4865aa6b5f" Jan 21 07:19:03 crc kubenswrapper[4893]: I0121 07:19:03.061833 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6975c6c74b-n77t4"] Jan 21 07:19:03 crc kubenswrapper[4893]: I0121 07:19:03.069612 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6975c6c74b-n77t4"] Jan 21 07:19:03 crc kubenswrapper[4893]: I0121 07:19:03.374782 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 21 07:19:03 crc kubenswrapper[4893]: I0121 07:19:03.591704 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="096c770d-b6c0-4e2d-85fd-a06335c5778d" path="/var/lib/kubelet/pods/096c770d-b6c0-4e2d-85fd-a06335c5778d/volumes" Jan 21 07:19:03 crc kubenswrapper[4893]: I0121 07:19:03.629758 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 07:19:03 crc kubenswrapper[4893]: I0121 07:19:03.684060 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3-config-data\") pod \"7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3\" (UID: \"7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3\") " Jan 21 07:19:03 crc kubenswrapper[4893]: I0121 07:19:03.684146 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3-scripts\") pod \"7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3\" (UID: \"7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3\") " Jan 21 07:19:03 crc kubenswrapper[4893]: I0121 07:19:03.684226 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3-config-data-custom\") pod \"7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3\" (UID: \"7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3\") " Jan 21 07:19:03 crc kubenswrapper[4893]: I0121 07:19:03.684258 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ct6l\" (UniqueName: \"kubernetes.io/projected/7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3-kube-api-access-4ct6l\") pod \"7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3\" (UID: \"7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3\") " Jan 21 07:19:03 crc kubenswrapper[4893]: I0121 07:19:03.684287 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3-etc-machine-id\") pod \"7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3\" (UID: \"7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3\") " Jan 21 07:19:03 crc kubenswrapper[4893]: I0121 07:19:03.684447 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3-logs\") pod \"7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3\" (UID: \"7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3\") " Jan 21 07:19:03 crc kubenswrapper[4893]: I0121 07:19:03.684526 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3-combined-ca-bundle\") pod \"7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3\" (UID: \"7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3\") " Jan 21 07:19:03 crc kubenswrapper[4893]: I0121 07:19:03.687124 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3-logs" (OuterVolumeSpecName: "logs") pod "7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3" (UID: "7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:19:03 crc kubenswrapper[4893]: I0121 07:19:03.688107 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3" (UID: "7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 07:19:03 crc kubenswrapper[4893]: I0121 07:19:03.693193 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3-scripts" (OuterVolumeSpecName: "scripts") pod "7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3" (UID: "7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:19:03 crc kubenswrapper[4893]: I0121 07:19:03.693637 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3-kube-api-access-4ct6l" (OuterVolumeSpecName: "kube-api-access-4ct6l") pod "7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3" (UID: "7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3"). InnerVolumeSpecName "kube-api-access-4ct6l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:19:03 crc kubenswrapper[4893]: I0121 07:19:03.696947 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3" (UID: "7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:19:03 crc kubenswrapper[4893]: I0121 07:19:03.720429 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3" (UID: "7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:19:03 crc kubenswrapper[4893]: I0121 07:19:03.749032 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3-config-data" (OuterVolumeSpecName: "config-data") pod "7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3" (UID: "7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:19:03 crc kubenswrapper[4893]: I0121 07:19:03.787806 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ct6l\" (UniqueName: \"kubernetes.io/projected/7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3-kube-api-access-4ct6l\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:03 crc kubenswrapper[4893]: I0121 07:19:03.787889 4893 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:03 crc kubenswrapper[4893]: I0121 07:19:03.787905 4893 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3-logs\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:03 crc kubenswrapper[4893]: I0121 07:19:03.787914 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:03 crc kubenswrapper[4893]: I0121 07:19:03.787926 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:03 crc kubenswrapper[4893]: I0121 07:19:03.787934 4893 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:03 crc kubenswrapper[4893]: I0121 07:19:03.787942 4893 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.015619 4893 generic.go:334] "Generic (PLEG): container finished" podID="7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3" containerID="7c470b57abbf57cce04f2040e4a5bd292fe1b94c7101a6131f7ae3727a8c7986" exitCode=0 Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.015659 4893 generic.go:334] "Generic (PLEG): container finished" podID="7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3" containerID="2daad0eba03eaddbe35ee5c5e835e28e9df0af4d33d51ec50197aac18ec2855d" exitCode=143 Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.015857 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3","Type":"ContainerDied","Data":"7c470b57abbf57cce04f2040e4a5bd292fe1b94c7101a6131f7ae3727a8c7986"} Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.015832 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.015936 4893 scope.go:117] "RemoveContainer" containerID="7c470b57abbf57cce04f2040e4a5bd292fe1b94c7101a6131f7ae3727a8c7986" Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.015922 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3","Type":"ContainerDied","Data":"2daad0eba03eaddbe35ee5c5e835e28e9df0af4d33d51ec50197aac18ec2855d"} Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.016244 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3","Type":"ContainerDied","Data":"0d31d39963ef6b1e376cb536297ce22ed8b29df071bc5ae345a677a5aaf86871"} Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.061982 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.077170 4893 scope.go:117] "RemoveContainer" containerID="2daad0eba03eaddbe35ee5c5e835e28e9df0af4d33d51ec50197aac18ec2855d" Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.081837 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.100792 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 21 07:19:04 crc kubenswrapper[4893]: E0121 07:19:04.101287 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="096c770d-b6c0-4e2d-85fd-a06335c5778d" containerName="barbican-api" Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.101299 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="096c770d-b6c0-4e2d-85fd-a06335c5778d" containerName="barbican-api" Jan 21 07:19:04 crc kubenswrapper[4893]: E0121 07:19:04.101314 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3" containerName="cinder-api-log" Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.101320 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3" containerName="cinder-api-log" Jan 21 07:19:04 crc kubenswrapper[4893]: E0121 07:19:04.101333 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="096c770d-b6c0-4e2d-85fd-a06335c5778d" containerName="barbican-api-log" Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.101338 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="096c770d-b6c0-4e2d-85fd-a06335c5778d" containerName="barbican-api-log" Jan 21 07:19:04 crc kubenswrapper[4893]: E0121 07:19:04.101350 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3" containerName="cinder-api" Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.101355 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3" containerName="cinder-api" Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.101571 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="096c770d-b6c0-4e2d-85fd-a06335c5778d" containerName="barbican-api" Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.101586 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3" containerName="cinder-api-log" Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.101594 4893 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3" containerName="cinder-api" Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.101601 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="096c770d-b6c0-4e2d-85fd-a06335c5778d" containerName="barbican-api-log" Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.102781 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.106580 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.106757 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.107383 4893 scope.go:117] "RemoveContainer" containerID="7c470b57abbf57cce04f2040e4a5bd292fe1b94c7101a6131f7ae3727a8c7986" Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.107517 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 21 07:19:04 crc kubenswrapper[4893]: E0121 07:19:04.113173 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c470b57abbf57cce04f2040e4a5bd292fe1b94c7101a6131f7ae3727a8c7986\": container with ID starting with 7c470b57abbf57cce04f2040e4a5bd292fe1b94c7101a6131f7ae3727a8c7986 not found: ID does not exist" containerID="7c470b57abbf57cce04f2040e4a5bd292fe1b94c7101a6131f7ae3727a8c7986" Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.113223 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c470b57abbf57cce04f2040e4a5bd292fe1b94c7101a6131f7ae3727a8c7986"} err="failed to get container status \"7c470b57abbf57cce04f2040e4a5bd292fe1b94c7101a6131f7ae3727a8c7986\": rpc error: code = NotFound desc = could not find container \"7c470b57abbf57cce04f2040e4a5bd292fe1b94c7101a6131f7ae3727a8c7986\": container with ID starting with 7c470b57abbf57cce04f2040e4a5bd292fe1b94c7101a6131f7ae3727a8c7986 not found: ID does not exist" Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.113269 4893 scope.go:117] "RemoveContainer" containerID="2daad0eba03eaddbe35ee5c5e835e28e9df0af4d33d51ec50197aac18ec2855d" Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.114372 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 07:19:04 crc kubenswrapper[4893]: E0121 07:19:04.115116 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2daad0eba03eaddbe35ee5c5e835e28e9df0af4d33d51ec50197aac18ec2855d\": container with ID starting with 2daad0eba03eaddbe35ee5c5e835e28e9df0af4d33d51ec50197aac18ec2855d not found: ID does not exist" containerID="2daad0eba03eaddbe35ee5c5e835e28e9df0af4d33d51ec50197aac18ec2855d" Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.115179 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2daad0eba03eaddbe35ee5c5e835e28e9df0af4d33d51ec50197aac18ec2855d"} err="failed to get container status \"2daad0eba03eaddbe35ee5c5e835e28e9df0af4d33d51ec50197aac18ec2855d\": rpc error: code = NotFound desc = could not find container \"2daad0eba03eaddbe35ee5c5e835e28e9df0af4d33d51ec50197aac18ec2855d\": container with ID starting with 2daad0eba03eaddbe35ee5c5e835e28e9df0af4d33d51ec50197aac18ec2855d not found: ID 
does not exist" Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.115208 4893 scope.go:117] "RemoveContainer" containerID="7c470b57abbf57cce04f2040e4a5bd292fe1b94c7101a6131f7ae3727a8c7986" Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.117469 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c470b57abbf57cce04f2040e4a5bd292fe1b94c7101a6131f7ae3727a8c7986"} err="failed to get container status \"7c470b57abbf57cce04f2040e4a5bd292fe1b94c7101a6131f7ae3727a8c7986\": rpc error: code = NotFound desc = could not find container \"7c470b57abbf57cce04f2040e4a5bd292fe1b94c7101a6131f7ae3727a8c7986\": container with ID starting with 7c470b57abbf57cce04f2040e4a5bd292fe1b94c7101a6131f7ae3727a8c7986 not found: ID does not exist" Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.117502 4893 scope.go:117] "RemoveContainer" containerID="2daad0eba03eaddbe35ee5c5e835e28e9df0af4d33d51ec50197aac18ec2855d" Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.117757 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2daad0eba03eaddbe35ee5c5e835e28e9df0af4d33d51ec50197aac18ec2855d"} err="failed to get container status \"2daad0eba03eaddbe35ee5c5e835e28e9df0af4d33d51ec50197aac18ec2855d\": rpc error: code = NotFound desc = could not find container \"2daad0eba03eaddbe35ee5c5e835e28e9df0af4d33d51ec50197aac18ec2855d\": container with ID starting with 2daad0eba03eaddbe35ee5c5e835e28e9df0af4d33d51ec50197aac18ec2855d not found: ID does not exist" Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.196434 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41fc2d9b-17e4-42b0-bcee-065a237b513c-config-data\") pod \"cinder-api-0\" (UID: \"41fc2d9b-17e4-42b0-bcee-065a237b513c\") " pod="openstack/cinder-api-0" Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.196502 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41fc2d9b-17e4-42b0-bcee-065a237b513c-scripts\") pod \"cinder-api-0\" (UID: \"41fc2d9b-17e4-42b0-bcee-065a237b513c\") " pod="openstack/cinder-api-0" Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.196563 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/41fc2d9b-17e4-42b0-bcee-065a237b513c-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"41fc2d9b-17e4-42b0-bcee-065a237b513c\") " pod="openstack/cinder-api-0" Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.196870 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41fc2d9b-17e4-42b0-bcee-065a237b513c-logs\") pod \"cinder-api-0\" (UID: \"41fc2d9b-17e4-42b0-bcee-065a237b513c\") " pod="openstack/cinder-api-0" Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.197070 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/41fc2d9b-17e4-42b0-bcee-065a237b513c-public-tls-certs\") pod \"cinder-api-0\" (UID: \"41fc2d9b-17e4-42b0-bcee-065a237b513c\") " pod="openstack/cinder-api-0" Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.197195 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/41fc2d9b-17e4-42b0-bcee-065a237b513c-config-data-custom\") pod \"cinder-api-0\" (UID: \"41fc2d9b-17e4-42b0-bcee-065a237b513c\") " pod="openstack/cinder-api-0" Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.197236 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/41fc2d9b-17e4-42b0-bcee-065a237b513c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"41fc2d9b-17e4-42b0-bcee-065a237b513c\") " pod="openstack/cinder-api-0" Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.197299 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41fc2d9b-17e4-42b0-bcee-065a237b513c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"41fc2d9b-17e4-42b0-bcee-065a237b513c\") " pod="openstack/cinder-api-0" Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.197442 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg4qx\" (UniqueName: \"kubernetes.io/projected/41fc2d9b-17e4-42b0-bcee-065a237b513c-kube-api-access-gg4qx\") pod \"cinder-api-0\" (UID: \"41fc2d9b-17e4-42b0-bcee-065a237b513c\") " pod="openstack/cinder-api-0" Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.221561 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-c984d74d4-p75q9" Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.395324 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/41fc2d9b-17e4-42b0-bcee-065a237b513c-config-data-custom\") pod \"cinder-api-0\" (UID: \"41fc2d9b-17e4-42b0-bcee-065a237b513c\") " pod="openstack/cinder-api-0" Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.395379 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/41fc2d9b-17e4-42b0-bcee-065a237b513c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"41fc2d9b-17e4-42b0-bcee-065a237b513c\") " pod="openstack/cinder-api-0" Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.395420 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41fc2d9b-17e4-42b0-bcee-065a237b513c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"41fc2d9b-17e4-42b0-bcee-065a237b513c\") " pod="openstack/cinder-api-0" Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.395479 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg4qx\" (UniqueName: \"kubernetes.io/projected/41fc2d9b-17e4-42b0-bcee-065a237b513c-kube-api-access-gg4qx\") pod \"cinder-api-0\" (UID: \"41fc2d9b-17e4-42b0-bcee-065a237b513c\") " pod="openstack/cinder-api-0" Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.395510 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41fc2d9b-17e4-42b0-bcee-065a237b513c-config-data\") pod \"cinder-api-0\" (UID: \"41fc2d9b-17e4-42b0-bcee-065a237b513c\") " pod="openstack/cinder-api-0" Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.395532 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/41fc2d9b-17e4-42b0-bcee-065a237b513c-scripts\") pod \"cinder-api-0\" (UID: \"41fc2d9b-17e4-42b0-bcee-065a237b513c\") " pod="openstack/cinder-api-0" Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.395560 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/41fc2d9b-17e4-42b0-bcee-065a237b513c-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"41fc2d9b-17e4-42b0-bcee-065a237b513c\") " pod="openstack/cinder-api-0" Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.395580 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41fc2d9b-17e4-42b0-bcee-065a237b513c-logs\") pod \"cinder-api-0\" (UID: \"41fc2d9b-17e4-42b0-bcee-065a237b513c\") " pod="openstack/cinder-api-0" Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.395631 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/41fc2d9b-17e4-42b0-bcee-065a237b513c-public-tls-certs\") pod \"cinder-api-0\" (UID: \"41fc2d9b-17e4-42b0-bcee-065a237b513c\") " pod="openstack/cinder-api-0" Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.397269 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/41fc2d9b-17e4-42b0-bcee-065a237b513c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"41fc2d9b-17e4-42b0-bcee-065a237b513c\") " pod="openstack/cinder-api-0" Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.398872 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41fc2d9b-17e4-42b0-bcee-065a237b513c-logs\") pod \"cinder-api-0\" (UID: \"41fc2d9b-17e4-42b0-bcee-065a237b513c\") " pod="openstack/cinder-api-0" Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.400393 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41fc2d9b-17e4-42b0-bcee-065a237b513c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"41fc2d9b-17e4-42b0-bcee-065a237b513c\") " pod="openstack/cinder-api-0" Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.401704 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41fc2d9b-17e4-42b0-bcee-065a237b513c-scripts\") pod \"cinder-api-0\" (UID: \"41fc2d9b-17e4-42b0-bcee-065a237b513c\") " pod="openstack/cinder-api-0" Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.408549 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41fc2d9b-17e4-42b0-bcee-065a237b513c-config-data\") pod \"cinder-api-0\" (UID: \"41fc2d9b-17e4-42b0-bcee-065a237b513c\") " pod="openstack/cinder-api-0" Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.413990 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/41fc2d9b-17e4-42b0-bcee-065a237b513c-public-tls-certs\") pod \"cinder-api-0\" (UID: \"41fc2d9b-17e4-42b0-bcee-065a237b513c\") " pod="openstack/cinder-api-0" Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.417098 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/41fc2d9b-17e4-42b0-bcee-065a237b513c-config-data-custom\") pod 
\"cinder-api-0\" (UID: \"41fc2d9b-17e4-42b0-bcee-065a237b513c\") " pod="openstack/cinder-api-0" Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.418711 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/41fc2d9b-17e4-42b0-bcee-065a237b513c-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"41fc2d9b-17e4-42b0-bcee-065a237b513c\") " pod="openstack/cinder-api-0" Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.422286 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg4qx\" (UniqueName: \"kubernetes.io/projected/41fc2d9b-17e4-42b0-bcee-065a237b513c-kube-api-access-gg4qx\") pod \"cinder-api-0\" (UID: \"41fc2d9b-17e4-42b0-bcee-065a237b513c\") " pod="openstack/cinder-api-0" Jan 21 07:19:04 crc kubenswrapper[4893]: I0121 07:19:04.437211 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 21 07:19:05 crc kubenswrapper[4893]: I0121 07:19:05.024992 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 07:19:05 crc kubenswrapper[4893]: I0121 07:19:05.594471 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3" path="/var/lib/kubelet/pods/7cae190f-d62c-44e4-9f18-b1ff4b9ecfd3/volumes" Jan 21 07:19:06 crc kubenswrapper[4893]: I0121 07:19:06.042856 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"41fc2d9b-17e4-42b0-bcee-065a237b513c","Type":"ContainerStarted","Data":"b233f3f10881d6ab9bfb3f123d866143df46653ea77405b1477d41577b5b9d37"} Jan 21 07:19:06 crc kubenswrapper[4893]: I0121 07:19:06.043276 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"41fc2d9b-17e4-42b0-bcee-065a237b513c","Type":"ContainerStarted","Data":"09d9fe65a3c699386efeaee2ffed31652230db3bd9302407ca1b37af6576a719"} Jan 21 07:19:07 crc kubenswrapper[4893]: I0121 07:19:07.059494 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"41fc2d9b-17e4-42b0-bcee-065a237b513c","Type":"ContainerStarted","Data":"fb8af694018c30b6b38db1c567cc9a482101811cee291371c4cbd5248400b963"} Jan 21 07:19:07 crc kubenswrapper[4893]: I0121 07:19:07.060259 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 21 07:19:07 crc kubenswrapper[4893]: I0121 07:19:07.094655 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.094626343 podStartE2EDuration="3.094626343s" podCreationTimestamp="2026-01-21 07:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:19:07.080155772 +0000 UTC m=+1488.310501674" watchObservedRunningTime="2026-01-21 07:19:07.094626343 +0000 UTC m=+1488.324972245" Jan 21 07:19:07 crc kubenswrapper[4893]: I0121 07:19:07.262038 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="12e11571-a021-4df2-a0da-69f56335a8c8" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 21 07:19:07 crc kubenswrapper[4893]: I0121 07:19:07.600709 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-577cb64ffc-m6fkr" Jan 21 07:19:07 crc kubenswrapper[4893]: I0121 07:19:07.677794 4893 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/neutron-c984d74d4-p75q9"] Jan 21 07:19:07 crc kubenswrapper[4893]: I0121 07:19:07.678057 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-c984d74d4-p75q9" podUID="836f7dac-b3a4-4a00-bc98-868b0bbe1ebb" containerName="neutron-api" containerID="cri-o://cf4638a0c792bb76c761ab810cdacf069ec223c31a8f47729343cde7cc604c47" gracePeriod=30 Jan 21 07:19:07 crc kubenswrapper[4893]: I0121 07:19:07.678130 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-c984d74d4-p75q9" podUID="836f7dac-b3a4-4a00-bc98-868b0bbe1ebb" containerName="neutron-httpd" containerID="cri-o://de3e8b69f6661e1df6488573ac4edf5bc69674e8e6836442a99e608fbacbdd18" gracePeriod=30 Jan 21 07:19:07 crc kubenswrapper[4893]: I0121 07:19:07.934899 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75dbb546bf-6d8wf" Jan 21 07:19:08 crc kubenswrapper[4893]: I0121 07:19:08.003400 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66cdd4b5b5-ts5nd"] Jan 21 07:19:08 crc kubenswrapper[4893]: I0121 07:19:08.003731 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-66cdd4b5b5-ts5nd" podUID="cf869ba7-c70c-4a29-aab0-800fe73624c9" containerName="dnsmasq-dns" containerID="cri-o://6f20152ffc8fe500e29f904ba1db15e1e416bc362ac3ded96c47ec9ea8d1ded3" gracePeriod=10 Jan 21 07:19:08 crc kubenswrapper[4893]: I0121 07:19:08.073531 4893 generic.go:334] "Generic (PLEG): container finished" podID="836f7dac-b3a4-4a00-bc98-868b0bbe1ebb" containerID="de3e8b69f6661e1df6488573ac4edf5bc69674e8e6836442a99e608fbacbdd18" exitCode=0 Jan 21 07:19:08 crc kubenswrapper[4893]: I0121 07:19:08.073625 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c984d74d4-p75q9" event={"ID":"836f7dac-b3a4-4a00-bc98-868b0bbe1ebb","Type":"ContainerDied","Data":"de3e8b69f6661e1df6488573ac4edf5bc69674e8e6836442a99e608fbacbdd18"} Jan 21 07:19:08 crc kubenswrapper[4893]: I0121 07:19:08.615645 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66cdd4b5b5-ts5nd" Jan 21 07:19:08 crc kubenswrapper[4893]: I0121 07:19:08.705022 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 21 07:19:08 crc kubenswrapper[4893]: I0121 07:19:08.734624 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf869ba7-c70c-4a29-aab0-800fe73624c9-dns-svc\") pod \"cf869ba7-c70c-4a29-aab0-800fe73624c9\" (UID: \"cf869ba7-c70c-4a29-aab0-800fe73624c9\") " Jan 21 07:19:08 crc kubenswrapper[4893]: I0121 07:19:08.734827 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzs8b\" (UniqueName: \"kubernetes.io/projected/cf869ba7-c70c-4a29-aab0-800fe73624c9-kube-api-access-zzs8b\") pod \"cf869ba7-c70c-4a29-aab0-800fe73624c9\" (UID: \"cf869ba7-c70c-4a29-aab0-800fe73624c9\") " Jan 21 07:19:08 crc kubenswrapper[4893]: I0121 07:19:08.734894 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf869ba7-c70c-4a29-aab0-800fe73624c9-ovsdbserver-sb\") pod \"cf869ba7-c70c-4a29-aab0-800fe73624c9\" (UID: \"cf869ba7-c70c-4a29-aab0-800fe73624c9\") " Jan 21 07:19:08 crc kubenswrapper[4893]: I0121 07:19:08.735022 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf869ba7-c70c-4a29-aab0-800fe73624c9-ovsdbserver-nb\") pod \"cf869ba7-c70c-4a29-aab0-800fe73624c9\" (UID: \"cf869ba7-c70c-4a29-aab0-800fe73624c9\") " Jan 21 07:19:08 crc kubenswrapper[4893]: I0121 07:19:08.735080 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf869ba7-c70c-4a29-aab0-800fe73624c9-config\") pod \"cf869ba7-c70c-4a29-aab0-800fe73624c9\" (UID: \"cf869ba7-c70c-4a29-aab0-800fe73624c9\") " Jan 21 07:19:08 crc kubenswrapper[4893]: I0121 07:19:08.735130 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cf869ba7-c70c-4a29-aab0-800fe73624c9-dns-swift-storage-0\") pod \"cf869ba7-c70c-4a29-aab0-800fe73624c9\" (UID: \"cf869ba7-c70c-4a29-aab0-800fe73624c9\") " Jan 21 07:19:08 crc kubenswrapper[4893]: I0121 07:19:08.741031 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 07:19:08 crc kubenswrapper[4893]: I0121 07:19:08.747561 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf869ba7-c70c-4a29-aab0-800fe73624c9-kube-api-access-zzs8b" (OuterVolumeSpecName: "kube-api-access-zzs8b") pod "cf869ba7-c70c-4a29-aab0-800fe73624c9" (UID: "cf869ba7-c70c-4a29-aab0-800fe73624c9"). InnerVolumeSpecName "kube-api-access-zzs8b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:19:08 crc kubenswrapper[4893]: I0121 07:19:08.847878 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzs8b\" (UniqueName: \"kubernetes.io/projected/cf869ba7-c70c-4a29-aab0-800fe73624c9-kube-api-access-zzs8b\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:08 crc kubenswrapper[4893]: I0121 07:19:08.883938 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf869ba7-c70c-4a29-aab0-800fe73624c9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cf869ba7-c70c-4a29-aab0-800fe73624c9" (UID: "cf869ba7-c70c-4a29-aab0-800fe73624c9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:19:08 crc kubenswrapper[4893]: I0121 07:19:08.890074 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf869ba7-c70c-4a29-aab0-800fe73624c9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cf869ba7-c70c-4a29-aab0-800fe73624c9" (UID: "cf869ba7-c70c-4a29-aab0-800fe73624c9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:19:08 crc kubenswrapper[4893]: I0121 07:19:08.890344 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf869ba7-c70c-4a29-aab0-800fe73624c9-config" (OuterVolumeSpecName: "config") pod "cf869ba7-c70c-4a29-aab0-800fe73624c9" (UID: "cf869ba7-c70c-4a29-aab0-800fe73624c9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:19:08 crc kubenswrapper[4893]: I0121 07:19:08.895422 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf869ba7-c70c-4a29-aab0-800fe73624c9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cf869ba7-c70c-4a29-aab0-800fe73624c9" (UID: "cf869ba7-c70c-4a29-aab0-800fe73624c9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:19:08 crc kubenswrapper[4893]: I0121 07:19:08.918736 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf869ba7-c70c-4a29-aab0-800fe73624c9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cf869ba7-c70c-4a29-aab0-800fe73624c9" (UID: "cf869ba7-c70c-4a29-aab0-800fe73624c9"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:19:08 crc kubenswrapper[4893]: I0121 07:19:08.950223 4893 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf869ba7-c70c-4a29-aab0-800fe73624c9-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:08 crc kubenswrapper[4893]: I0121 07:19:08.950269 4893 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf869ba7-c70c-4a29-aab0-800fe73624c9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:08 crc kubenswrapper[4893]: I0121 07:19:08.950285 4893 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf869ba7-c70c-4a29-aab0-800fe73624c9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:08 crc kubenswrapper[4893]: I0121 07:19:08.950303 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf869ba7-c70c-4a29-aab0-800fe73624c9-config\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:08 crc kubenswrapper[4893]: I0121 07:19:08.950316 4893 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cf869ba7-c70c-4a29-aab0-800fe73624c9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:09 crc kubenswrapper[4893]: I0121 07:19:09.095074 4893 generic.go:334] "Generic (PLEG): container finished" podID="cf869ba7-c70c-4a29-aab0-800fe73624c9" containerID="6f20152ffc8fe500e29f904ba1db15e1e416bc362ac3ded96c47ec9ea8d1ded3" exitCode=0 Jan 21 07:19:09 crc kubenswrapper[4893]: I0121 07:19:09.095142 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66cdd4b5b5-ts5nd" Jan 21 07:19:09 crc kubenswrapper[4893]: I0121 07:19:09.095146 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66cdd4b5b5-ts5nd" event={"ID":"cf869ba7-c70c-4a29-aab0-800fe73624c9","Type":"ContainerDied","Data":"6f20152ffc8fe500e29f904ba1db15e1e416bc362ac3ded96c47ec9ea8d1ded3"} Jan 21 07:19:09 crc kubenswrapper[4893]: I0121 07:19:09.095229 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66cdd4b5b5-ts5nd" event={"ID":"cf869ba7-c70c-4a29-aab0-800fe73624c9","Type":"ContainerDied","Data":"4b01a595b9381f13603cffcb6a96238d47c0f3022b4588b1b4e01d37c31e5d5d"} Jan 21 07:19:09 crc kubenswrapper[4893]: I0121 07:19:09.095255 4893 scope.go:117] "RemoveContainer" containerID="6f20152ffc8fe500e29f904ba1db15e1e416bc362ac3ded96c47ec9ea8d1ded3" Jan 21 07:19:09 crc kubenswrapper[4893]: I0121 07:19:09.095292 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="2c02b475-9db8-48a0-926c-d3e1e31be7e6" containerName="cinder-scheduler" containerID="cri-o://f5f72a2bafc280250bb6deccd9b4f8012801b21bd285c7c7da837477941c93b8" gracePeriod=30 Jan 21 07:19:09 crc kubenswrapper[4893]: I0121 07:19:09.095387 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="2c02b475-9db8-48a0-926c-d3e1e31be7e6" containerName="probe" containerID="cri-o://1e024db17fe6ceb0fa55e561a7be66afecf28b3c89555643bcedb6b74569112f" gracePeriod=30 Jan 21 07:19:09 crc kubenswrapper[4893]: I0121 07:19:09.127645 4893 scope.go:117] "RemoveContainer" containerID="2805931982a7f0a77e8a5e341c34d717781eb2644188defe39e4549832429f4f" Jan 21 07:19:09 crc kubenswrapper[4893]: I0121 
07:19:09.140722 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66cdd4b5b5-ts5nd"] Jan 21 07:19:09 crc kubenswrapper[4893]: I0121 07:19:09.151257 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-66cdd4b5b5-ts5nd"] Jan 21 07:19:09 crc kubenswrapper[4893]: I0121 07:19:09.175603 4893 scope.go:117] "RemoveContainer" containerID="6f20152ffc8fe500e29f904ba1db15e1e416bc362ac3ded96c47ec9ea8d1ded3" Jan 21 07:19:09 crc kubenswrapper[4893]: E0121 07:19:09.176047 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f20152ffc8fe500e29f904ba1db15e1e416bc362ac3ded96c47ec9ea8d1ded3\": container with ID starting with 6f20152ffc8fe500e29f904ba1db15e1e416bc362ac3ded96c47ec9ea8d1ded3 not found: ID does not exist" containerID="6f20152ffc8fe500e29f904ba1db15e1e416bc362ac3ded96c47ec9ea8d1ded3" Jan 21 07:19:09 crc kubenswrapper[4893]: I0121 07:19:09.176080 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f20152ffc8fe500e29f904ba1db15e1e416bc362ac3ded96c47ec9ea8d1ded3"} err="failed to get container status \"6f20152ffc8fe500e29f904ba1db15e1e416bc362ac3ded96c47ec9ea8d1ded3\": rpc error: code = NotFound desc = could not find container \"6f20152ffc8fe500e29f904ba1db15e1e416bc362ac3ded96c47ec9ea8d1ded3\": container with ID starting with 6f20152ffc8fe500e29f904ba1db15e1e416bc362ac3ded96c47ec9ea8d1ded3 not found: ID does not exist" Jan 21 07:19:09 crc kubenswrapper[4893]: I0121 07:19:09.176101 4893 scope.go:117] "RemoveContainer" containerID="2805931982a7f0a77e8a5e341c34d717781eb2644188defe39e4549832429f4f" Jan 21 07:19:09 crc kubenswrapper[4893]: E0121 07:19:09.176362 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2805931982a7f0a77e8a5e341c34d717781eb2644188defe39e4549832429f4f\": container with ID starting with 2805931982a7f0a77e8a5e341c34d717781eb2644188defe39e4549832429f4f not found: ID does not exist" containerID="2805931982a7f0a77e8a5e341c34d717781eb2644188defe39e4549832429f4f" Jan 21 07:19:09 crc kubenswrapper[4893]: I0121 07:19:09.176383 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2805931982a7f0a77e8a5e341c34d717781eb2644188defe39e4549832429f4f"} err="failed to get container status \"2805931982a7f0a77e8a5e341c34d717781eb2644188defe39e4549832429f4f\": rpc error: code = NotFound desc = could not find container \"2805931982a7f0a77e8a5e341c34d717781eb2644188defe39e4549832429f4f\": container with ID starting with 2805931982a7f0a77e8a5e341c34d717781eb2644188defe39e4549832429f4f not found: ID does not exist" Jan 21 07:19:09 crc kubenswrapper[4893]: I0121 07:19:09.596524 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf869ba7-c70c-4a29-aab0-800fe73624c9" path="/var/lib/kubelet/pods/cf869ba7-c70c-4a29-aab0-800fe73624c9/volumes" Jan 21 07:19:10 crc kubenswrapper[4893]: I0121 07:19:10.106517 4893 generic.go:334] "Generic (PLEG): container finished" podID="2c02b475-9db8-48a0-926c-d3e1e31be7e6" containerID="1e024db17fe6ceb0fa55e561a7be66afecf28b3c89555643bcedb6b74569112f" exitCode=0 Jan 21 07:19:10 crc kubenswrapper[4893]: I0121 07:19:10.106699 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"2c02b475-9db8-48a0-926c-d3e1e31be7e6","Type":"ContainerDied","Data":"1e024db17fe6ceb0fa55e561a7be66afecf28b3c89555643bcedb6b74569112f"} Jan 21 07:19:10 crc kubenswrapper[4893]: I0121 07:19:10.705489 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-9fd9c4957-2lblr" Jan 21 07:19:10 crc kubenswrapper[4893]: I0121 07:19:10.716269 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-54745b6874-xnbrr" Jan 21 07:19:10 crc kubenswrapper[4893]: I0121 07:19:10.717226 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-54745b6874-xnbrr" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.051332 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.080472 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c984d74d4-p75q9" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.161608 4893 generic.go:334] "Generic (PLEG): container finished" podID="836f7dac-b3a4-4a00-bc98-868b0bbe1ebb" containerID="cf4638a0c792bb76c761ab810cdacf069ec223c31a8f47729343cde7cc604c47" exitCode=0 Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.161720 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c984d74d4-p75q9" event={"ID":"836f7dac-b3a4-4a00-bc98-868b0bbe1ebb","Type":"ContainerDied","Data":"cf4638a0c792bb76c761ab810cdacf069ec223c31a8f47729343cde7cc604c47"} Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.161799 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c984d74d4-p75q9" event={"ID":"836f7dac-b3a4-4a00-bc98-868b0bbe1ebb","Type":"ContainerDied","Data":"3fcf822f8f294d637824d560799bdf30463e637e4ca6b7124d9907ee98f58e48"} Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.161825 4893 scope.go:117] "RemoveContainer" containerID="de3e8b69f6661e1df6488573ac4edf5bc69674e8e6836442a99e608fbacbdd18" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.161997 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-c984d74d4-p75q9" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.163641 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c02b475-9db8-48a0-926c-d3e1e31be7e6-config-data-custom\") pod \"2c02b475-9db8-48a0-926c-d3e1e31be7e6\" (UID: \"2c02b475-9db8-48a0-926c-d3e1e31be7e6\") " Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.163731 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c02b475-9db8-48a0-926c-d3e1e31be7e6-combined-ca-bundle\") pod \"2c02b475-9db8-48a0-926c-d3e1e31be7e6\" (UID: \"2c02b475-9db8-48a0-926c-d3e1e31be7e6\") " Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.163795 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c02b475-9db8-48a0-926c-d3e1e31be7e6-scripts\") pod \"2c02b475-9db8-48a0-926c-d3e1e31be7e6\" (UID: \"2c02b475-9db8-48a0-926c-d3e1e31be7e6\") " Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.163896 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2c02b475-9db8-48a0-926c-d3e1e31be7e6-etc-machine-id\") pod \"2c02b475-9db8-48a0-926c-d3e1e31be7e6\" (UID: \"2c02b475-9db8-48a0-926c-d3e1e31be7e6\") " Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.163984 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qs4p\" (UniqueName: \"kubernetes.io/projected/2c02b475-9db8-48a0-926c-d3e1e31be7e6-kube-api-access-9qs4p\") pod \"2c02b475-9db8-48a0-926c-d3e1e31be7e6\" (UID: \"2c02b475-9db8-48a0-926c-d3e1e31be7e6\") " Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.164110 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c02b475-9db8-48a0-926c-d3e1e31be7e6-config-data\") pod \"2c02b475-9db8-48a0-926c-d3e1e31be7e6\" (UID: \"2c02b475-9db8-48a0-926c-d3e1e31be7e6\") " Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.164436 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c02b475-9db8-48a0-926c-d3e1e31be7e6-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2c02b475-9db8-48a0-926c-d3e1e31be7e6" (UID: "2c02b475-9db8-48a0-926c-d3e1e31be7e6"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.164582 4893 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2c02b475-9db8-48a0-926c-d3e1e31be7e6-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.168467 4893 generic.go:334] "Generic (PLEG): container finished" podID="2c02b475-9db8-48a0-926c-d3e1e31be7e6" containerID="f5f72a2bafc280250bb6deccd9b4f8012801b21bd285c7c7da837477941c93b8" exitCode=0 Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.168505 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2c02b475-9db8-48a0-926c-d3e1e31be7e6","Type":"ContainerDied","Data":"f5f72a2bafc280250bb6deccd9b4f8012801b21bd285c7c7da837477941c93b8"} Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.168530 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2c02b475-9db8-48a0-926c-d3e1e31be7e6","Type":"ContainerDied","Data":"1c51f3ff0ec725adba1b9d8a2ea8740e4fb58a7231453e1a830ddacef966ec3b"} Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.168592 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.170476 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c02b475-9db8-48a0-926c-d3e1e31be7e6-kube-api-access-9qs4p" (OuterVolumeSpecName: "kube-api-access-9qs4p") pod "2c02b475-9db8-48a0-926c-d3e1e31be7e6" (UID: "2c02b475-9db8-48a0-926c-d3e1e31be7e6"). InnerVolumeSpecName "kube-api-access-9qs4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.172856 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c02b475-9db8-48a0-926c-d3e1e31be7e6-scripts" (OuterVolumeSpecName: "scripts") pod "2c02b475-9db8-48a0-926c-d3e1e31be7e6" (UID: "2c02b475-9db8-48a0-926c-d3e1e31be7e6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.172880 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c02b475-9db8-48a0-926c-d3e1e31be7e6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2c02b475-9db8-48a0-926c-d3e1e31be7e6" (UID: "2c02b475-9db8-48a0-926c-d3e1e31be7e6"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.181383 4893 scope.go:117] "RemoveContainer" containerID="cf4638a0c792bb76c761ab810cdacf069ec223c31a8f47729343cde7cc604c47" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.221494 4893 scope.go:117] "RemoveContainer" containerID="de3e8b69f6661e1df6488573ac4edf5bc69674e8e6836442a99e608fbacbdd18" Jan 21 07:19:14 crc kubenswrapper[4893]: E0121 07:19:14.221996 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de3e8b69f6661e1df6488573ac4edf5bc69674e8e6836442a99e608fbacbdd18\": container with ID starting with de3e8b69f6661e1df6488573ac4edf5bc69674e8e6836442a99e608fbacbdd18 not found: ID does not exist" containerID="de3e8b69f6661e1df6488573ac4edf5bc69674e8e6836442a99e608fbacbdd18" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.222024 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de3e8b69f6661e1df6488573ac4edf5bc69674e8e6836442a99e608fbacbdd18"} err="failed to get container status \"de3e8b69f6661e1df6488573ac4edf5bc69674e8e6836442a99e608fbacbdd18\": rpc error: code = NotFound desc = could not find container \"de3e8b69f6661e1df6488573ac4edf5bc69674e8e6836442a99e608fbacbdd18\": container with ID starting with de3e8b69f6661e1df6488573ac4edf5bc69674e8e6836442a99e608fbacbdd18 not found: ID does not exist" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.222048 4893 scope.go:117] "RemoveContainer" containerID="cf4638a0c792bb76c761ab810cdacf069ec223c31a8f47729343cde7cc604c47" Jan 21 07:19:14 crc kubenswrapper[4893]: E0121 07:19:14.222614 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf4638a0c792bb76c761ab810cdacf069ec223c31a8f47729343cde7cc604c47\": container with ID starting with cf4638a0c792bb76c761ab810cdacf069ec223c31a8f47729343cde7cc604c47 not found: ID does not exist" containerID="cf4638a0c792bb76c761ab810cdacf069ec223c31a8f47729343cde7cc604c47" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.222631 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf4638a0c792bb76c761ab810cdacf069ec223c31a8f47729343cde7cc604c47"} err="failed to get container status \"cf4638a0c792bb76c761ab810cdacf069ec223c31a8f47729343cde7cc604c47\": rpc error: code = NotFound desc = could not find container \"cf4638a0c792bb76c761ab810cdacf069ec223c31a8f47729343cde7cc604c47\": container with ID starting with cf4638a0c792bb76c761ab810cdacf069ec223c31a8f47729343cde7cc604c47 not found: ID does not exist" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.222644 4893 scope.go:117] "RemoveContainer" containerID="1e024db17fe6ceb0fa55e561a7be66afecf28b3c89555643bcedb6b74569112f" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.226624 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c02b475-9db8-48a0-926c-d3e1e31be7e6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c02b475-9db8-48a0-926c-d3e1e31be7e6" (UID: "2c02b475-9db8-48a0-926c-d3e1e31be7e6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.247621 4893 scope.go:117] "RemoveContainer" containerID="f5f72a2bafc280250bb6deccd9b4f8012801b21bd285c7c7da837477941c93b8" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.265653 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/836f7dac-b3a4-4a00-bc98-868b0bbe1ebb-config\") pod \"836f7dac-b3a4-4a00-bc98-868b0bbe1ebb\" (UID: \"836f7dac-b3a4-4a00-bc98-868b0bbe1ebb\") " Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.265755 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/836f7dac-b3a4-4a00-bc98-868b0bbe1ebb-httpd-config\") pod \"836f7dac-b3a4-4a00-bc98-868b0bbe1ebb\" (UID: \"836f7dac-b3a4-4a00-bc98-868b0bbe1ebb\") " Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.265810 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836f7dac-b3a4-4a00-bc98-868b0bbe1ebb-combined-ca-bundle\") pod \"836f7dac-b3a4-4a00-bc98-868b0bbe1ebb\" (UID: \"836f7dac-b3a4-4a00-bc98-868b0bbe1ebb\") " Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.265840 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qr8j6\" (UniqueName: \"kubernetes.io/projected/836f7dac-b3a4-4a00-bc98-868b0bbe1ebb-kube-api-access-qr8j6\") pod \"836f7dac-b3a4-4a00-bc98-868b0bbe1ebb\" (UID: \"836f7dac-b3a4-4a00-bc98-868b0bbe1ebb\") " Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.265929 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/836f7dac-b3a4-4a00-bc98-868b0bbe1ebb-ovndb-tls-certs\") pod \"836f7dac-b3a4-4a00-bc98-868b0bbe1ebb\" (UID: \"836f7dac-b3a4-4a00-bc98-868b0bbe1ebb\") " Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.266782 4893 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c02b475-9db8-48a0-926c-d3e1e31be7e6-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.266809 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c02b475-9db8-48a0-926c-d3e1e31be7e6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.266823 4893 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c02b475-9db8-48a0-926c-d3e1e31be7e6-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.266836 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qs4p\" (UniqueName: \"kubernetes.io/projected/2c02b475-9db8-48a0-926c-d3e1e31be7e6-kube-api-access-9qs4p\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.270153 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/836f7dac-b3a4-4a00-bc98-868b0bbe1ebb-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "836f7dac-b3a4-4a00-bc98-868b0bbe1ebb" (UID: "836f7dac-b3a4-4a00-bc98-868b0bbe1ebb"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.271161 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/836f7dac-b3a4-4a00-bc98-868b0bbe1ebb-kube-api-access-qr8j6" (OuterVolumeSpecName: "kube-api-access-qr8j6") pod "836f7dac-b3a4-4a00-bc98-868b0bbe1ebb" (UID: "836f7dac-b3a4-4a00-bc98-868b0bbe1ebb"). InnerVolumeSpecName "kube-api-access-qr8j6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.272885 4893 scope.go:117] "RemoveContainer" containerID="1e024db17fe6ceb0fa55e561a7be66afecf28b3c89555643bcedb6b74569112f" Jan 21 07:19:14 crc kubenswrapper[4893]: E0121 07:19:14.273478 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e024db17fe6ceb0fa55e561a7be66afecf28b3c89555643bcedb6b74569112f\": container with ID starting with 1e024db17fe6ceb0fa55e561a7be66afecf28b3c89555643bcedb6b74569112f not found: ID does not exist" containerID="1e024db17fe6ceb0fa55e561a7be66afecf28b3c89555643bcedb6b74569112f" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.273521 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e024db17fe6ceb0fa55e561a7be66afecf28b3c89555643bcedb6b74569112f"} err="failed to get container status \"1e024db17fe6ceb0fa55e561a7be66afecf28b3c89555643bcedb6b74569112f\": rpc error: code = NotFound desc = could not find container \"1e024db17fe6ceb0fa55e561a7be66afecf28b3c89555643bcedb6b74569112f\": container with ID starting with 1e024db17fe6ceb0fa55e561a7be66afecf28b3c89555643bcedb6b74569112f not found: ID does not exist" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.273551 4893 scope.go:117] "RemoveContainer" containerID="f5f72a2bafc280250bb6deccd9b4f8012801b21bd285c7c7da837477941c93b8" Jan 21 07:19:14 crc kubenswrapper[4893]: E0121 07:19:14.274038 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5f72a2bafc280250bb6deccd9b4f8012801b21bd285c7c7da837477941c93b8\": container with ID starting with f5f72a2bafc280250bb6deccd9b4f8012801b21bd285c7c7da837477941c93b8 not found: ID does not exist" containerID="f5f72a2bafc280250bb6deccd9b4f8012801b21bd285c7c7da837477941c93b8" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.274111 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5f72a2bafc280250bb6deccd9b4f8012801b21bd285c7c7da837477941c93b8"} err="failed to get container status \"f5f72a2bafc280250bb6deccd9b4f8012801b21bd285c7c7da837477941c93b8\": rpc error: code = NotFound desc = could not find container \"f5f72a2bafc280250bb6deccd9b4f8012801b21bd285c7c7da837477941c93b8\": container with ID starting with f5f72a2bafc280250bb6deccd9b4f8012801b21bd285c7c7da837477941c93b8 not found: ID does not exist" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.279060 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c02b475-9db8-48a0-926c-d3e1e31be7e6-config-data" (OuterVolumeSpecName: "config-data") pod "2c02b475-9db8-48a0-926c-d3e1e31be7e6" (UID: "2c02b475-9db8-48a0-926c-d3e1e31be7e6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.317909 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/836f7dac-b3a4-4a00-bc98-868b0bbe1ebb-config" (OuterVolumeSpecName: "config") pod "836f7dac-b3a4-4a00-bc98-868b0bbe1ebb" (UID: "836f7dac-b3a4-4a00-bc98-868b0bbe1ebb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.324611 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/836f7dac-b3a4-4a00-bc98-868b0bbe1ebb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "836f7dac-b3a4-4a00-bc98-868b0bbe1ebb" (UID: "836f7dac-b3a4-4a00-bc98-868b0bbe1ebb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.336750 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/836f7dac-b3a4-4a00-bc98-868b0bbe1ebb-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "836f7dac-b3a4-4a00-bc98-868b0bbe1ebb" (UID: "836f7dac-b3a4-4a00-bc98-868b0bbe1ebb"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.369167 4893 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/836f7dac-b3a4-4a00-bc98-868b0bbe1ebb-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.369216 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836f7dac-b3a4-4a00-bc98-868b0bbe1ebb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.369236 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qr8j6\" (UniqueName: \"kubernetes.io/projected/836f7dac-b3a4-4a00-bc98-868b0bbe1ebb-kube-api-access-qr8j6\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.369250 4893 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/836f7dac-b3a4-4a00-bc98-868b0bbe1ebb-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.369263 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c02b475-9db8-48a0-926c-d3e1e31be7e6-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.369275 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/836f7dac-b3a4-4a00-bc98-868b0bbe1ebb-config\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.512801 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-c984d74d4-p75q9"] Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.525990 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-c984d74d4-p75q9"] Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.534586 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.553960 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/cinder-scheduler-0"] Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.566763 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 07:19:14 crc kubenswrapper[4893]: E0121 07:19:14.567314 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c02b475-9db8-48a0-926c-d3e1e31be7e6" containerName="probe" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.567334 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c02b475-9db8-48a0-926c-d3e1e31be7e6" containerName="probe" Jan 21 07:19:14 crc kubenswrapper[4893]: E0121 07:19:14.567348 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="836f7dac-b3a4-4a00-bc98-868b0bbe1ebb" containerName="neutron-api" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.567355 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="836f7dac-b3a4-4a00-bc98-868b0bbe1ebb" containerName="neutron-api" Jan 21 07:19:14 crc kubenswrapper[4893]: E0121 07:19:14.567368 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf869ba7-c70c-4a29-aab0-800fe73624c9" containerName="dnsmasq-dns" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.567374 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf869ba7-c70c-4a29-aab0-800fe73624c9" containerName="dnsmasq-dns" Jan 21 07:19:14 crc kubenswrapper[4893]: E0121 07:19:14.567402 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf869ba7-c70c-4a29-aab0-800fe73624c9" containerName="init" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.567408 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf869ba7-c70c-4a29-aab0-800fe73624c9" containerName="init" Jan 21 07:19:14 crc kubenswrapper[4893]: E0121 07:19:14.567418 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="836f7dac-b3a4-4a00-bc98-868b0bbe1ebb" containerName="neutron-httpd" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.567426 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="836f7dac-b3a4-4a00-bc98-868b0bbe1ebb" containerName="neutron-httpd" Jan 21 07:19:14 crc kubenswrapper[4893]: E0121 07:19:14.567440 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c02b475-9db8-48a0-926c-d3e1e31be7e6" containerName="cinder-scheduler" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.567447 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c02b475-9db8-48a0-926c-d3e1e31be7e6" containerName="cinder-scheduler" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.567645 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf869ba7-c70c-4a29-aab0-800fe73624c9" containerName="dnsmasq-dns" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.567658 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c02b475-9db8-48a0-926c-d3e1e31be7e6" containerName="probe" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.567687 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="836f7dac-b3a4-4a00-bc98-868b0bbe1ebb" containerName="neutron-api" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.567700 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="836f7dac-b3a4-4a00-bc98-868b0bbe1ebb" containerName="neutron-httpd" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.567709 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c02b475-9db8-48a0-926c-d3e1e31be7e6" containerName="cinder-scheduler" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 
07:19:14.568795 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.575002 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.601191 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.674731 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4ecaeda-4211-4680-b408-cf7e4717d723-config-data\") pod \"cinder-scheduler-0\" (UID: \"f4ecaeda-4211-4680-b408-cf7e4717d723\") " pod="openstack/cinder-scheduler-0" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.675086 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27mtg\" (UniqueName: \"kubernetes.io/projected/f4ecaeda-4211-4680-b408-cf7e4717d723-kube-api-access-27mtg\") pod \"cinder-scheduler-0\" (UID: \"f4ecaeda-4211-4680-b408-cf7e4717d723\") " pod="openstack/cinder-scheduler-0" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.675126 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4ecaeda-4211-4680-b408-cf7e4717d723-scripts\") pod \"cinder-scheduler-0\" (UID: \"f4ecaeda-4211-4680-b408-cf7e4717d723\") " pod="openstack/cinder-scheduler-0" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.675211 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f4ecaeda-4211-4680-b408-cf7e4717d723-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f4ecaeda-4211-4680-b408-cf7e4717d723\") " pod="openstack/cinder-scheduler-0" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.675241 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f4ecaeda-4211-4680-b408-cf7e4717d723-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f4ecaeda-4211-4680-b408-cf7e4717d723\") " pod="openstack/cinder-scheduler-0" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.675286 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4ecaeda-4211-4680-b408-cf7e4717d723-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f4ecaeda-4211-4680-b408-cf7e4717d723\") " pod="openstack/cinder-scheduler-0" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.776811 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4ecaeda-4211-4680-b408-cf7e4717d723-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f4ecaeda-4211-4680-b408-cf7e4717d723\") " pod="openstack/cinder-scheduler-0" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.776932 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4ecaeda-4211-4680-b408-cf7e4717d723-config-data\") pod \"cinder-scheduler-0\" (UID: \"f4ecaeda-4211-4680-b408-cf7e4717d723\") " pod="openstack/cinder-scheduler-0" Jan 21 07:19:14 crc kubenswrapper[4893]: 
I0121 07:19:14.776974 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27mtg\" (UniqueName: \"kubernetes.io/projected/f4ecaeda-4211-4680-b408-cf7e4717d723-kube-api-access-27mtg\") pod \"cinder-scheduler-0\" (UID: \"f4ecaeda-4211-4680-b408-cf7e4717d723\") " pod="openstack/cinder-scheduler-0" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.776999 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4ecaeda-4211-4680-b408-cf7e4717d723-scripts\") pod \"cinder-scheduler-0\" (UID: \"f4ecaeda-4211-4680-b408-cf7e4717d723\") " pod="openstack/cinder-scheduler-0" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.777048 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f4ecaeda-4211-4680-b408-cf7e4717d723-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f4ecaeda-4211-4680-b408-cf7e4717d723\") " pod="openstack/cinder-scheduler-0" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.777063 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f4ecaeda-4211-4680-b408-cf7e4717d723-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f4ecaeda-4211-4680-b408-cf7e4717d723\") " pod="openstack/cinder-scheduler-0" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.778036 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f4ecaeda-4211-4680-b408-cf7e4717d723-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f4ecaeda-4211-4680-b408-cf7e4717d723\") " pod="openstack/cinder-scheduler-0" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.781525 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4ecaeda-4211-4680-b408-cf7e4717d723-scripts\") pod \"cinder-scheduler-0\" (UID: \"f4ecaeda-4211-4680-b408-cf7e4717d723\") " pod="openstack/cinder-scheduler-0" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.781619 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4ecaeda-4211-4680-b408-cf7e4717d723-config-data\") pod \"cinder-scheduler-0\" (UID: \"f4ecaeda-4211-4680-b408-cf7e4717d723\") " pod="openstack/cinder-scheduler-0" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.781751 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4ecaeda-4211-4680-b408-cf7e4717d723-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f4ecaeda-4211-4680-b408-cf7e4717d723\") " pod="openstack/cinder-scheduler-0" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.793695 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f4ecaeda-4211-4680-b408-cf7e4717d723-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f4ecaeda-4211-4680-b408-cf7e4717d723\") " pod="openstack/cinder-scheduler-0" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.794411 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27mtg\" (UniqueName: \"kubernetes.io/projected/f4ecaeda-4211-4680-b408-cf7e4717d723-kube-api-access-27mtg\") pod \"cinder-scheduler-0\" (UID: 
\"f4ecaeda-4211-4680-b408-cf7e4717d723\") " pod="openstack/cinder-scheduler-0" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.926379 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.928306 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.930550 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.931174 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-xf2cj" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.935133 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.936106 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 07:19:14 crc kubenswrapper[4893]: I0121 07:19:14.956388 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 21 07:19:15 crc kubenswrapper[4893]: I0121 07:19:15.088301 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7e33ed3d-58dd-452a-9aa1-a71636bec840-openstack-config-secret\") pod \"openstackclient\" (UID: \"7e33ed3d-58dd-452a-9aa1-a71636bec840\") " pod="openstack/openstackclient" Jan 21 07:19:15 crc kubenswrapper[4893]: I0121 07:19:15.088432 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kll8t\" (UniqueName: \"kubernetes.io/projected/7e33ed3d-58dd-452a-9aa1-a71636bec840-kube-api-access-kll8t\") pod \"openstackclient\" (UID: \"7e33ed3d-58dd-452a-9aa1-a71636bec840\") " pod="openstack/openstackclient" Jan 21 07:19:15 crc kubenswrapper[4893]: I0121 07:19:15.088547 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7e33ed3d-58dd-452a-9aa1-a71636bec840-openstack-config\") pod \"openstackclient\" (UID: \"7e33ed3d-58dd-452a-9aa1-a71636bec840\") " pod="openstack/openstackclient" Jan 21 07:19:15 crc kubenswrapper[4893]: I0121 07:19:15.088625 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e33ed3d-58dd-452a-9aa1-a71636bec840-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7e33ed3d-58dd-452a-9aa1-a71636bec840\") " pod="openstack/openstackclient" Jan 21 07:19:15 crc kubenswrapper[4893]: I0121 07:19:15.190241 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e33ed3d-58dd-452a-9aa1-a71636bec840-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7e33ed3d-58dd-452a-9aa1-a71636bec840\") " pod="openstack/openstackclient" Jan 21 07:19:15 crc kubenswrapper[4893]: I0121 07:19:15.190623 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7e33ed3d-58dd-452a-9aa1-a71636bec840-openstack-config-secret\") pod \"openstackclient\" (UID: \"7e33ed3d-58dd-452a-9aa1-a71636bec840\") " 
pod="openstack/openstackclient" Jan 21 07:19:15 crc kubenswrapper[4893]: I0121 07:19:15.190706 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kll8t\" (UniqueName: \"kubernetes.io/projected/7e33ed3d-58dd-452a-9aa1-a71636bec840-kube-api-access-kll8t\") pod \"openstackclient\" (UID: \"7e33ed3d-58dd-452a-9aa1-a71636bec840\") " pod="openstack/openstackclient" Jan 21 07:19:15 crc kubenswrapper[4893]: I0121 07:19:15.190812 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7e33ed3d-58dd-452a-9aa1-a71636bec840-openstack-config\") pod \"openstackclient\" (UID: \"7e33ed3d-58dd-452a-9aa1-a71636bec840\") " pod="openstack/openstackclient" Jan 21 07:19:15 crc kubenswrapper[4893]: I0121 07:19:15.192300 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7e33ed3d-58dd-452a-9aa1-a71636bec840-openstack-config\") pod \"openstackclient\" (UID: \"7e33ed3d-58dd-452a-9aa1-a71636bec840\") " pod="openstack/openstackclient" Jan 21 07:19:15 crc kubenswrapper[4893]: I0121 07:19:15.207113 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7e33ed3d-58dd-452a-9aa1-a71636bec840-openstack-config-secret\") pod \"openstackclient\" (UID: \"7e33ed3d-58dd-452a-9aa1-a71636bec840\") " pod="openstack/openstackclient" Jan 21 07:19:15 crc kubenswrapper[4893]: I0121 07:19:15.210128 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e33ed3d-58dd-452a-9aa1-a71636bec840-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7e33ed3d-58dd-452a-9aa1-a71636bec840\") " pod="openstack/openstackclient" Jan 21 07:19:15 crc kubenswrapper[4893]: I0121 07:19:15.217513 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kll8t\" (UniqueName: \"kubernetes.io/projected/7e33ed3d-58dd-452a-9aa1-a71636bec840-kube-api-access-kll8t\") pod \"openstackclient\" (UID: \"7e33ed3d-58dd-452a-9aa1-a71636bec840\") " pod="openstack/openstackclient" Jan 21 07:19:15 crc kubenswrapper[4893]: I0121 07:19:15.340209 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 21 07:19:15 crc kubenswrapper[4893]: I0121 07:19:15.341539 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 21 07:19:15 crc kubenswrapper[4893]: I0121 07:19:15.356905 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 21 07:19:15 crc kubenswrapper[4893]: I0121 07:19:15.365197 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 21 07:19:15 crc kubenswrapper[4893]: I0121 07:19:15.366916 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 21 07:19:15 crc kubenswrapper[4893]: I0121 07:19:15.376053 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 21 07:19:15 crc kubenswrapper[4893]: E0121 07:19:15.479063 4893 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 21 07:19:15 crc kubenswrapper[4893]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_7e33ed3d-58dd-452a-9aa1-a71636bec840_0(d2c86c07406e60a6951021df018525002b4be0ffde270b2ffbe707de4ead1f1d): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"d2c86c07406e60a6951021df018525002b4be0ffde270b2ffbe707de4ead1f1d" Netns:"/var/run/netns/dca299d0-c95f-4441-b7b2-767d80379fea" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=d2c86c07406e60a6951021df018525002b4be0ffde270b2ffbe707de4ead1f1d;K8S_POD_UID=7e33ed3d-58dd-452a-9aa1-a71636bec840" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/7e33ed3d-58dd-452a-9aa1-a71636bec840]: expected pod UID "7e33ed3d-58dd-452a-9aa1-a71636bec840" but got "93e402d6-b354-4755-83c3-68e43e53c19b" from Kube API Jan 21 07:19:15 crc kubenswrapper[4893]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 21 07:19:15 crc kubenswrapper[4893]: > Jan 21 07:19:15 crc kubenswrapper[4893]: E0121 07:19:15.479145 4893 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 21 07:19:15 crc kubenswrapper[4893]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_7e33ed3d-58dd-452a-9aa1-a71636bec840_0(d2c86c07406e60a6951021df018525002b4be0ffde270b2ffbe707de4ead1f1d): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"d2c86c07406e60a6951021df018525002b4be0ffde270b2ffbe707de4ead1f1d" Netns:"/var/run/netns/dca299d0-c95f-4441-b7b2-767d80379fea" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=d2c86c07406e60a6951021df018525002b4be0ffde270b2ffbe707de4ead1f1d;K8S_POD_UID=7e33ed3d-58dd-452a-9aa1-a71636bec840" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/7e33ed3d-58dd-452a-9aa1-a71636bec840]: expected pod UID "7e33ed3d-58dd-452a-9aa1-a71636bec840" but got "93e402d6-b354-4755-83c3-68e43e53c19b" from Kube API Jan 21 07:19:15 crc kubenswrapper[4893]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 21 07:19:15 
crc kubenswrapper[4893]: > pod="openstack/openstackclient" Jan 21 07:19:15 crc kubenswrapper[4893]: I0121 07:19:15.497575 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/93e402d6-b354-4755-83c3-68e43e53c19b-openstack-config-secret\") pod \"openstackclient\" (UID: \"93e402d6-b354-4755-83c3-68e43e53c19b\") " pod="openstack/openstackclient" Jan 21 07:19:15 crc kubenswrapper[4893]: I0121 07:19:15.497630 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsl9c\" (UniqueName: \"kubernetes.io/projected/93e402d6-b354-4755-83c3-68e43e53c19b-kube-api-access-tsl9c\") pod \"openstackclient\" (UID: \"93e402d6-b354-4755-83c3-68e43e53c19b\") " pod="openstack/openstackclient" Jan 21 07:19:15 crc kubenswrapper[4893]: I0121 07:19:15.497722 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/93e402d6-b354-4755-83c3-68e43e53c19b-openstack-config\") pod \"openstackclient\" (UID: \"93e402d6-b354-4755-83c3-68e43e53c19b\") " pod="openstack/openstackclient" Jan 21 07:19:15 crc kubenswrapper[4893]: I0121 07:19:15.497762 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93e402d6-b354-4755-83c3-68e43e53c19b-combined-ca-bundle\") pod \"openstackclient\" (UID: \"93e402d6-b354-4755-83c3-68e43e53c19b\") " pod="openstack/openstackclient" Jan 21 07:19:15 crc kubenswrapper[4893]: W0121 07:19:15.542049 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4ecaeda_4211_4680_b408_cf7e4717d723.slice/crio-e04733afa92f81e4514ffbdbce2aed0c69c0a5b7788c9fc19a56b94c455304e3 WatchSource:0}: Error finding container e04733afa92f81e4514ffbdbce2aed0c69c0a5b7788c9fc19a56b94c455304e3: Status 404 returned error can't find the container with id e04733afa92f81e4514ffbdbce2aed0c69c0a5b7788c9fc19a56b94c455304e3 Jan 21 07:19:15 crc kubenswrapper[4893]: I0121 07:19:15.542858 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 07:19:15 crc kubenswrapper[4893]: I0121 07:19:15.591450 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c02b475-9db8-48a0-926c-d3e1e31be7e6" path="/var/lib/kubelet/pods/2c02b475-9db8-48a0-926c-d3e1e31be7e6/volumes" Jan 21 07:19:15 crc kubenswrapper[4893]: I0121 07:19:15.592631 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="836f7dac-b3a4-4a00-bc98-868b0bbe1ebb" path="/var/lib/kubelet/pods/836f7dac-b3a4-4a00-bc98-868b0bbe1ebb/volumes" Jan 21 07:19:15 crc kubenswrapper[4893]: I0121 07:19:15.605835 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/93e402d6-b354-4755-83c3-68e43e53c19b-openstack-config-secret\") pod \"openstackclient\" (UID: \"93e402d6-b354-4755-83c3-68e43e53c19b\") " pod="openstack/openstackclient" Jan 21 07:19:15 crc kubenswrapper[4893]: I0121 07:19:15.606189 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsl9c\" (UniqueName: \"kubernetes.io/projected/93e402d6-b354-4755-83c3-68e43e53c19b-kube-api-access-tsl9c\") pod \"openstackclient\" (UID: \"93e402d6-b354-4755-83c3-68e43e53c19b\") " 
pod="openstack/openstackclient" Jan 21 07:19:15 crc kubenswrapper[4893]: I0121 07:19:15.606345 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/93e402d6-b354-4755-83c3-68e43e53c19b-openstack-config\") pod \"openstackclient\" (UID: \"93e402d6-b354-4755-83c3-68e43e53c19b\") " pod="openstack/openstackclient" Jan 21 07:19:15 crc kubenswrapper[4893]: I0121 07:19:15.606528 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93e402d6-b354-4755-83c3-68e43e53c19b-combined-ca-bundle\") pod \"openstackclient\" (UID: \"93e402d6-b354-4755-83c3-68e43e53c19b\") " pod="openstack/openstackclient" Jan 21 07:19:15 crc kubenswrapper[4893]: I0121 07:19:15.607079 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/93e402d6-b354-4755-83c3-68e43e53c19b-openstack-config\") pod \"openstackclient\" (UID: \"93e402d6-b354-4755-83c3-68e43e53c19b\") " pod="openstack/openstackclient" Jan 21 07:19:15 crc kubenswrapper[4893]: I0121 07:19:15.614373 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93e402d6-b354-4755-83c3-68e43e53c19b-combined-ca-bundle\") pod \"openstackclient\" (UID: \"93e402d6-b354-4755-83c3-68e43e53c19b\") " pod="openstack/openstackclient" Jan 21 07:19:15 crc kubenswrapper[4893]: I0121 07:19:15.618053 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/93e402d6-b354-4755-83c3-68e43e53c19b-openstack-config-secret\") pod \"openstackclient\" (UID: \"93e402d6-b354-4755-83c3-68e43e53c19b\") " pod="openstack/openstackclient" Jan 21 07:19:15 crc kubenswrapper[4893]: I0121 07:19:15.634511 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsl9c\" (UniqueName: \"kubernetes.io/projected/93e402d6-b354-4755-83c3-68e43e53c19b-kube-api-access-tsl9c\") pod \"openstackclient\" (UID: \"93e402d6-b354-4755-83c3-68e43e53c19b\") " pod="openstack/openstackclient" Jan 21 07:19:15 crc kubenswrapper[4893]: I0121 07:19:15.701388 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 21 07:19:16 crc kubenswrapper[4893]: I0121 07:19:16.194842 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 21 07:19:16 crc kubenswrapper[4893]: I0121 07:19:16.195087 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f4ecaeda-4211-4680-b408-cf7e4717d723","Type":"ContainerStarted","Data":"e04733afa92f81e4514ffbdbce2aed0c69c0a5b7788c9fc19a56b94c455304e3"} Jan 21 07:19:16 crc kubenswrapper[4893]: I0121 07:19:16.204370 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 21 07:19:16 crc kubenswrapper[4893]: I0121 07:19:16.209401 4893 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="7e33ed3d-58dd-452a-9aa1-a71636bec840" podUID="93e402d6-b354-4755-83c3-68e43e53c19b" Jan 21 07:19:16 crc kubenswrapper[4893]: I0121 07:19:16.225136 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 21 07:19:16 crc kubenswrapper[4893]: W0121 07:19:16.225424 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93e402d6_b354_4755_83c3_68e43e53c19b.slice/crio-c59b89b88484d4154638feb373806189e34c182d630c74cea3287f12a80483f3 WatchSource:0}: Error finding container c59b89b88484d4154638feb373806189e34c182d630c74cea3287f12a80483f3: Status 404 returned error can't find the container with id c59b89b88484d4154638feb373806189e34c182d630c74cea3287f12a80483f3 Jan 21 07:19:16 crc kubenswrapper[4893]: I0121 07:19:16.321104 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e33ed3d-58dd-452a-9aa1-a71636bec840-combined-ca-bundle\") pod \"7e33ed3d-58dd-452a-9aa1-a71636bec840\" (UID: \"7e33ed3d-58dd-452a-9aa1-a71636bec840\") " Jan 21 07:19:16 crc kubenswrapper[4893]: I0121 07:19:16.321498 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7e33ed3d-58dd-452a-9aa1-a71636bec840-openstack-config\") pod \"7e33ed3d-58dd-452a-9aa1-a71636bec840\" (UID: \"7e33ed3d-58dd-452a-9aa1-a71636bec840\") " Jan 21 07:19:16 crc kubenswrapper[4893]: I0121 07:19:16.321594 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kll8t\" (UniqueName: \"kubernetes.io/projected/7e33ed3d-58dd-452a-9aa1-a71636bec840-kube-api-access-kll8t\") pod \"7e33ed3d-58dd-452a-9aa1-a71636bec840\" (UID: \"7e33ed3d-58dd-452a-9aa1-a71636bec840\") " Jan 21 07:19:16 crc kubenswrapper[4893]: I0121 07:19:16.321698 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7e33ed3d-58dd-452a-9aa1-a71636bec840-openstack-config-secret\") pod \"7e33ed3d-58dd-452a-9aa1-a71636bec840\" (UID: \"7e33ed3d-58dd-452a-9aa1-a71636bec840\") " Jan 21 07:19:16 crc kubenswrapper[4893]: I0121 07:19:16.322123 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e33ed3d-58dd-452a-9aa1-a71636bec840-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "7e33ed3d-58dd-452a-9aa1-a71636bec840" (UID: "7e33ed3d-58dd-452a-9aa1-a71636bec840"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:19:16 crc kubenswrapper[4893]: I0121 07:19:16.322298 4893 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7e33ed3d-58dd-452a-9aa1-a71636bec840-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:16 crc kubenswrapper[4893]: I0121 07:19:16.327845 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e33ed3d-58dd-452a-9aa1-a71636bec840-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "7e33ed3d-58dd-452a-9aa1-a71636bec840" (UID: "7e33ed3d-58dd-452a-9aa1-a71636bec840"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:19:16 crc kubenswrapper[4893]: I0121 07:19:16.327917 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e33ed3d-58dd-452a-9aa1-a71636bec840-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e33ed3d-58dd-452a-9aa1-a71636bec840" (UID: "7e33ed3d-58dd-452a-9aa1-a71636bec840"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:19:16 crc kubenswrapper[4893]: I0121 07:19:16.327927 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e33ed3d-58dd-452a-9aa1-a71636bec840-kube-api-access-kll8t" (OuterVolumeSpecName: "kube-api-access-kll8t") pod "7e33ed3d-58dd-452a-9aa1-a71636bec840" (UID: "7e33ed3d-58dd-452a-9aa1-a71636bec840"). InnerVolumeSpecName "kube-api-access-kll8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:19:16 crc kubenswrapper[4893]: I0121 07:19:16.424540 4893 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7e33ed3d-58dd-452a-9aa1-a71636bec840-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:16 crc kubenswrapper[4893]: I0121 07:19:16.424586 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e33ed3d-58dd-452a-9aa1-a71636bec840-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:16 crc kubenswrapper[4893]: I0121 07:19:16.424601 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kll8t\" (UniqueName: \"kubernetes.io/projected/7e33ed3d-58dd-452a-9aa1-a71636bec840-kube-api-access-kll8t\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:16 crc kubenswrapper[4893]: I0121 07:19:16.709900 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 21 07:19:17 crc kubenswrapper[4893]: I0121 07:19:17.205509 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"93e402d6-b354-4755-83c3-68e43e53c19b","Type":"ContainerStarted","Data":"c59b89b88484d4154638feb373806189e34c182d630c74cea3287f12a80483f3"} Jan 21 07:19:17 crc kubenswrapper[4893]: I0121 07:19:17.209115 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 21 07:19:17 crc kubenswrapper[4893]: I0121 07:19:17.210120 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f4ecaeda-4211-4680-b408-cf7e4717d723","Type":"ContainerStarted","Data":"f8a33c05f22bbd2bc37308e46ea47eb6b47322784149129d4e7b15436d0fd3cc"} Jan 21 07:19:17 crc kubenswrapper[4893]: I0121 07:19:17.210182 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f4ecaeda-4211-4680-b408-cf7e4717d723","Type":"ContainerStarted","Data":"79abdf873765c2bac4b6be09b3097a159efa4814c1dd0b60e7c529c776c0bbbe"} Jan 21 07:19:17 crc kubenswrapper[4893]: I0121 07:19:17.238653 4893 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="7e33ed3d-58dd-452a-9aa1-a71636bec840" podUID="93e402d6-b354-4755-83c3-68e43e53c19b" Jan 21 07:19:17 crc kubenswrapper[4893]: I0121 07:19:17.241727 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.241699616 podStartE2EDuration="3.241699616s" podCreationTimestamp="2026-01-21 07:19:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:19:17.233783036 +0000 UTC m=+1498.464128938" watchObservedRunningTime="2026-01-21 07:19:17.241699616 +0000 UTC m=+1498.472045518" Jan 21 07:19:17 crc kubenswrapper[4893]: I0121 07:19:17.600034 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e33ed3d-58dd-452a-9aa1-a71636bec840" path="/var/lib/kubelet/pods/7e33ed3d-58dd-452a-9aa1-a71636bec840/volumes" Jan 21 07:19:19 crc kubenswrapper[4893]: I0121 07:19:19.169631 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-5795cc4cb5-6bsp7"] Jan 21 07:19:19 crc kubenswrapper[4893]: I0121 07:19:19.172835 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-5795cc4cb5-6bsp7" Jan 21 07:19:19 crc kubenswrapper[4893]: I0121 07:19:19.174815 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 21 07:19:19 crc kubenswrapper[4893]: I0121 07:19:19.175781 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 21 07:19:19 crc kubenswrapper[4893]: I0121 07:19:19.176990 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 21 07:19:19 crc kubenswrapper[4893]: I0121 07:19:19.186306 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5795cc4cb5-6bsp7"] Jan 21 07:19:19 crc kubenswrapper[4893]: I0121 07:19:19.302860 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2caca0fd-0f3f-4725-a196-04463abed671-internal-tls-certs\") pod \"swift-proxy-5795cc4cb5-6bsp7\" (UID: \"2caca0fd-0f3f-4725-a196-04463abed671\") " pod="openstack/swift-proxy-5795cc4cb5-6bsp7" Jan 21 07:19:19 crc kubenswrapper[4893]: I0121 07:19:19.303844 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2caca0fd-0f3f-4725-a196-04463abed671-config-data\") pod \"swift-proxy-5795cc4cb5-6bsp7\" (UID: \"2caca0fd-0f3f-4725-a196-04463abed671\") " pod="openstack/swift-proxy-5795cc4cb5-6bsp7" Jan 21 07:19:19 crc kubenswrapper[4893]: I0121 07:19:19.303947 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2caca0fd-0f3f-4725-a196-04463abed671-etc-swift\") pod \"swift-proxy-5795cc4cb5-6bsp7\" (UID: \"2caca0fd-0f3f-4725-a196-04463abed671\") " pod="openstack/swift-proxy-5795cc4cb5-6bsp7" Jan 21 07:19:19 crc kubenswrapper[4893]: I0121 07:19:19.303982 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2caca0fd-0f3f-4725-a196-04463abed671-log-httpd\") pod \"swift-proxy-5795cc4cb5-6bsp7\" (UID: \"2caca0fd-0f3f-4725-a196-04463abed671\") " pod="openstack/swift-proxy-5795cc4cb5-6bsp7" Jan 21 07:19:19 crc kubenswrapper[4893]: I0121 07:19:19.304130 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4bcx\" (UniqueName: \"kubernetes.io/projected/2caca0fd-0f3f-4725-a196-04463abed671-kube-api-access-r4bcx\") pod \"swift-proxy-5795cc4cb5-6bsp7\" (UID: \"2caca0fd-0f3f-4725-a196-04463abed671\") " pod="openstack/swift-proxy-5795cc4cb5-6bsp7" Jan 21 07:19:19 crc kubenswrapper[4893]: I0121 07:19:19.304172 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2caca0fd-0f3f-4725-a196-04463abed671-combined-ca-bundle\") pod \"swift-proxy-5795cc4cb5-6bsp7\" (UID: \"2caca0fd-0f3f-4725-a196-04463abed671\") " pod="openstack/swift-proxy-5795cc4cb5-6bsp7" Jan 21 07:19:19 crc kubenswrapper[4893]: I0121 07:19:19.304302 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2caca0fd-0f3f-4725-a196-04463abed671-run-httpd\") pod \"swift-proxy-5795cc4cb5-6bsp7\" (UID: \"2caca0fd-0f3f-4725-a196-04463abed671\") " 
pod="openstack/swift-proxy-5795cc4cb5-6bsp7" Jan 21 07:19:19 crc kubenswrapper[4893]: I0121 07:19:19.304337 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2caca0fd-0f3f-4725-a196-04463abed671-public-tls-certs\") pod \"swift-proxy-5795cc4cb5-6bsp7\" (UID: \"2caca0fd-0f3f-4725-a196-04463abed671\") " pod="openstack/swift-proxy-5795cc4cb5-6bsp7" Jan 21 07:19:19 crc kubenswrapper[4893]: I0121 07:19:19.406497 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2caca0fd-0f3f-4725-a196-04463abed671-etc-swift\") pod \"swift-proxy-5795cc4cb5-6bsp7\" (UID: \"2caca0fd-0f3f-4725-a196-04463abed671\") " pod="openstack/swift-proxy-5795cc4cb5-6bsp7" Jan 21 07:19:19 crc kubenswrapper[4893]: I0121 07:19:19.406890 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2caca0fd-0f3f-4725-a196-04463abed671-log-httpd\") pod \"swift-proxy-5795cc4cb5-6bsp7\" (UID: \"2caca0fd-0f3f-4725-a196-04463abed671\") " pod="openstack/swift-proxy-5795cc4cb5-6bsp7" Jan 21 07:19:19 crc kubenswrapper[4893]: I0121 07:19:19.407045 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4bcx\" (UniqueName: \"kubernetes.io/projected/2caca0fd-0f3f-4725-a196-04463abed671-kube-api-access-r4bcx\") pod \"swift-proxy-5795cc4cb5-6bsp7\" (UID: \"2caca0fd-0f3f-4725-a196-04463abed671\") " pod="openstack/swift-proxy-5795cc4cb5-6bsp7" Jan 21 07:19:19 crc kubenswrapper[4893]: I0121 07:19:19.407583 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2caca0fd-0f3f-4725-a196-04463abed671-log-httpd\") pod \"swift-proxy-5795cc4cb5-6bsp7\" (UID: \"2caca0fd-0f3f-4725-a196-04463abed671\") " pod="openstack/swift-proxy-5795cc4cb5-6bsp7" Jan 21 07:19:19 crc kubenswrapper[4893]: I0121 07:19:19.408402 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2caca0fd-0f3f-4725-a196-04463abed671-combined-ca-bundle\") pod \"swift-proxy-5795cc4cb5-6bsp7\" (UID: \"2caca0fd-0f3f-4725-a196-04463abed671\") " pod="openstack/swift-proxy-5795cc4cb5-6bsp7" Jan 21 07:19:19 crc kubenswrapper[4893]: I0121 07:19:19.408887 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2caca0fd-0f3f-4725-a196-04463abed671-run-httpd\") pod \"swift-proxy-5795cc4cb5-6bsp7\" (UID: \"2caca0fd-0f3f-4725-a196-04463abed671\") " pod="openstack/swift-proxy-5795cc4cb5-6bsp7" Jan 21 07:19:19 crc kubenswrapper[4893]: I0121 07:19:19.409046 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2caca0fd-0f3f-4725-a196-04463abed671-public-tls-certs\") pod \"swift-proxy-5795cc4cb5-6bsp7\" (UID: \"2caca0fd-0f3f-4725-a196-04463abed671\") " pod="openstack/swift-proxy-5795cc4cb5-6bsp7" Jan 21 07:19:19 crc kubenswrapper[4893]: I0121 07:19:19.409279 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2caca0fd-0f3f-4725-a196-04463abed671-run-httpd\") pod \"swift-proxy-5795cc4cb5-6bsp7\" (UID: \"2caca0fd-0f3f-4725-a196-04463abed671\") " pod="openstack/swift-proxy-5795cc4cb5-6bsp7" Jan 21 07:19:19 crc 
kubenswrapper[4893]: I0121 07:19:19.409406 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2caca0fd-0f3f-4725-a196-04463abed671-internal-tls-certs\") pod \"swift-proxy-5795cc4cb5-6bsp7\" (UID: \"2caca0fd-0f3f-4725-a196-04463abed671\") " pod="openstack/swift-proxy-5795cc4cb5-6bsp7" Jan 21 07:19:19 crc kubenswrapper[4893]: I0121 07:19:19.409497 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2caca0fd-0f3f-4725-a196-04463abed671-config-data\") pod \"swift-proxy-5795cc4cb5-6bsp7\" (UID: \"2caca0fd-0f3f-4725-a196-04463abed671\") " pod="openstack/swift-proxy-5795cc4cb5-6bsp7" Jan 21 07:19:19 crc kubenswrapper[4893]: I0121 07:19:19.416828 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2caca0fd-0f3f-4725-a196-04463abed671-public-tls-certs\") pod \"swift-proxy-5795cc4cb5-6bsp7\" (UID: \"2caca0fd-0f3f-4725-a196-04463abed671\") " pod="openstack/swift-proxy-5795cc4cb5-6bsp7" Jan 21 07:19:19 crc kubenswrapper[4893]: I0121 07:19:19.417023 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2caca0fd-0f3f-4725-a196-04463abed671-etc-swift\") pod \"swift-proxy-5795cc4cb5-6bsp7\" (UID: \"2caca0fd-0f3f-4725-a196-04463abed671\") " pod="openstack/swift-proxy-5795cc4cb5-6bsp7" Jan 21 07:19:19 crc kubenswrapper[4893]: I0121 07:19:19.417281 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2caca0fd-0f3f-4725-a196-04463abed671-combined-ca-bundle\") pod \"swift-proxy-5795cc4cb5-6bsp7\" (UID: \"2caca0fd-0f3f-4725-a196-04463abed671\") " pod="openstack/swift-proxy-5795cc4cb5-6bsp7" Jan 21 07:19:19 crc kubenswrapper[4893]: I0121 07:19:19.420009 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2caca0fd-0f3f-4725-a196-04463abed671-internal-tls-certs\") pod \"swift-proxy-5795cc4cb5-6bsp7\" (UID: \"2caca0fd-0f3f-4725-a196-04463abed671\") " pod="openstack/swift-proxy-5795cc4cb5-6bsp7" Jan 21 07:19:19 crc kubenswrapper[4893]: I0121 07:19:19.420191 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2caca0fd-0f3f-4725-a196-04463abed671-config-data\") pod \"swift-proxy-5795cc4cb5-6bsp7\" (UID: \"2caca0fd-0f3f-4725-a196-04463abed671\") " pod="openstack/swift-proxy-5795cc4cb5-6bsp7" Jan 21 07:19:19 crc kubenswrapper[4893]: I0121 07:19:19.432556 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4bcx\" (UniqueName: \"kubernetes.io/projected/2caca0fd-0f3f-4725-a196-04463abed671-kube-api-access-r4bcx\") pod \"swift-proxy-5795cc4cb5-6bsp7\" (UID: \"2caca0fd-0f3f-4725-a196-04463abed671\") " pod="openstack/swift-proxy-5795cc4cb5-6bsp7" Jan 21 07:19:19 crc kubenswrapper[4893]: I0121 07:19:19.495440 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-5795cc4cb5-6bsp7" Jan 21 07:19:19 crc kubenswrapper[4893]: I0121 07:19:19.937704 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 21 07:19:20 crc kubenswrapper[4893]: I0121 07:19:20.086300 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5795cc4cb5-6bsp7"] Jan 21 07:19:20 crc kubenswrapper[4893]: I0121 07:19:20.240213 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5795cc4cb5-6bsp7" event={"ID":"2caca0fd-0f3f-4725-a196-04463abed671","Type":"ContainerStarted","Data":"80f8d8c3593fc890ffc40a914ae8ca1adfd69137113a1ca85ee6741d35d70488"} Jan 21 07:19:21 crc kubenswrapper[4893]: I0121 07:19:21.252253 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5795cc4cb5-6bsp7" event={"ID":"2caca0fd-0f3f-4725-a196-04463abed671","Type":"ContainerStarted","Data":"ebf33f7d57690c2e8c7fe0620ba29bb8deb01fa50964fb6ef7ca8c919172e1bf"} Jan 21 07:19:21 crc kubenswrapper[4893]: I0121 07:19:21.252552 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5795cc4cb5-6bsp7" Jan 21 07:19:21 crc kubenswrapper[4893]: I0121 07:19:21.252566 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5795cc4cb5-6bsp7" Jan 21 07:19:21 crc kubenswrapper[4893]: I0121 07:19:21.252574 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5795cc4cb5-6bsp7" event={"ID":"2caca0fd-0f3f-4725-a196-04463abed671","Type":"ContainerStarted","Data":"18e9d45b37e8d84945f0132ccb26b8b828ad2ef4ebd71d0f862ce04dc0922db6"} Jan 21 07:19:21 crc kubenswrapper[4893]: I0121 07:19:21.284457 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-5795cc4cb5-6bsp7" podStartSLOduration=2.284432065 podStartE2EDuration="2.284432065s" podCreationTimestamp="2026-01-21 07:19:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:19:21.273530618 +0000 UTC m=+1502.503876520" watchObservedRunningTime="2026-01-21 07:19:21.284432065 +0000 UTC m=+1502.514777967" Jan 21 07:19:25 crc kubenswrapper[4893]: I0121 07:19:25.186446 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 21 07:19:25 crc kubenswrapper[4893]: I0121 07:19:25.327594 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"12e11571-a021-4df2-a0da-69f56335a8c8","Type":"ContainerDied","Data":"b31663f1992e2af6945268e98496bc994813d8dc47da9902cf409a585e463f12"} Jan 21 07:19:25 crc kubenswrapper[4893]: I0121 07:19:25.327555 4893 generic.go:334] "Generic (PLEG): container finished" podID="12e11571-a021-4df2-a0da-69f56335a8c8" containerID="b31663f1992e2af6945268e98496bc994813d8dc47da9902cf409a585e463f12" exitCode=137 Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.115157 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.222700 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/12e11571-a021-4df2-a0da-69f56335a8c8-log-httpd\") pod \"12e11571-a021-4df2-a0da-69f56335a8c8\" (UID: \"12e11571-a021-4df2-a0da-69f56335a8c8\") " Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.222858 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqthd\" (UniqueName: \"kubernetes.io/projected/12e11571-a021-4df2-a0da-69f56335a8c8-kube-api-access-sqthd\") pod \"12e11571-a021-4df2-a0da-69f56335a8c8\" (UID: \"12e11571-a021-4df2-a0da-69f56335a8c8\") " Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.222884 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12e11571-a021-4df2-a0da-69f56335a8c8-config-data\") pod \"12e11571-a021-4df2-a0da-69f56335a8c8\" (UID: \"12e11571-a021-4df2-a0da-69f56335a8c8\") " Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.222952 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12e11571-a021-4df2-a0da-69f56335a8c8-scripts\") pod \"12e11571-a021-4df2-a0da-69f56335a8c8\" (UID: \"12e11571-a021-4df2-a0da-69f56335a8c8\") " Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.222967 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12e11571-a021-4df2-a0da-69f56335a8c8-combined-ca-bundle\") pod \"12e11571-a021-4df2-a0da-69f56335a8c8\" (UID: \"12e11571-a021-4df2-a0da-69f56335a8c8\") " Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.222998 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/12e11571-a021-4df2-a0da-69f56335a8c8-run-httpd\") pod \"12e11571-a021-4df2-a0da-69f56335a8c8\" (UID: \"12e11571-a021-4df2-a0da-69f56335a8c8\") " Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.223053 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/12e11571-a021-4df2-a0da-69f56335a8c8-sg-core-conf-yaml\") pod \"12e11571-a021-4df2-a0da-69f56335a8c8\" (UID: \"12e11571-a021-4df2-a0da-69f56335a8c8\") " Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.224395 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12e11571-a021-4df2-a0da-69f56335a8c8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "12e11571-a021-4df2-a0da-69f56335a8c8" (UID: "12e11571-a021-4df2-a0da-69f56335a8c8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.224541 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12e11571-a021-4df2-a0da-69f56335a8c8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "12e11571-a021-4df2-a0da-69f56335a8c8" (UID: "12e11571-a021-4df2-a0da-69f56335a8c8"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.228814 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12e11571-a021-4df2-a0da-69f56335a8c8-scripts" (OuterVolumeSpecName: "scripts") pod "12e11571-a021-4df2-a0da-69f56335a8c8" (UID: "12e11571-a021-4df2-a0da-69f56335a8c8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.228896 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12e11571-a021-4df2-a0da-69f56335a8c8-kube-api-access-sqthd" (OuterVolumeSpecName: "kube-api-access-sqthd") pod "12e11571-a021-4df2-a0da-69f56335a8c8" (UID: "12e11571-a021-4df2-a0da-69f56335a8c8"). InnerVolumeSpecName "kube-api-access-sqthd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.251977 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12e11571-a021-4df2-a0da-69f56335a8c8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "12e11571-a021-4df2-a0da-69f56335a8c8" (UID: "12e11571-a021-4df2-a0da-69f56335a8c8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.270359 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12e11571-a021-4df2-a0da-69f56335a8c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "12e11571-a021-4df2-a0da-69f56335a8c8" (UID: "12e11571-a021-4df2-a0da-69f56335a8c8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.291180 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12e11571-a021-4df2-a0da-69f56335a8c8-config-data" (OuterVolumeSpecName: "config-data") pod "12e11571-a021-4df2-a0da-69f56335a8c8" (UID: "12e11571-a021-4df2-a0da-69f56335a8c8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.325268 4893 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/12e11571-a021-4df2-a0da-69f56335a8c8-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.325311 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqthd\" (UniqueName: \"kubernetes.io/projected/12e11571-a021-4df2-a0da-69f56335a8c8-kube-api-access-sqthd\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.325322 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12e11571-a021-4df2-a0da-69f56335a8c8-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.325330 4893 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12e11571-a021-4df2-a0da-69f56335a8c8-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.325338 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12e11571-a021-4df2-a0da-69f56335a8c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.325347 4893 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/12e11571-a021-4df2-a0da-69f56335a8c8-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.325355 4893 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/12e11571-a021-4df2-a0da-69f56335a8c8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.364640 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"93e402d6-b354-4755-83c3-68e43e53c19b","Type":"ContainerStarted","Data":"8efe8fb8b75568eba645314bce31b548eb596cda1bd127a11deb8d7d4c539845"} Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.369548 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"12e11571-a021-4df2-a0da-69f56335a8c8","Type":"ContainerDied","Data":"cd946dadd2c547b19b1c419054a0df8bf9ac0fae659eced9d7442cabb16fe2f3"} Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.369700 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.370758 4893 scope.go:117] "RemoveContainer" containerID="b31663f1992e2af6945268e98496bc994813d8dc47da9902cf409a585e463f12" Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.389392 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.85172391 podStartE2EDuration="13.389357347s" podCreationTimestamp="2026-01-21 07:19:15 +0000 UTC" firstStartedPulling="2026-01-21 07:19:16.228092529 +0000 UTC m=+1497.458438441" lastFinishedPulling="2026-01-21 07:19:27.765725976 +0000 UTC m=+1508.996071878" observedRunningTime="2026-01-21 07:19:28.384247449 +0000 UTC m=+1509.614593351" watchObservedRunningTime="2026-01-21 07:19:28.389357347 +0000 UTC m=+1509.619703249" Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.408315 4893 scope.go:117] "RemoveContainer" containerID="38f2063ef2f3f9534be2a583dd44bdf6e6c92f12136106182a4695d712b1d793" Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.436504 4893 scope.go:117] "RemoveContainer" containerID="b8ee096a91a103ddcd42ac9bac6ae684868e8a9bc43404e2173501e612512c0c" Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.441284 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.451648 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.488656 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 07:19:28 crc kubenswrapper[4893]: E0121 07:19:28.489481 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12e11571-a021-4df2-a0da-69f56335a8c8" containerName="ceilometer-notification-agent" Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.489504 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="12e11571-a021-4df2-a0da-69f56335a8c8" containerName="ceilometer-notification-agent" Jan 21 07:19:28 crc kubenswrapper[4893]: E0121 07:19:28.489516 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12e11571-a021-4df2-a0da-69f56335a8c8" containerName="proxy-httpd" Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.489524 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="12e11571-a021-4df2-a0da-69f56335a8c8" containerName="proxy-httpd" Jan 21 07:19:28 crc kubenswrapper[4893]: E0121 07:19:28.489588 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12e11571-a021-4df2-a0da-69f56335a8c8" containerName="sg-core" Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.489597 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="12e11571-a021-4df2-a0da-69f56335a8c8" containerName="sg-core" Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.490413 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="12e11571-a021-4df2-a0da-69f56335a8c8" containerName="ceilometer-notification-agent" Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.490457 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="12e11571-a021-4df2-a0da-69f56335a8c8" containerName="sg-core" Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.490473 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="12e11571-a021-4df2-a0da-69f56335a8c8" containerName="proxy-httpd" Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.511099 4893 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/ceilometer-0"] Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.511258 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.514430 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.517332 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.635996 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ff32aa7-cc5f-4ff3-a847-0705c9798386-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2ff32aa7-cc5f-4ff3-a847-0705c9798386\") " pod="openstack/ceilometer-0" Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.636061 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc4vr\" (UniqueName: \"kubernetes.io/projected/2ff32aa7-cc5f-4ff3-a847-0705c9798386-kube-api-access-cc4vr\") pod \"ceilometer-0\" (UID: \"2ff32aa7-cc5f-4ff3-a847-0705c9798386\") " pod="openstack/ceilometer-0" Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.636094 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ff32aa7-cc5f-4ff3-a847-0705c9798386-config-data\") pod \"ceilometer-0\" (UID: \"2ff32aa7-cc5f-4ff3-a847-0705c9798386\") " pod="openstack/ceilometer-0" Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.636116 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ff32aa7-cc5f-4ff3-a847-0705c9798386-run-httpd\") pod \"ceilometer-0\" (UID: \"2ff32aa7-cc5f-4ff3-a847-0705c9798386\") " pod="openstack/ceilometer-0" Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.636184 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ff32aa7-cc5f-4ff3-a847-0705c9798386-scripts\") pod \"ceilometer-0\" (UID: \"2ff32aa7-cc5f-4ff3-a847-0705c9798386\") " pod="openstack/ceilometer-0" Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.636299 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ff32aa7-cc5f-4ff3-a847-0705c9798386-log-httpd\") pod \"ceilometer-0\" (UID: \"2ff32aa7-cc5f-4ff3-a847-0705c9798386\") " pod="openstack/ceilometer-0" Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.636338 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2ff32aa7-cc5f-4ff3-a847-0705c9798386-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2ff32aa7-cc5f-4ff3-a847-0705c9798386\") " pod="openstack/ceilometer-0" Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.738376 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ff32aa7-cc5f-4ff3-a847-0705c9798386-log-httpd\") pod \"ceilometer-0\" (UID: \"2ff32aa7-cc5f-4ff3-a847-0705c9798386\") " pod="openstack/ceilometer-0" Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 
07:19:28.738433 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2ff32aa7-cc5f-4ff3-a847-0705c9798386-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2ff32aa7-cc5f-4ff3-a847-0705c9798386\") " pod="openstack/ceilometer-0" Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.738527 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ff32aa7-cc5f-4ff3-a847-0705c9798386-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2ff32aa7-cc5f-4ff3-a847-0705c9798386\") " pod="openstack/ceilometer-0" Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.738555 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cc4vr\" (UniqueName: \"kubernetes.io/projected/2ff32aa7-cc5f-4ff3-a847-0705c9798386-kube-api-access-cc4vr\") pod \"ceilometer-0\" (UID: \"2ff32aa7-cc5f-4ff3-a847-0705c9798386\") " pod="openstack/ceilometer-0" Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.738584 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ff32aa7-cc5f-4ff3-a847-0705c9798386-config-data\") pod \"ceilometer-0\" (UID: \"2ff32aa7-cc5f-4ff3-a847-0705c9798386\") " pod="openstack/ceilometer-0" Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.738616 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ff32aa7-cc5f-4ff3-a847-0705c9798386-run-httpd\") pod \"ceilometer-0\" (UID: \"2ff32aa7-cc5f-4ff3-a847-0705c9798386\") " pod="openstack/ceilometer-0" Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.738685 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ff32aa7-cc5f-4ff3-a847-0705c9798386-scripts\") pod \"ceilometer-0\" (UID: \"2ff32aa7-cc5f-4ff3-a847-0705c9798386\") " pod="openstack/ceilometer-0" Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.739663 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ff32aa7-cc5f-4ff3-a847-0705c9798386-log-httpd\") pod \"ceilometer-0\" (UID: \"2ff32aa7-cc5f-4ff3-a847-0705c9798386\") " pod="openstack/ceilometer-0" Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.741022 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ff32aa7-cc5f-4ff3-a847-0705c9798386-run-httpd\") pod \"ceilometer-0\" (UID: \"2ff32aa7-cc5f-4ff3-a847-0705c9798386\") " pod="openstack/ceilometer-0" Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.743400 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ff32aa7-cc5f-4ff3-a847-0705c9798386-config-data\") pod \"ceilometer-0\" (UID: \"2ff32aa7-cc5f-4ff3-a847-0705c9798386\") " pod="openstack/ceilometer-0" Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.745774 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ff32aa7-cc5f-4ff3-a847-0705c9798386-scripts\") pod \"ceilometer-0\" (UID: \"2ff32aa7-cc5f-4ff3-a847-0705c9798386\") " pod="openstack/ceilometer-0" Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.746152 4893 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2ff32aa7-cc5f-4ff3-a847-0705c9798386-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2ff32aa7-cc5f-4ff3-a847-0705c9798386\") " pod="openstack/ceilometer-0" Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.750268 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ff32aa7-cc5f-4ff3-a847-0705c9798386-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2ff32aa7-cc5f-4ff3-a847-0705c9798386\") " pod="openstack/ceilometer-0" Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.757342 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cc4vr\" (UniqueName: \"kubernetes.io/projected/2ff32aa7-cc5f-4ff3-a847-0705c9798386-kube-api-access-cc4vr\") pod \"ceilometer-0\" (UID: \"2ff32aa7-cc5f-4ff3-a847-0705c9798386\") " pod="openstack/ceilometer-0" Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.836364 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 07:19:28 crc kubenswrapper[4893]: I0121 07:19:28.863053 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 07:19:29 crc kubenswrapper[4893]: I0121 07:19:29.376756 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 07:19:29 crc kubenswrapper[4893]: W0121 07:19:29.383893 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ff32aa7_cc5f_4ff3_a847_0705c9798386.slice/crio-6feb76e75a5045c1e0c5b0c21fec6c9eef9b71d8853ef5de61eb24bb83bdc6f4 WatchSource:0}: Error finding container 6feb76e75a5045c1e0c5b0c21fec6c9eef9b71d8853ef5de61eb24bb83bdc6f4: Status 404 returned error can't find the container with id 6feb76e75a5045c1e0c5b0c21fec6c9eef9b71d8853ef5de61eb24bb83bdc6f4 Jan 21 07:19:29 crc kubenswrapper[4893]: I0121 07:19:29.503163 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5795cc4cb5-6bsp7" Jan 21 07:19:29 crc kubenswrapper[4893]: I0121 07:19:29.503546 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5795cc4cb5-6bsp7" Jan 21 07:19:29 crc kubenswrapper[4893]: I0121 07:19:29.702617 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12e11571-a021-4df2-a0da-69f56335a8c8" path="/var/lib/kubelet/pods/12e11571-a021-4df2-a0da-69f56335a8c8/volumes" Jan 21 07:19:30 crc kubenswrapper[4893]: I0121 07:19:30.393974 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ff32aa7-cc5f-4ff3-a847-0705c9798386","Type":"ContainerStarted","Data":"0da5529129e93683fb00eb3d6e40cdcd20d65910585f178ecada4cd6deef6655"} Jan 21 07:19:30 crc kubenswrapper[4893]: I0121 07:19:30.394314 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ff32aa7-cc5f-4ff3-a847-0705c9798386","Type":"ContainerStarted","Data":"6feb76e75a5045c1e0c5b0c21fec6c9eef9b71d8853ef5de61eb24bb83bdc6f4"} Jan 21 07:19:31 crc kubenswrapper[4893]: I0121 07:19:31.407811 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ff32aa7-cc5f-4ff3-a847-0705c9798386","Type":"ContainerStarted","Data":"8ddc8045de363665f9e39c90c99ed310620f36a8fc59b0db16d99771976e9236"} Jan 21 07:19:32 crc kubenswrapper[4893]: I0121 07:19:32.418216 4893 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ff32aa7-cc5f-4ff3-a847-0705c9798386","Type":"ContainerStarted","Data":"a4b53ba1ef698330b3a3a242e4dbc06f6254cb586a9447cee6bd2d4575591ab0"} Jan 21 07:19:33 crc kubenswrapper[4893]: I0121 07:19:33.879215 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-jppdg"] Jan 21 07:19:33 crc kubenswrapper[4893]: I0121 07:19:33.881067 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-jppdg" Jan 21 07:19:33 crc kubenswrapper[4893]: I0121 07:19:33.887426 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-jppdg"] Jan 21 07:19:33 crc kubenswrapper[4893]: I0121 07:19:33.983964 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-nzf4p"] Jan 21 07:19:33 crc kubenswrapper[4893]: I0121 07:19:33.989067 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-nzf4p" Jan 21 07:19:34 crc kubenswrapper[4893]: I0121 07:19:34.003549 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-nzf4p"] Jan 21 07:19:34 crc kubenswrapper[4893]: I0121 07:19:34.056303 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pwz7\" (UniqueName: \"kubernetes.io/projected/aa87df83-9b5d-4ca4-9fd1-b16a3a6cc31e-kube-api-access-4pwz7\") pod \"nova-api-db-create-jppdg\" (UID: \"aa87df83-9b5d-4ca4-9fd1-b16a3a6cc31e\") " pod="openstack/nova-api-db-create-jppdg" Jan 21 07:19:34 crc kubenswrapper[4893]: I0121 07:19:34.056385 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa87df83-9b5d-4ca4-9fd1-b16a3a6cc31e-operator-scripts\") pod \"nova-api-db-create-jppdg\" (UID: \"aa87df83-9b5d-4ca4-9fd1-b16a3a6cc31e\") " pod="openstack/nova-api-db-create-jppdg" Jan 21 07:19:34 crc kubenswrapper[4893]: I0121 07:19:34.082088 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-hjxfg"] Jan 21 07:19:34 crc kubenswrapper[4893]: I0121 07:19:34.083259 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-hjxfg" Jan 21 07:19:34 crc kubenswrapper[4893]: I0121 07:19:34.101120 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-d4ab-account-create-update-m77l4"] Jan 21 07:19:34 crc kubenswrapper[4893]: I0121 07:19:34.102264 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-d4ab-account-create-update-m77l4" Jan 21 07:19:34 crc kubenswrapper[4893]: I0121 07:19:34.112105 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 21 07:19:34 crc kubenswrapper[4893]: I0121 07:19:34.126758 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-d4ab-account-create-update-m77l4"] Jan 21 07:19:34 crc kubenswrapper[4893]: I0121 07:19:34.135714 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-hjxfg"] Jan 21 07:19:34 crc kubenswrapper[4893]: I0121 07:19:34.158138 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f9e755b-1a33-409b-b4b7-926bcfecb0b5-operator-scripts\") pod \"nova-cell0-db-create-nzf4p\" (UID: \"9f9e755b-1a33-409b-b4b7-926bcfecb0b5\") " pod="openstack/nova-cell0-db-create-nzf4p" Jan 21 07:19:34 crc kubenswrapper[4893]: I0121 07:19:34.158237 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtljn\" (UniqueName: \"kubernetes.io/projected/9f9e755b-1a33-409b-b4b7-926bcfecb0b5-kube-api-access-wtljn\") pod \"nova-cell0-db-create-nzf4p\" (UID: \"9f9e755b-1a33-409b-b4b7-926bcfecb0b5\") " pod="openstack/nova-cell0-db-create-nzf4p" Jan 21 07:19:34 crc kubenswrapper[4893]: I0121 07:19:34.158297 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pwz7\" (UniqueName: \"kubernetes.io/projected/aa87df83-9b5d-4ca4-9fd1-b16a3a6cc31e-kube-api-access-4pwz7\") pod \"nova-api-db-create-jppdg\" (UID: \"aa87df83-9b5d-4ca4-9fd1-b16a3a6cc31e\") " pod="openstack/nova-api-db-create-jppdg" Jan 21 07:19:34 crc kubenswrapper[4893]: I0121 07:19:34.158351 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa87df83-9b5d-4ca4-9fd1-b16a3a6cc31e-operator-scripts\") pod \"nova-api-db-create-jppdg\" (UID: \"aa87df83-9b5d-4ca4-9fd1-b16a3a6cc31e\") " pod="openstack/nova-api-db-create-jppdg" Jan 21 07:19:34 crc kubenswrapper[4893]: I0121 07:19:34.159510 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa87df83-9b5d-4ca4-9fd1-b16a3a6cc31e-operator-scripts\") pod \"nova-api-db-create-jppdg\" (UID: \"aa87df83-9b5d-4ca4-9fd1-b16a3a6cc31e\") " pod="openstack/nova-api-db-create-jppdg" Jan 21 07:19:34 crc kubenswrapper[4893]: I0121 07:19:34.191542 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pwz7\" (UniqueName: \"kubernetes.io/projected/aa87df83-9b5d-4ca4-9fd1-b16a3a6cc31e-kube-api-access-4pwz7\") pod \"nova-api-db-create-jppdg\" (UID: \"aa87df83-9b5d-4ca4-9fd1-b16a3a6cc31e\") " pod="openstack/nova-api-db-create-jppdg" Jan 21 07:19:34 crc kubenswrapper[4893]: I0121 07:19:34.200389 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-jppdg" Jan 21 07:19:34 crc kubenswrapper[4893]: I0121 07:19:34.261833 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f9e755b-1a33-409b-b4b7-926bcfecb0b5-operator-scripts\") pod \"nova-cell0-db-create-nzf4p\" (UID: \"9f9e755b-1a33-409b-b4b7-926bcfecb0b5\") " pod="openstack/nova-cell0-db-create-nzf4p" Jan 21 07:19:34 crc kubenswrapper[4893]: I0121 07:19:34.262239 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8f4d459-c1f7-43e5-a648-e0ee4bf83fb1-operator-scripts\") pod \"nova-api-d4ab-account-create-update-m77l4\" (UID: \"c8f4d459-c1f7-43e5-a648-e0ee4bf83fb1\") " pod="openstack/nova-api-d4ab-account-create-update-m77l4" Jan 21 07:19:34 crc kubenswrapper[4893]: I0121 07:19:34.262405 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ca5c8f3-90dc-4b2e-989d-0448280e2e48-operator-scripts\") pod \"nova-cell1-db-create-hjxfg\" (UID: \"5ca5c8f3-90dc-4b2e-989d-0448280e2e48\") " pod="openstack/nova-cell1-db-create-hjxfg" Jan 21 07:19:34 crc kubenswrapper[4893]: I0121 07:19:34.262614 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f9e755b-1a33-409b-b4b7-926bcfecb0b5-operator-scripts\") pod \"nova-cell0-db-create-nzf4p\" (UID: \"9f9e755b-1a33-409b-b4b7-926bcfecb0b5\") " pod="openstack/nova-cell0-db-create-nzf4p" Jan 21 07:19:34 crc kubenswrapper[4893]: I0121 07:19:34.262574 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtljn\" (UniqueName: \"kubernetes.io/projected/9f9e755b-1a33-409b-b4b7-926bcfecb0b5-kube-api-access-wtljn\") pod \"nova-cell0-db-create-nzf4p\" (UID: \"9f9e755b-1a33-409b-b4b7-926bcfecb0b5\") " pod="openstack/nova-cell0-db-create-nzf4p" Jan 21 07:19:34 crc kubenswrapper[4893]: I0121 07:19:34.262964 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqqpl\" (UniqueName: \"kubernetes.io/projected/c8f4d459-c1f7-43e5-a648-e0ee4bf83fb1-kube-api-access-jqqpl\") pod \"nova-api-d4ab-account-create-update-m77l4\" (UID: \"c8f4d459-c1f7-43e5-a648-e0ee4bf83fb1\") " pod="openstack/nova-api-d4ab-account-create-update-m77l4" Jan 21 07:19:34 crc kubenswrapper[4893]: I0121 07:19:34.263058 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q98hv\" (UniqueName: \"kubernetes.io/projected/5ca5c8f3-90dc-4b2e-989d-0448280e2e48-kube-api-access-q98hv\") pod \"nova-cell1-db-create-hjxfg\" (UID: \"5ca5c8f3-90dc-4b2e-989d-0448280e2e48\") " pod="openstack/nova-cell1-db-create-hjxfg" Jan 21 07:19:34 crc kubenswrapper[4893]: I0121 07:19:34.293192 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtljn\" (UniqueName: \"kubernetes.io/projected/9f9e755b-1a33-409b-b4b7-926bcfecb0b5-kube-api-access-wtljn\") pod \"nova-cell0-db-create-nzf4p\" (UID: \"9f9e755b-1a33-409b-b4b7-926bcfecb0b5\") " pod="openstack/nova-cell0-db-create-nzf4p" Jan 21 07:19:34 crc kubenswrapper[4893]: I0121 07:19:34.312943 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-daab-account-create-update-c75q5"] Jan 21 07:19:34 crc kubenswrapper[4893]: 
I0121 07:19:34.313347 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-nzf4p" Jan 21 07:19:34 crc kubenswrapper[4893]: I0121 07:19:34.314989 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-daab-account-create-update-c75q5" Jan 21 07:19:34 crc kubenswrapper[4893]: I0121 07:19:34.318449 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 21 07:19:34 crc kubenswrapper[4893]: I0121 07:19:34.331695 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-daab-account-create-update-c75q5"] Jan 21 07:19:34 crc kubenswrapper[4893]: I0121 07:19:34.368658 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q98hv\" (UniqueName: \"kubernetes.io/projected/5ca5c8f3-90dc-4b2e-989d-0448280e2e48-kube-api-access-q98hv\") pod \"nova-cell1-db-create-hjxfg\" (UID: \"5ca5c8f3-90dc-4b2e-989d-0448280e2e48\") " pod="openstack/nova-cell1-db-create-hjxfg" Jan 21 07:19:34 crc kubenswrapper[4893]: I0121 07:19:34.368907 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8f4d459-c1f7-43e5-a648-e0ee4bf83fb1-operator-scripts\") pod \"nova-api-d4ab-account-create-update-m77l4\" (UID: \"c8f4d459-c1f7-43e5-a648-e0ee4bf83fb1\") " pod="openstack/nova-api-d4ab-account-create-update-m77l4" Jan 21 07:19:34 crc kubenswrapper[4893]: I0121 07:19:34.368936 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ca5c8f3-90dc-4b2e-989d-0448280e2e48-operator-scripts\") pod \"nova-cell1-db-create-hjxfg\" (UID: \"5ca5c8f3-90dc-4b2e-989d-0448280e2e48\") " pod="openstack/nova-cell1-db-create-hjxfg" Jan 21 07:19:34 crc kubenswrapper[4893]: I0121 07:19:34.369065 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqqpl\" (UniqueName: \"kubernetes.io/projected/c8f4d459-c1f7-43e5-a648-e0ee4bf83fb1-kube-api-access-jqqpl\") pod \"nova-api-d4ab-account-create-update-m77l4\" (UID: \"c8f4d459-c1f7-43e5-a648-e0ee4bf83fb1\") " pod="openstack/nova-api-d4ab-account-create-update-m77l4" Jan 21 07:19:34 crc kubenswrapper[4893]: I0121 07:19:34.370307 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8f4d459-c1f7-43e5-a648-e0ee4bf83fb1-operator-scripts\") pod \"nova-api-d4ab-account-create-update-m77l4\" (UID: \"c8f4d459-c1f7-43e5-a648-e0ee4bf83fb1\") " pod="openstack/nova-api-d4ab-account-create-update-m77l4" Jan 21 07:19:34 crc kubenswrapper[4893]: I0121 07:19:34.376308 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ca5c8f3-90dc-4b2e-989d-0448280e2e48-operator-scripts\") pod \"nova-cell1-db-create-hjxfg\" (UID: \"5ca5c8f3-90dc-4b2e-989d-0448280e2e48\") " pod="openstack/nova-cell1-db-create-hjxfg" Jan 21 07:19:34 crc kubenswrapper[4893]: I0121 07:19:34.398952 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqqpl\" (UniqueName: \"kubernetes.io/projected/c8f4d459-c1f7-43e5-a648-e0ee4bf83fb1-kube-api-access-jqqpl\") pod \"nova-api-d4ab-account-create-update-m77l4\" (UID: \"c8f4d459-c1f7-43e5-a648-e0ee4bf83fb1\") " pod="openstack/nova-api-d4ab-account-create-update-m77l4" Jan 21 07:19:34 crc 
kubenswrapper[4893]: I0121 07:19:34.399681 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-f862-account-create-update-wdbxf"] Jan 21 07:19:34 crc kubenswrapper[4893]: I0121 07:19:34.400942 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q98hv\" (UniqueName: \"kubernetes.io/projected/5ca5c8f3-90dc-4b2e-989d-0448280e2e48-kube-api-access-q98hv\") pod \"nova-cell1-db-create-hjxfg\" (UID: \"5ca5c8f3-90dc-4b2e-989d-0448280e2e48\") " pod="openstack/nova-cell1-db-create-hjxfg" Jan 21 07:19:34 crc kubenswrapper[4893]: I0121 07:19:34.401613 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-f862-account-create-update-wdbxf" Jan 21 07:19:34 crc kubenswrapper[4893]: I0121 07:19:34.410252 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 21 07:19:34 crc kubenswrapper[4893]: I0121 07:19:34.417137 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-hjxfg" Jan 21 07:19:34 crc kubenswrapper[4893]: I0121 07:19:34.431847 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-f862-account-create-update-wdbxf"] Jan 21 07:19:34 crc kubenswrapper[4893]: I0121 07:19:34.456298 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-d4ab-account-create-update-m77l4" Jan 21 07:19:34 crc kubenswrapper[4893]: I0121 07:19:34.487617 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpn78\" (UniqueName: \"kubernetes.io/projected/5ab997d7-7cc4-49d1-bb60-9459e0d838e2-kube-api-access-jpn78\") pod \"nova-cell1-f862-account-create-update-wdbxf\" (UID: \"5ab997d7-7cc4-49d1-bb60-9459e0d838e2\") " pod="openstack/nova-cell1-f862-account-create-update-wdbxf" Jan 21 07:19:34 crc kubenswrapper[4893]: I0121 07:19:34.491192 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ksdb\" (UniqueName: \"kubernetes.io/projected/4143e0f4-e3d5-44f7-aafd-38f977694010-kube-api-access-9ksdb\") pod \"nova-cell0-daab-account-create-update-c75q5\" (UID: \"4143e0f4-e3d5-44f7-aafd-38f977694010\") " pod="openstack/nova-cell0-daab-account-create-update-c75q5" Jan 21 07:19:34 crc kubenswrapper[4893]: I0121 07:19:34.491373 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4143e0f4-e3d5-44f7-aafd-38f977694010-operator-scripts\") pod \"nova-cell0-daab-account-create-update-c75q5\" (UID: \"4143e0f4-e3d5-44f7-aafd-38f977694010\") " pod="openstack/nova-cell0-daab-account-create-update-c75q5" Jan 21 07:19:34 crc kubenswrapper[4893]: I0121 07:19:34.491471 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ab997d7-7cc4-49d1-bb60-9459e0d838e2-operator-scripts\") pod \"nova-cell1-f862-account-create-update-wdbxf\" (UID: \"5ab997d7-7cc4-49d1-bb60-9459e0d838e2\") " pod="openstack/nova-cell1-f862-account-create-update-wdbxf" Jan 21 07:19:34 crc kubenswrapper[4893]: I0121 07:19:34.598025 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpn78\" (UniqueName: \"kubernetes.io/projected/5ab997d7-7cc4-49d1-bb60-9459e0d838e2-kube-api-access-jpn78\") pod 
\"nova-cell1-f862-account-create-update-wdbxf\" (UID: \"5ab997d7-7cc4-49d1-bb60-9459e0d838e2\") " pod="openstack/nova-cell1-f862-account-create-update-wdbxf" Jan 21 07:19:34 crc kubenswrapper[4893]: I0121 07:19:34.598099 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ksdb\" (UniqueName: \"kubernetes.io/projected/4143e0f4-e3d5-44f7-aafd-38f977694010-kube-api-access-9ksdb\") pod \"nova-cell0-daab-account-create-update-c75q5\" (UID: \"4143e0f4-e3d5-44f7-aafd-38f977694010\") " pod="openstack/nova-cell0-daab-account-create-update-c75q5" Jan 21 07:19:34 crc kubenswrapper[4893]: I0121 07:19:34.598142 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4143e0f4-e3d5-44f7-aafd-38f977694010-operator-scripts\") pod \"nova-cell0-daab-account-create-update-c75q5\" (UID: \"4143e0f4-e3d5-44f7-aafd-38f977694010\") " pod="openstack/nova-cell0-daab-account-create-update-c75q5" Jan 21 07:19:34 crc kubenswrapper[4893]: I0121 07:19:34.598170 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ab997d7-7cc4-49d1-bb60-9459e0d838e2-operator-scripts\") pod \"nova-cell1-f862-account-create-update-wdbxf\" (UID: \"5ab997d7-7cc4-49d1-bb60-9459e0d838e2\") " pod="openstack/nova-cell1-f862-account-create-update-wdbxf" Jan 21 07:19:34 crc kubenswrapper[4893]: I0121 07:19:34.599274 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4143e0f4-e3d5-44f7-aafd-38f977694010-operator-scripts\") pod \"nova-cell0-daab-account-create-update-c75q5\" (UID: \"4143e0f4-e3d5-44f7-aafd-38f977694010\") " pod="openstack/nova-cell0-daab-account-create-update-c75q5" Jan 21 07:19:34 crc kubenswrapper[4893]: I0121 07:19:34.603002 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ab997d7-7cc4-49d1-bb60-9459e0d838e2-operator-scripts\") pod \"nova-cell1-f862-account-create-update-wdbxf\" (UID: \"5ab997d7-7cc4-49d1-bb60-9459e0d838e2\") " pod="openstack/nova-cell1-f862-account-create-update-wdbxf" Jan 21 07:19:34 crc kubenswrapper[4893]: I0121 07:19:34.633517 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ksdb\" (UniqueName: \"kubernetes.io/projected/4143e0f4-e3d5-44f7-aafd-38f977694010-kube-api-access-9ksdb\") pod \"nova-cell0-daab-account-create-update-c75q5\" (UID: \"4143e0f4-e3d5-44f7-aafd-38f977694010\") " pod="openstack/nova-cell0-daab-account-create-update-c75q5" Jan 21 07:19:34 crc kubenswrapper[4893]: I0121 07:19:34.633527 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpn78\" (UniqueName: \"kubernetes.io/projected/5ab997d7-7cc4-49d1-bb60-9459e0d838e2-kube-api-access-jpn78\") pod \"nova-cell1-f862-account-create-update-wdbxf\" (UID: \"5ab997d7-7cc4-49d1-bb60-9459e0d838e2\") " pod="openstack/nova-cell1-f862-account-create-update-wdbxf" Jan 21 07:19:34 crc kubenswrapper[4893]: I0121 07:19:34.807622 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-daab-account-create-update-c75q5" Jan 21 07:19:34 crc kubenswrapper[4893]: I0121 07:19:34.818326 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-jppdg"] Jan 21 07:19:34 crc kubenswrapper[4893]: W0121 07:19:34.821020 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa87df83_9b5d_4ca4_9fd1_b16a3a6cc31e.slice/crio-09f7a5166f983bb7323e3eadf2f056788f8fcf1f5ccd8fd27cec7948e480b5f9 WatchSource:0}: Error finding container 09f7a5166f983bb7323e3eadf2f056788f8fcf1f5ccd8fd27cec7948e480b5f9: Status 404 returned error can't find the container with id 09f7a5166f983bb7323e3eadf2f056788f8fcf1f5ccd8fd27cec7948e480b5f9 Jan 21 07:19:34 crc kubenswrapper[4893]: I0121 07:19:34.848304 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-f862-account-create-update-wdbxf" Jan 21 07:19:35 crc kubenswrapper[4893]: I0121 07:19:35.047328 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-hjxfg"] Jan 21 07:19:35 crc kubenswrapper[4893]: I0121 07:19:35.068194 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-nzf4p"] Jan 21 07:19:35 crc kubenswrapper[4893]: I0121 07:19:35.179936 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-d4ab-account-create-update-m77l4"] Jan 21 07:19:35 crc kubenswrapper[4893]: I0121 07:19:35.489749 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-daab-account-create-update-c75q5"] Jan 21 07:19:35 crc kubenswrapper[4893]: I0121 07:19:35.510506 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-f862-account-create-update-wdbxf"] Jan 21 07:19:35 crc kubenswrapper[4893]: I0121 07:19:35.537376 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-hjxfg" event={"ID":"5ca5c8f3-90dc-4b2e-989d-0448280e2e48","Type":"ContainerStarted","Data":"df3f218713a1a3e87462926b70814e835ae16e43f96d8ac34ef178ec92248ed4"} Jan 21 07:19:35 crc kubenswrapper[4893]: I0121 07:19:35.537605 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-hjxfg" event={"ID":"5ca5c8f3-90dc-4b2e-989d-0448280e2e48","Type":"ContainerStarted","Data":"ac8cda60eb4d146bb05301fb46260d1751e8a994582c7c4378544ed2c2b1679f"} Jan 21 07:19:35 crc kubenswrapper[4893]: I0121 07:19:35.552016 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-jppdg" event={"ID":"aa87df83-9b5d-4ca4-9fd1-b16a3a6cc31e","Type":"ContainerStarted","Data":"3cf08623b3d4ea248b91b7f0ba462c043519e8b8478a0fd43d26a24fe8b82d50"} Jan 21 07:19:35 crc kubenswrapper[4893]: I0121 07:19:35.552057 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-jppdg" event={"ID":"aa87df83-9b5d-4ca4-9fd1-b16a3a6cc31e","Type":"ContainerStarted","Data":"09f7a5166f983bb7323e3eadf2f056788f8fcf1f5ccd8fd27cec7948e480b5f9"} Jan 21 07:19:35 crc kubenswrapper[4893]: I0121 07:19:35.568950 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-nzf4p" event={"ID":"9f9e755b-1a33-409b-b4b7-926bcfecb0b5","Type":"ContainerStarted","Data":"803e737ec89f22da8cda456dd4be3042d27b3037fcf45d2411691307415834c7"} Jan 21 07:19:35 crc kubenswrapper[4893]: I0121 07:19:35.588314 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell1-db-create-hjxfg" podStartSLOduration=1.588275938 podStartE2EDuration="1.588275938s" podCreationTimestamp="2026-01-21 07:19:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:19:35.569346518 +0000 UTC m=+1516.799692420" watchObservedRunningTime="2026-01-21 07:19:35.588275938 +0000 UTC m=+1516.818621840" Jan 21 07:19:35 crc kubenswrapper[4893]: I0121 07:19:35.596327 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2ff32aa7-cc5f-4ff3-a847-0705c9798386" containerName="ceilometer-central-agent" containerID="cri-o://0da5529129e93683fb00eb3d6e40cdcd20d65910585f178ecada4cd6deef6655" gracePeriod=30 Jan 21 07:19:35 crc kubenswrapper[4893]: I0121 07:19:35.596578 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2ff32aa7-cc5f-4ff3-a847-0705c9798386" containerName="proxy-httpd" containerID="cri-o://b2636c805db57dc1616bd7f0e4998d481055e546849369463e5c45e5c2452955" gracePeriod=30 Jan 21 07:19:35 crc kubenswrapper[4893]: I0121 07:19:35.596644 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2ff32aa7-cc5f-4ff3-a847-0705c9798386" containerName="sg-core" containerID="cri-o://a4b53ba1ef698330b3a3a242e4dbc06f6254cb586a9447cee6bd2d4575591ab0" gracePeriod=30 Jan 21 07:19:35 crc kubenswrapper[4893]: I0121 07:19:35.596715 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2ff32aa7-cc5f-4ff3-a847-0705c9798386" containerName="ceilometer-notification-agent" containerID="cri-o://8ddc8045de363665f9e39c90c99ed310620f36a8fc59b0db16d99771976e9236" gracePeriod=30 Jan 21 07:19:35 crc kubenswrapper[4893]: I0121 07:19:35.608046 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-jppdg" podStartSLOduration=2.608025231 podStartE2EDuration="2.608025231s" podCreationTimestamp="2026-01-21 07:19:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:19:35.597218387 +0000 UTC m=+1516.827564289" watchObservedRunningTime="2026-01-21 07:19:35.608025231 +0000 UTC m=+1516.838371133" Jan 21 07:19:35 crc kubenswrapper[4893]: I0121 07:19:35.616797 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 07:19:35 crc kubenswrapper[4893]: I0121 07:19:35.616844 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ff32aa7-cc5f-4ff3-a847-0705c9798386","Type":"ContainerStarted","Data":"b2636c805db57dc1616bd7f0e4998d481055e546849369463e5c45e5c2452955"} Jan 21 07:19:35 crc kubenswrapper[4893]: I0121 07:19:35.616866 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d4ab-account-create-update-m77l4" event={"ID":"c8f4d459-c1f7-43e5-a648-e0ee4bf83fb1","Type":"ContainerStarted","Data":"58e2f5120f235d627f15bd6928f59ff80ac9c471c1db8a255162776b2da8e101"} Jan 21 07:19:35 crc kubenswrapper[4893]: I0121 07:19:35.626910 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.753207478 podStartE2EDuration="7.626887889s" podCreationTimestamp="2026-01-21 07:19:28 +0000 UTC" firstStartedPulling="2026-01-21 07:19:29.389120103 +0000 UTC m=+1510.619466025" 
lastFinishedPulling="2026-01-21 07:19:34.262800514 +0000 UTC m=+1515.493146436" observedRunningTime="2026-01-21 07:19:35.625109447 +0000 UTC m=+1516.855455349" watchObservedRunningTime="2026-01-21 07:19:35.626887889 +0000 UTC m=+1516.857233791" Jan 21 07:19:35 crc kubenswrapper[4893]: I0121 07:19:35.651160 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-d4ab-account-create-update-m77l4" podStartSLOduration=1.6511367030000001 podStartE2EDuration="1.651136703s" podCreationTimestamp="2026-01-21 07:19:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:19:35.639074623 +0000 UTC m=+1516.869420525" watchObservedRunningTime="2026-01-21 07:19:35.651136703 +0000 UTC m=+1516.881482605" Jan 21 07:19:35 crc kubenswrapper[4893]: E0121 07:19:35.799625 4893 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa87df83_9b5d_4ca4_9fd1_b16a3a6cc31e.slice/crio-3cf08623b3d4ea248b91b7f0ba462c043519e8b8478a0fd43d26a24fe8b82d50.scope\": RecentStats: unable to find data in memory cache]" Jan 21 07:19:35 crc kubenswrapper[4893]: I0121 07:19:35.851464 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 07:19:35 crc kubenswrapper[4893]: I0121 07:19:35.851835 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="5c301aea-eebc-47b8-9b2d-1feeeaf939d5" containerName="glance-log" containerID="cri-o://1c87cf0b9edbf49255836fb9a7097f437e002d4ccb33505e57a761bc1b4b2e74" gracePeriod=30 Jan 21 07:19:35 crc kubenswrapper[4893]: I0121 07:19:35.852020 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="5c301aea-eebc-47b8-9b2d-1feeeaf939d5" containerName="glance-httpd" containerID="cri-o://919e654e4d670ba6dd7cf0ef844b18f58a16ea771df0a60b1a7a9136cabb10a9" gracePeriod=30 Jan 21 07:19:36 crc kubenswrapper[4893]: I0121 07:19:36.623772 4893 generic.go:334] "Generic (PLEG): container finished" podID="2ff32aa7-cc5f-4ff3-a847-0705c9798386" containerID="b2636c805db57dc1616bd7f0e4998d481055e546849369463e5c45e5c2452955" exitCode=0 Jan 21 07:19:36 crc kubenswrapper[4893]: I0121 07:19:36.624083 4893 generic.go:334] "Generic (PLEG): container finished" podID="2ff32aa7-cc5f-4ff3-a847-0705c9798386" containerID="a4b53ba1ef698330b3a3a242e4dbc06f6254cb586a9447cee6bd2d4575591ab0" exitCode=2 Jan 21 07:19:36 crc kubenswrapper[4893]: I0121 07:19:36.624097 4893 generic.go:334] "Generic (PLEG): container finished" podID="2ff32aa7-cc5f-4ff3-a847-0705c9798386" containerID="8ddc8045de363665f9e39c90c99ed310620f36a8fc59b0db16d99771976e9236" exitCode=0 Jan 21 07:19:36 crc kubenswrapper[4893]: I0121 07:19:36.623841 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ff32aa7-cc5f-4ff3-a847-0705c9798386","Type":"ContainerDied","Data":"b2636c805db57dc1616bd7f0e4998d481055e546849369463e5c45e5c2452955"} Jan 21 07:19:36 crc kubenswrapper[4893]: I0121 07:19:36.624218 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ff32aa7-cc5f-4ff3-a847-0705c9798386","Type":"ContainerDied","Data":"a4b53ba1ef698330b3a3a242e4dbc06f6254cb586a9447cee6bd2d4575591ab0"} Jan 21 07:19:36 crc kubenswrapper[4893]: I0121 
07:19:36.624240 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ff32aa7-cc5f-4ff3-a847-0705c9798386","Type":"ContainerDied","Data":"8ddc8045de363665f9e39c90c99ed310620f36a8fc59b0db16d99771976e9236"} Jan 21 07:19:36 crc kubenswrapper[4893]: I0121 07:19:36.628111 4893 generic.go:334] "Generic (PLEG): container finished" podID="5c301aea-eebc-47b8-9b2d-1feeeaf939d5" containerID="1c87cf0b9edbf49255836fb9a7097f437e002d4ccb33505e57a761bc1b4b2e74" exitCode=143 Jan 21 07:19:36 crc kubenswrapper[4893]: I0121 07:19:36.628179 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5c301aea-eebc-47b8-9b2d-1feeeaf939d5","Type":"ContainerDied","Data":"1c87cf0b9edbf49255836fb9a7097f437e002d4ccb33505e57a761bc1b4b2e74"} Jan 21 07:19:36 crc kubenswrapper[4893]: I0121 07:19:36.629845 4893 generic.go:334] "Generic (PLEG): container finished" podID="c8f4d459-c1f7-43e5-a648-e0ee4bf83fb1" containerID="e18a345419c29dccee9610df29465e9c2c633cf97f34213f1d0002603db57da4" exitCode=0 Jan 21 07:19:36 crc kubenswrapper[4893]: I0121 07:19:36.629899 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d4ab-account-create-update-m77l4" event={"ID":"c8f4d459-c1f7-43e5-a648-e0ee4bf83fb1","Type":"ContainerDied","Data":"e18a345419c29dccee9610df29465e9c2c633cf97f34213f1d0002603db57da4"} Jan 21 07:19:36 crc kubenswrapper[4893]: I0121 07:19:36.631217 4893 generic.go:334] "Generic (PLEG): container finished" podID="5ab997d7-7cc4-49d1-bb60-9459e0d838e2" containerID="eab6c0154d8e24cc89497e5f14cf2a2d8e6fbe8bbe8e3773a8b5ef15313adcf2" exitCode=0 Jan 21 07:19:36 crc kubenswrapper[4893]: I0121 07:19:36.631264 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-f862-account-create-update-wdbxf" event={"ID":"5ab997d7-7cc4-49d1-bb60-9459e0d838e2","Type":"ContainerDied","Data":"eab6c0154d8e24cc89497e5f14cf2a2d8e6fbe8bbe8e3773a8b5ef15313adcf2"} Jan 21 07:19:36 crc kubenswrapper[4893]: I0121 07:19:36.631283 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-f862-account-create-update-wdbxf" event={"ID":"5ab997d7-7cc4-49d1-bb60-9459e0d838e2","Type":"ContainerStarted","Data":"a84bb62e4d76daa223d2db801aab4f0c65861cbcc2ffb60a60b6468956bdd79f"} Jan 21 07:19:36 crc kubenswrapper[4893]: I0121 07:19:36.632639 4893 generic.go:334] "Generic (PLEG): container finished" podID="5ca5c8f3-90dc-4b2e-989d-0448280e2e48" containerID="df3f218713a1a3e87462926b70814e835ae16e43f96d8ac34ef178ec92248ed4" exitCode=0 Jan 21 07:19:36 crc kubenswrapper[4893]: I0121 07:19:36.632706 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-hjxfg" event={"ID":"5ca5c8f3-90dc-4b2e-989d-0448280e2e48","Type":"ContainerDied","Data":"df3f218713a1a3e87462926b70814e835ae16e43f96d8ac34ef178ec92248ed4"} Jan 21 07:19:36 crc kubenswrapper[4893]: I0121 07:19:36.635616 4893 generic.go:334] "Generic (PLEG): container finished" podID="aa87df83-9b5d-4ca4-9fd1-b16a3a6cc31e" containerID="3cf08623b3d4ea248b91b7f0ba462c043519e8b8478a0fd43d26a24fe8b82d50" exitCode=0 Jan 21 07:19:36 crc kubenswrapper[4893]: I0121 07:19:36.635724 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-jppdg" event={"ID":"aa87df83-9b5d-4ca4-9fd1-b16a3a6cc31e","Type":"ContainerDied","Data":"3cf08623b3d4ea248b91b7f0ba462c043519e8b8478a0fd43d26a24fe8b82d50"} Jan 21 07:19:36 crc kubenswrapper[4893]: I0121 07:19:36.638000 4893 generic.go:334] "Generic (PLEG): container 
finished" podID="9f9e755b-1a33-409b-b4b7-926bcfecb0b5" containerID="66ea28ad52999a6b6a22c95a9e03aa8010e494c9ef28ed1353dec5ea9b2e0e67" exitCode=0 Jan 21 07:19:36 crc kubenswrapper[4893]: I0121 07:19:36.638048 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-nzf4p" event={"ID":"9f9e755b-1a33-409b-b4b7-926bcfecb0b5","Type":"ContainerDied","Data":"66ea28ad52999a6b6a22c95a9e03aa8010e494c9ef28ed1353dec5ea9b2e0e67"} Jan 21 07:19:36 crc kubenswrapper[4893]: I0121 07:19:36.639286 4893 generic.go:334] "Generic (PLEG): container finished" podID="4143e0f4-e3d5-44f7-aafd-38f977694010" containerID="1d916db55c52fdd79903dfcda989284aff74e0a176d6b57776961e238500c55c" exitCode=0 Jan 21 07:19:36 crc kubenswrapper[4893]: I0121 07:19:36.639313 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-daab-account-create-update-c75q5" event={"ID":"4143e0f4-e3d5-44f7-aafd-38f977694010","Type":"ContainerDied","Data":"1d916db55c52fdd79903dfcda989284aff74e0a176d6b57776961e238500c55c"} Jan 21 07:19:36 crc kubenswrapper[4893]: I0121 07:19:36.639326 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-daab-account-create-update-c75q5" event={"ID":"4143e0f4-e3d5-44f7-aafd-38f977694010","Type":"ContainerStarted","Data":"ba592cb4bb0553eea9c1ee8b3b6067612a9a5a6bb637e3b14ce42b34c6792bdc"} Jan 21 07:19:36 crc kubenswrapper[4893]: I0121 07:19:36.850661 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 07:19:36 crc kubenswrapper[4893]: I0121 07:19:36.851008 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="0722c0f6-3b88-4b57-bb19-6e63f97b5392" containerName="glance-log" containerID="cri-o://974c16498e433a77cf1fb9b03c4995a17ba87810adf570b8dd48cbb56941192c" gracePeriod=30 Jan 21 07:19:36 crc kubenswrapper[4893]: I0121 07:19:36.851322 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="0722c0f6-3b88-4b57-bb19-6e63f97b5392" containerName="glance-httpd" containerID="cri-o://a365c989b5d72dff15391ba4d4bc3984ecfd4fd14f666a8c5eb497795f13ddc7" gracePeriod=30 Jan 21 07:19:37 crc kubenswrapper[4893]: I0121 07:19:37.663947 4893 generic.go:334] "Generic (PLEG): container finished" podID="0722c0f6-3b88-4b57-bb19-6e63f97b5392" containerID="974c16498e433a77cf1fb9b03c4995a17ba87810adf570b8dd48cbb56941192c" exitCode=143 Jan 21 07:19:37 crc kubenswrapper[4893]: I0121 07:19:37.664146 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0722c0f6-3b88-4b57-bb19-6e63f97b5392","Type":"ContainerDied","Data":"974c16498e433a77cf1fb9b03c4995a17ba87810adf570b8dd48cbb56941192c"} Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.102385 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-hjxfg" Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.117355 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q98hv\" (UniqueName: \"kubernetes.io/projected/5ca5c8f3-90dc-4b2e-989d-0448280e2e48-kube-api-access-q98hv\") pod \"5ca5c8f3-90dc-4b2e-989d-0448280e2e48\" (UID: \"5ca5c8f3-90dc-4b2e-989d-0448280e2e48\") " Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.117439 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ca5c8f3-90dc-4b2e-989d-0448280e2e48-operator-scripts\") pod \"5ca5c8f3-90dc-4b2e-989d-0448280e2e48\" (UID: \"5ca5c8f3-90dc-4b2e-989d-0448280e2e48\") " Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.118508 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ca5c8f3-90dc-4b2e-989d-0448280e2e48-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5ca5c8f3-90dc-4b2e-989d-0448280e2e48" (UID: "5ca5c8f3-90dc-4b2e-989d-0448280e2e48"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.139014 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ca5c8f3-90dc-4b2e-989d-0448280e2e48-kube-api-access-q98hv" (OuterVolumeSpecName: "kube-api-access-q98hv") pod "5ca5c8f3-90dc-4b2e-989d-0448280e2e48" (UID: "5ca5c8f3-90dc-4b2e-989d-0448280e2e48"). InnerVolumeSpecName "kube-api-access-q98hv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.223723 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ca5c8f3-90dc-4b2e-989d-0448280e2e48-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.223759 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q98hv\" (UniqueName: \"kubernetes.io/projected/5ca5c8f3-90dc-4b2e-989d-0448280e2e48-kube-api-access-q98hv\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.397604 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-daab-account-create-update-c75q5" Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.431537 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ksdb\" (UniqueName: \"kubernetes.io/projected/4143e0f4-e3d5-44f7-aafd-38f977694010-kube-api-access-9ksdb\") pod \"4143e0f4-e3d5-44f7-aafd-38f977694010\" (UID: \"4143e0f4-e3d5-44f7-aafd-38f977694010\") " Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.431933 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4143e0f4-e3d5-44f7-aafd-38f977694010-operator-scripts\") pod \"4143e0f4-e3d5-44f7-aafd-38f977694010\" (UID: \"4143e0f4-e3d5-44f7-aafd-38f977694010\") " Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.432528 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4143e0f4-e3d5-44f7-aafd-38f977694010-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4143e0f4-e3d5-44f7-aafd-38f977694010" (UID: "4143e0f4-e3d5-44f7-aafd-38f977694010"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.432867 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4143e0f4-e3d5-44f7-aafd-38f977694010-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.437726 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-nzf4p" Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.449914 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4143e0f4-e3d5-44f7-aafd-38f977694010-kube-api-access-9ksdb" (OuterVolumeSpecName: "kube-api-access-9ksdb") pod "4143e0f4-e3d5-44f7-aafd-38f977694010" (UID: "4143e0f4-e3d5-44f7-aafd-38f977694010"). InnerVolumeSpecName "kube-api-access-9ksdb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.475917 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-d4ab-account-create-update-m77l4" Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.497515 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-f862-account-create-update-wdbxf" Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.511056 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-jppdg" Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.534654 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ab997d7-7cc4-49d1-bb60-9459e0d838e2-operator-scripts\") pod \"5ab997d7-7cc4-49d1-bb60-9459e0d838e2\" (UID: \"5ab997d7-7cc4-49d1-bb60-9459e0d838e2\") " Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.534755 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f9e755b-1a33-409b-b4b7-926bcfecb0b5-operator-scripts\") pod \"9f9e755b-1a33-409b-b4b7-926bcfecb0b5\" (UID: \"9f9e755b-1a33-409b-b4b7-926bcfecb0b5\") " Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.534794 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa87df83-9b5d-4ca4-9fd1-b16a3a6cc31e-operator-scripts\") pod \"aa87df83-9b5d-4ca4-9fd1-b16a3a6cc31e\" (UID: \"aa87df83-9b5d-4ca4-9fd1-b16a3a6cc31e\") " Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.534870 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pwz7\" (UniqueName: \"kubernetes.io/projected/aa87df83-9b5d-4ca4-9fd1-b16a3a6cc31e-kube-api-access-4pwz7\") pod \"aa87df83-9b5d-4ca4-9fd1-b16a3a6cc31e\" (UID: \"aa87df83-9b5d-4ca4-9fd1-b16a3a6cc31e\") " Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.534954 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtljn\" (UniqueName: \"kubernetes.io/projected/9f9e755b-1a33-409b-b4b7-926bcfecb0b5-kube-api-access-wtljn\") pod \"9f9e755b-1a33-409b-b4b7-926bcfecb0b5\" (UID: \"9f9e755b-1a33-409b-b4b7-926bcfecb0b5\") " Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.534985 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-jpn78\" (UniqueName: \"kubernetes.io/projected/5ab997d7-7cc4-49d1-bb60-9459e0d838e2-kube-api-access-jpn78\") pod \"5ab997d7-7cc4-49d1-bb60-9459e0d838e2\" (UID: \"5ab997d7-7cc4-49d1-bb60-9459e0d838e2\") " Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.535017 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqqpl\" (UniqueName: \"kubernetes.io/projected/c8f4d459-c1f7-43e5-a648-e0ee4bf83fb1-kube-api-access-jqqpl\") pod \"c8f4d459-c1f7-43e5-a648-e0ee4bf83fb1\" (UID: \"c8f4d459-c1f7-43e5-a648-e0ee4bf83fb1\") " Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.535058 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8f4d459-c1f7-43e5-a648-e0ee4bf83fb1-operator-scripts\") pod \"c8f4d459-c1f7-43e5-a648-e0ee4bf83fb1\" (UID: \"c8f4d459-c1f7-43e5-a648-e0ee4bf83fb1\") " Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.535776 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f9e755b-1a33-409b-b4b7-926bcfecb0b5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9f9e755b-1a33-409b-b4b7-926bcfecb0b5" (UID: "9f9e755b-1a33-409b-b4b7-926bcfecb0b5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.535924 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa87df83-9b5d-4ca4-9fd1-b16a3a6cc31e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "aa87df83-9b5d-4ca4-9fd1-b16a3a6cc31e" (UID: "aa87df83-9b5d-4ca4-9fd1-b16a3a6cc31e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.535920 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ab997d7-7cc4-49d1-bb60-9459e0d838e2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5ab997d7-7cc4-49d1-bb60-9459e0d838e2" (UID: "5ab997d7-7cc4-49d1-bb60-9459e0d838e2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.536147 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8f4d459-c1f7-43e5-a648-e0ee4bf83fb1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c8f4d459-c1f7-43e5-a648-e0ee4bf83fb1" (UID: "c8f4d459-c1f7-43e5-a648-e0ee4bf83fb1"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.536814 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ab997d7-7cc4-49d1-bb60-9459e0d838e2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.536838 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f9e755b-1a33-409b-b4b7-926bcfecb0b5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.536849 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa87df83-9b5d-4ca4-9fd1-b16a3a6cc31e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.536860 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ksdb\" (UniqueName: \"kubernetes.io/projected/4143e0f4-e3d5-44f7-aafd-38f977694010-kube-api-access-9ksdb\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.536870 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8f4d459-c1f7-43e5-a648-e0ee4bf83fb1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.546921 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f9e755b-1a33-409b-b4b7-926bcfecb0b5-kube-api-access-wtljn" (OuterVolumeSpecName: "kube-api-access-wtljn") pod "9f9e755b-1a33-409b-b4b7-926bcfecb0b5" (UID: "9f9e755b-1a33-409b-b4b7-926bcfecb0b5"). InnerVolumeSpecName "kube-api-access-wtljn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.546996 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ab997d7-7cc4-49d1-bb60-9459e0d838e2-kube-api-access-jpn78" (OuterVolumeSpecName: "kube-api-access-jpn78") pod "5ab997d7-7cc4-49d1-bb60-9459e0d838e2" (UID: "5ab997d7-7cc4-49d1-bb60-9459e0d838e2"). InnerVolumeSpecName "kube-api-access-jpn78". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.547082 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8f4d459-c1f7-43e5-a648-e0ee4bf83fb1-kube-api-access-jqqpl" (OuterVolumeSpecName: "kube-api-access-jqqpl") pod "c8f4d459-c1f7-43e5-a648-e0ee4bf83fb1" (UID: "c8f4d459-c1f7-43e5-a648-e0ee4bf83fb1"). InnerVolumeSpecName "kube-api-access-jqqpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.553765 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa87df83-9b5d-4ca4-9fd1-b16a3a6cc31e-kube-api-access-4pwz7" (OuterVolumeSpecName: "kube-api-access-4pwz7") pod "aa87df83-9b5d-4ca4-9fd1-b16a3a6cc31e" (UID: "aa87df83-9b5d-4ca4-9fd1-b16a3a6cc31e"). InnerVolumeSpecName "kube-api-access-4pwz7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.639244 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pwz7\" (UniqueName: \"kubernetes.io/projected/aa87df83-9b5d-4ca4-9fd1-b16a3a6cc31e-kube-api-access-4pwz7\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.639279 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtljn\" (UniqueName: \"kubernetes.io/projected/9f9e755b-1a33-409b-b4b7-926bcfecb0b5-kube-api-access-wtljn\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.639295 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpn78\" (UniqueName: \"kubernetes.io/projected/5ab997d7-7cc4-49d1-bb60-9459e0d838e2-kube-api-access-jpn78\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.639303 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqqpl\" (UniqueName: \"kubernetes.io/projected/c8f4d459-c1f7-43e5-a648-e0ee4bf83fb1-kube-api-access-jqqpl\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.674393 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-f862-account-create-update-wdbxf" Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.674392 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-f862-account-create-update-wdbxf" event={"ID":"5ab997d7-7cc4-49d1-bb60-9459e0d838e2","Type":"ContainerDied","Data":"a84bb62e4d76daa223d2db801aab4f0c65861cbcc2ffb60a60b6468956bdd79f"} Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.674539 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a84bb62e4d76daa223d2db801aab4f0c65861cbcc2ffb60a60b6468956bdd79f" Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.676573 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-hjxfg" event={"ID":"5ca5c8f3-90dc-4b2e-989d-0448280e2e48","Type":"ContainerDied","Data":"ac8cda60eb4d146bb05301fb46260d1751e8a994582c7c4378544ed2c2b1679f"} Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.676702 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac8cda60eb4d146bb05301fb46260d1751e8a994582c7c4378544ed2c2b1679f" Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.676827 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-hjxfg" Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.678406 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-jppdg" event={"ID":"aa87df83-9b5d-4ca4-9fd1-b16a3a6cc31e","Type":"ContainerDied","Data":"09f7a5166f983bb7323e3eadf2f056788f8fcf1f5ccd8fd27cec7948e480b5f9"} Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.678440 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09f7a5166f983bb7323e3eadf2f056788f8fcf1f5ccd8fd27cec7948e480b5f9" Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.678506 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-jppdg" Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.680273 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-nzf4p" event={"ID":"9f9e755b-1a33-409b-b4b7-926bcfecb0b5","Type":"ContainerDied","Data":"803e737ec89f22da8cda456dd4be3042d27b3037fcf45d2411691307415834c7"} Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.680318 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="803e737ec89f22da8cda456dd4be3042d27b3037fcf45d2411691307415834c7" Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.680396 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-nzf4p" Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.685376 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-daab-account-create-update-c75q5" event={"ID":"4143e0f4-e3d5-44f7-aafd-38f977694010","Type":"ContainerDied","Data":"ba592cb4bb0553eea9c1ee8b3b6067612a9a5a6bb637e3b14ce42b34c6792bdc"} Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.685409 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba592cb4bb0553eea9c1ee8b3b6067612a9a5a6bb637e3b14ce42b34c6792bdc" Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.685457 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-daab-account-create-update-c75q5" Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.687545 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d4ab-account-create-update-m77l4" event={"ID":"c8f4d459-c1f7-43e5-a648-e0ee4bf83fb1","Type":"ContainerDied","Data":"58e2f5120f235d627f15bd6928f59ff80ac9c471c1db8a255162776b2da8e101"} Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.687568 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58e2f5120f235d627f15bd6928f59ff80ac9c471c1db8a255162776b2da8e101" Jan 21 07:19:38 crc kubenswrapper[4893]: I0121 07:19:38.687605 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-d4ab-account-create-update-m77l4" Jan 21 07:19:39 crc kubenswrapper[4893]: I0121 07:19:39.799988 4893 generic.go:334] "Generic (PLEG): container finished" podID="5c301aea-eebc-47b8-9b2d-1feeeaf939d5" containerID="919e654e4d670ba6dd7cf0ef844b18f58a16ea771df0a60b1a7a9136cabb10a9" exitCode=0 Jan 21 07:19:39 crc kubenswrapper[4893]: I0121 07:19:39.800022 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5c301aea-eebc-47b8-9b2d-1feeeaf939d5","Type":"ContainerDied","Data":"919e654e4d670ba6dd7cf0ef844b18f58a16ea771df0a60b1a7a9136cabb10a9"} Jan 21 07:19:39 crc kubenswrapper[4893]: I0121 07:19:39.921116 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 07:19:39 crc kubenswrapper[4893]: I0121 07:19:39.983950 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c301aea-eebc-47b8-9b2d-1feeeaf939d5-combined-ca-bundle\") pod \"5c301aea-eebc-47b8-9b2d-1feeeaf939d5\" (UID: \"5c301aea-eebc-47b8-9b2d-1feeeaf939d5\") " Jan 21 07:19:39 crc kubenswrapper[4893]: I0121 07:19:39.984021 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"5c301aea-eebc-47b8-9b2d-1feeeaf939d5\" (UID: \"5c301aea-eebc-47b8-9b2d-1feeeaf939d5\") " Jan 21 07:19:39 crc kubenswrapper[4893]: I0121 07:19:39.984094 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtt65\" (UniqueName: \"kubernetes.io/projected/5c301aea-eebc-47b8-9b2d-1feeeaf939d5-kube-api-access-jtt65\") pod \"5c301aea-eebc-47b8-9b2d-1feeeaf939d5\" (UID: \"5c301aea-eebc-47b8-9b2d-1feeeaf939d5\") " Jan 21 07:19:39 crc kubenswrapper[4893]: I0121 07:19:39.984127 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c301aea-eebc-47b8-9b2d-1feeeaf939d5-public-tls-certs\") pod \"5c301aea-eebc-47b8-9b2d-1feeeaf939d5\" (UID: \"5c301aea-eebc-47b8-9b2d-1feeeaf939d5\") " Jan 21 07:19:39 crc kubenswrapper[4893]: I0121 07:19:39.984156 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5c301aea-eebc-47b8-9b2d-1feeeaf939d5-httpd-run\") pod \"5c301aea-eebc-47b8-9b2d-1feeeaf939d5\" (UID: \"5c301aea-eebc-47b8-9b2d-1feeeaf939d5\") " Jan 21 07:19:39 crc kubenswrapper[4893]: I0121 07:19:39.984285 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c301aea-eebc-47b8-9b2d-1feeeaf939d5-scripts\") pod \"5c301aea-eebc-47b8-9b2d-1feeeaf939d5\" (UID: \"5c301aea-eebc-47b8-9b2d-1feeeaf939d5\") " Jan 21 07:19:39 crc kubenswrapper[4893]: I0121 07:19:39.984340 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c301aea-eebc-47b8-9b2d-1feeeaf939d5-logs\") pod \"5c301aea-eebc-47b8-9b2d-1feeeaf939d5\" (UID: \"5c301aea-eebc-47b8-9b2d-1feeeaf939d5\") " Jan 21 07:19:39 crc kubenswrapper[4893]: I0121 07:19:39.984406 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c301aea-eebc-47b8-9b2d-1feeeaf939d5-config-data\") pod \"5c301aea-eebc-47b8-9b2d-1feeeaf939d5\" (UID: \"5c301aea-eebc-47b8-9b2d-1feeeaf939d5\") " Jan 21 07:19:39 crc kubenswrapper[4893]: I0121 07:19:39.991273 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c301aea-eebc-47b8-9b2d-1feeeaf939d5-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5c301aea-eebc-47b8-9b2d-1feeeaf939d5" (UID: "5c301aea-eebc-47b8-9b2d-1feeeaf939d5"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:19:39 crc kubenswrapper[4893]: I0121 07:19:39.994159 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c301aea-eebc-47b8-9b2d-1feeeaf939d5-logs" (OuterVolumeSpecName: "logs") pod "5c301aea-eebc-47b8-9b2d-1feeeaf939d5" (UID: "5c301aea-eebc-47b8-9b2d-1feeeaf939d5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.000916 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "5c301aea-eebc-47b8-9b2d-1feeeaf939d5" (UID: "5c301aea-eebc-47b8-9b2d-1feeeaf939d5"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.001666 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c301aea-eebc-47b8-9b2d-1feeeaf939d5-kube-api-access-jtt65" (OuterVolumeSpecName: "kube-api-access-jtt65") pod "5c301aea-eebc-47b8-9b2d-1feeeaf939d5" (UID: "5c301aea-eebc-47b8-9b2d-1feeeaf939d5"). InnerVolumeSpecName "kube-api-access-jtt65". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.007290 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c301aea-eebc-47b8-9b2d-1feeeaf939d5-scripts" (OuterVolumeSpecName: "scripts") pod "5c301aea-eebc-47b8-9b2d-1feeeaf939d5" (UID: "5c301aea-eebc-47b8-9b2d-1feeeaf939d5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.069434 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c301aea-eebc-47b8-9b2d-1feeeaf939d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c301aea-eebc-47b8-9b2d-1feeeaf939d5" (UID: "5c301aea-eebc-47b8-9b2d-1feeeaf939d5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.083863 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c301aea-eebc-47b8-9b2d-1feeeaf939d5-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5c301aea-eebc-47b8-9b2d-1feeeaf939d5" (UID: "5c301aea-eebc-47b8-9b2d-1feeeaf939d5"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.090514 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtt65\" (UniqueName: \"kubernetes.io/projected/5c301aea-eebc-47b8-9b2d-1feeeaf939d5-kube-api-access-jtt65\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.090547 4893 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c301aea-eebc-47b8-9b2d-1feeeaf939d5-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.090561 4893 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5c301aea-eebc-47b8-9b2d-1feeeaf939d5-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.090568 4893 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c301aea-eebc-47b8-9b2d-1feeeaf939d5-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.090579 4893 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c301aea-eebc-47b8-9b2d-1feeeaf939d5-logs\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.090587 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c301aea-eebc-47b8-9b2d-1feeeaf939d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.090618 4893 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.109983 4893 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.118709 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c301aea-eebc-47b8-9b2d-1feeeaf939d5-config-data" (OuterVolumeSpecName: "config-data") pod "5c301aea-eebc-47b8-9b2d-1feeeaf939d5" (UID: "5c301aea-eebc-47b8-9b2d-1feeeaf939d5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.192310 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c301aea-eebc-47b8-9b2d-1feeeaf939d5-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.192354 4893 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.544786 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.599414 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0722c0f6-3b88-4b57-bb19-6e63f97b5392-combined-ca-bundle\") pod \"0722c0f6-3b88-4b57-bb19-6e63f97b5392\" (UID: \"0722c0f6-3b88-4b57-bb19-6e63f97b5392\") " Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.599979 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwp8p\" (UniqueName: \"kubernetes.io/projected/0722c0f6-3b88-4b57-bb19-6e63f97b5392-kube-api-access-vwp8p\") pod \"0722c0f6-3b88-4b57-bb19-6e63f97b5392\" (UID: \"0722c0f6-3b88-4b57-bb19-6e63f97b5392\") " Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.600010 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0722c0f6-3b88-4b57-bb19-6e63f97b5392-logs\") pod \"0722c0f6-3b88-4b57-bb19-6e63f97b5392\" (UID: \"0722c0f6-3b88-4b57-bb19-6e63f97b5392\") " Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.600044 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0722c0f6-3b88-4b57-bb19-6e63f97b5392-internal-tls-certs\") pod \"0722c0f6-3b88-4b57-bb19-6e63f97b5392\" (UID: \"0722c0f6-3b88-4b57-bb19-6e63f97b5392\") " Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.600117 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"0722c0f6-3b88-4b57-bb19-6e63f97b5392\" (UID: \"0722c0f6-3b88-4b57-bb19-6e63f97b5392\") " Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.600218 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0722c0f6-3b88-4b57-bb19-6e63f97b5392-httpd-run\") pod \"0722c0f6-3b88-4b57-bb19-6e63f97b5392\" (UID: \"0722c0f6-3b88-4b57-bb19-6e63f97b5392\") " Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.600248 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0722c0f6-3b88-4b57-bb19-6e63f97b5392-config-data\") pod \"0722c0f6-3b88-4b57-bb19-6e63f97b5392\" (UID: \"0722c0f6-3b88-4b57-bb19-6e63f97b5392\") " Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.600310 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0722c0f6-3b88-4b57-bb19-6e63f97b5392-scripts\") pod \"0722c0f6-3b88-4b57-bb19-6e63f97b5392\" (UID: \"0722c0f6-3b88-4b57-bb19-6e63f97b5392\") " Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.601045 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0722c0f6-3b88-4b57-bb19-6e63f97b5392-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0722c0f6-3b88-4b57-bb19-6e63f97b5392" (UID: "0722c0f6-3b88-4b57-bb19-6e63f97b5392"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.601191 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0722c0f6-3b88-4b57-bb19-6e63f97b5392-logs" (OuterVolumeSpecName: "logs") pod "0722c0f6-3b88-4b57-bb19-6e63f97b5392" (UID: "0722c0f6-3b88-4b57-bb19-6e63f97b5392"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.607627 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "0722c0f6-3b88-4b57-bb19-6e63f97b5392" (UID: "0722c0f6-3b88-4b57-bb19-6e63f97b5392"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.610178 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0722c0f6-3b88-4b57-bb19-6e63f97b5392-kube-api-access-vwp8p" (OuterVolumeSpecName: "kube-api-access-vwp8p") pod "0722c0f6-3b88-4b57-bb19-6e63f97b5392" (UID: "0722c0f6-3b88-4b57-bb19-6e63f97b5392"). InnerVolumeSpecName "kube-api-access-vwp8p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.617972 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0722c0f6-3b88-4b57-bb19-6e63f97b5392-scripts" (OuterVolumeSpecName: "scripts") pod "0722c0f6-3b88-4b57-bb19-6e63f97b5392" (UID: "0722c0f6-3b88-4b57-bb19-6e63f97b5392"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.634853 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0722c0f6-3b88-4b57-bb19-6e63f97b5392-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0722c0f6-3b88-4b57-bb19-6e63f97b5392" (UID: "0722c0f6-3b88-4b57-bb19-6e63f97b5392"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.687104 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0722c0f6-3b88-4b57-bb19-6e63f97b5392-config-data" (OuterVolumeSpecName: "config-data") pod "0722c0f6-3b88-4b57-bb19-6e63f97b5392" (UID: "0722c0f6-3b88-4b57-bb19-6e63f97b5392"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.695924 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0722c0f6-3b88-4b57-bb19-6e63f97b5392-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0722c0f6-3b88-4b57-bb19-6e63f97b5392" (UID: "0722c0f6-3b88-4b57-bb19-6e63f97b5392"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.702921 4893 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.702964 4893 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0722c0f6-3b88-4b57-bb19-6e63f97b5392-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.702980 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0722c0f6-3b88-4b57-bb19-6e63f97b5392-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.702993 4893 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0722c0f6-3b88-4b57-bb19-6e63f97b5392-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.703007 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0722c0f6-3b88-4b57-bb19-6e63f97b5392-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.703021 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwp8p\" (UniqueName: \"kubernetes.io/projected/0722c0f6-3b88-4b57-bb19-6e63f97b5392-kube-api-access-vwp8p\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.703036 4893 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0722c0f6-3b88-4b57-bb19-6e63f97b5392-logs\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.703047 4893 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0722c0f6-3b88-4b57-bb19-6e63f97b5392-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.728500 4893 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.804353 4893 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.811782 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5c301aea-eebc-47b8-9b2d-1feeeaf939d5","Type":"ContainerDied","Data":"a85a7a069fa562f294b454b9296ab09d664da5a688536b6cd1874af7563b896d"} Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.811834 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.811852 4893 scope.go:117] "RemoveContainer" containerID="919e654e4d670ba6dd7cf0ef844b18f58a16ea771df0a60b1a7a9136cabb10a9" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.814351 4893 generic.go:334] "Generic (PLEG): container finished" podID="0722c0f6-3b88-4b57-bb19-6e63f97b5392" containerID="a365c989b5d72dff15391ba4d4bc3984ecfd4fd14f666a8c5eb497795f13ddc7" exitCode=0 Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.814387 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0722c0f6-3b88-4b57-bb19-6e63f97b5392","Type":"ContainerDied","Data":"a365c989b5d72dff15391ba4d4bc3984ecfd4fd14f666a8c5eb497795f13ddc7"} Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.814413 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0722c0f6-3b88-4b57-bb19-6e63f97b5392","Type":"ContainerDied","Data":"6d5dd010e442bfc1b68a5c0591b8a00bad8e597e3479c615a70c04557f4b9128"} Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.814494 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.838412 4893 scope.go:117] "RemoveContainer" containerID="1c87cf0b9edbf49255836fb9a7097f437e002d4ccb33505e57a761bc1b4b2e74" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.858978 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.877258 4893 scope.go:117] "RemoveContainer" containerID="a365c989b5d72dff15391ba4d4bc3984ecfd4fd14f666a8c5eb497795f13ddc7" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.877385 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.886750 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.898727 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.921713 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 07:19:40 crc kubenswrapper[4893]: E0121 07:19:40.922128 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ca5c8f3-90dc-4b2e-989d-0448280e2e48" containerName="mariadb-database-create" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.922150 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ca5c8f3-90dc-4b2e-989d-0448280e2e48" containerName="mariadb-database-create" Jan 21 07:19:40 crc kubenswrapper[4893]: E0121 07:19:40.922167 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ab997d7-7cc4-49d1-bb60-9459e0d838e2" containerName="mariadb-account-create-update" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.922178 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ab997d7-7cc4-49d1-bb60-9459e0d838e2" containerName="mariadb-account-create-update" Jan 21 07:19:40 crc kubenswrapper[4893]: E0121 07:19:40.922196 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4143e0f4-e3d5-44f7-aafd-38f977694010" 
containerName="mariadb-account-create-update" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.922204 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="4143e0f4-e3d5-44f7-aafd-38f977694010" containerName="mariadb-account-create-update" Jan 21 07:19:40 crc kubenswrapper[4893]: E0121 07:19:40.922224 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0722c0f6-3b88-4b57-bb19-6e63f97b5392" containerName="glance-log" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.922232 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="0722c0f6-3b88-4b57-bb19-6e63f97b5392" containerName="glance-log" Jan 21 07:19:40 crc kubenswrapper[4893]: E0121 07:19:40.922240 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0722c0f6-3b88-4b57-bb19-6e63f97b5392" containerName="glance-httpd" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.922247 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="0722c0f6-3b88-4b57-bb19-6e63f97b5392" containerName="glance-httpd" Jan 21 07:19:40 crc kubenswrapper[4893]: E0121 07:19:40.922257 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa87df83-9b5d-4ca4-9fd1-b16a3a6cc31e" containerName="mariadb-database-create" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.922264 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa87df83-9b5d-4ca4-9fd1-b16a3a6cc31e" containerName="mariadb-database-create" Jan 21 07:19:40 crc kubenswrapper[4893]: E0121 07:19:40.922281 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c301aea-eebc-47b8-9b2d-1feeeaf939d5" containerName="glance-httpd" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.922289 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c301aea-eebc-47b8-9b2d-1feeeaf939d5" containerName="glance-httpd" Jan 21 07:19:40 crc kubenswrapper[4893]: E0121 07:19:40.922299 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8f4d459-c1f7-43e5-a648-e0ee4bf83fb1" containerName="mariadb-account-create-update" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.922305 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8f4d459-c1f7-43e5-a648-e0ee4bf83fb1" containerName="mariadb-account-create-update" Jan 21 07:19:40 crc kubenswrapper[4893]: E0121 07:19:40.922313 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c301aea-eebc-47b8-9b2d-1feeeaf939d5" containerName="glance-log" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.922318 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c301aea-eebc-47b8-9b2d-1feeeaf939d5" containerName="glance-log" Jan 21 07:19:40 crc kubenswrapper[4893]: E0121 07:19:40.922335 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f9e755b-1a33-409b-b4b7-926bcfecb0b5" containerName="mariadb-database-create" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.922340 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f9e755b-1a33-409b-b4b7-926bcfecb0b5" containerName="mariadb-database-create" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.922498 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8f4d459-c1f7-43e5-a648-e0ee4bf83fb1" containerName="mariadb-account-create-update" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.922508 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c301aea-eebc-47b8-9b2d-1feeeaf939d5" containerName="glance-httpd" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.922515 4893 
memory_manager.go:354] "RemoveStaleState removing state" podUID="5ca5c8f3-90dc-4b2e-989d-0448280e2e48" containerName="mariadb-database-create" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.922524 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="0722c0f6-3b88-4b57-bb19-6e63f97b5392" containerName="glance-log" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.922535 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c301aea-eebc-47b8-9b2d-1feeeaf939d5" containerName="glance-log" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.922546 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f9e755b-1a33-409b-b4b7-926bcfecb0b5" containerName="mariadb-database-create" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.922557 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa87df83-9b5d-4ca4-9fd1-b16a3a6cc31e" containerName="mariadb-database-create" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.922570 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ab997d7-7cc4-49d1-bb60-9459e0d838e2" containerName="mariadb-account-create-update" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.922583 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="0722c0f6-3b88-4b57-bb19-6e63f97b5392" containerName="glance-httpd" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.922595 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="4143e0f4-e3d5-44f7-aafd-38f977694010" containerName="mariadb-account-create-update" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.923528 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.926551 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-v2k8v" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.926853 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.927129 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.927413 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.939157 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.959532 4893 scope.go:117] "RemoveContainer" containerID="974c16498e433a77cf1fb9b03c4995a17ba87810adf570b8dd48cbb56941192c" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.966983 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.970737 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.976575 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 21 07:19:40 crc kubenswrapper[4893]: I0121 07:19:40.976855 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 21 07:19:41 crc kubenswrapper[4893]: I0121 07:19:41.008398 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 07:19:41 crc kubenswrapper[4893]: I0121 07:19:41.034868 4893 scope.go:117] "RemoveContainer" containerID="a365c989b5d72dff15391ba4d4bc3984ecfd4fd14f666a8c5eb497795f13ddc7" Jan 21 07:19:41 crc kubenswrapper[4893]: E0121 07:19:41.035461 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a365c989b5d72dff15391ba4d4bc3984ecfd4fd14f666a8c5eb497795f13ddc7\": container with ID starting with a365c989b5d72dff15391ba4d4bc3984ecfd4fd14f666a8c5eb497795f13ddc7 not found: ID does not exist" containerID="a365c989b5d72dff15391ba4d4bc3984ecfd4fd14f666a8c5eb497795f13ddc7" Jan 21 07:19:41 crc kubenswrapper[4893]: I0121 07:19:41.035507 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a365c989b5d72dff15391ba4d4bc3984ecfd4fd14f666a8c5eb497795f13ddc7"} err="failed to get container status \"a365c989b5d72dff15391ba4d4bc3984ecfd4fd14f666a8c5eb497795f13ddc7\": rpc error: code = NotFound desc = could not find container \"a365c989b5d72dff15391ba4d4bc3984ecfd4fd14f666a8c5eb497795f13ddc7\": container with ID starting with a365c989b5d72dff15391ba4d4bc3984ecfd4fd14f666a8c5eb497795f13ddc7 not found: ID does not exist" Jan 21 07:19:41 crc kubenswrapper[4893]: I0121 07:19:41.035531 4893 scope.go:117] "RemoveContainer" containerID="974c16498e433a77cf1fb9b03c4995a17ba87810adf570b8dd48cbb56941192c" Jan 21 07:19:41 crc kubenswrapper[4893]: E0121 07:19:41.044399 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"974c16498e433a77cf1fb9b03c4995a17ba87810adf570b8dd48cbb56941192c\": container with ID starting with 974c16498e433a77cf1fb9b03c4995a17ba87810adf570b8dd48cbb56941192c not found: ID does not exist" containerID="974c16498e433a77cf1fb9b03c4995a17ba87810adf570b8dd48cbb56941192c" Jan 21 07:19:41 crc kubenswrapper[4893]: I0121 07:19:41.044477 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"974c16498e433a77cf1fb9b03c4995a17ba87810adf570b8dd48cbb56941192c"} err="failed to get container status \"974c16498e433a77cf1fb9b03c4995a17ba87810adf570b8dd48cbb56941192c\": rpc error: code = NotFound desc = could not find container \"974c16498e433a77cf1fb9b03c4995a17ba87810adf570b8dd48cbb56941192c\": container with ID starting with 974c16498e433a77cf1fb9b03c4995a17ba87810adf570b8dd48cbb56941192c not found: ID does not exist" Jan 21 07:19:41 crc kubenswrapper[4893]: I0121 07:19:41.112459 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63916786-c676-4695-84a1-3d3be685de16-scripts\") pod \"glance-default-internal-api-0\" (UID: \"63916786-c676-4695-84a1-3d3be685de16\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:19:41 crc kubenswrapper[4893]: I0121 07:19:41.112660 4893 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/45545422-414a-433a-9de9-fbfb6e03add3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"45545422-414a-433a-9de9-fbfb6e03add3\") " pod="openstack/glance-default-external-api-0" Jan 21 07:19:41 crc kubenswrapper[4893]: I0121 07:19:41.112723 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"63916786-c676-4695-84a1-3d3be685de16\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:19:41 crc kubenswrapper[4893]: I0121 07:19:41.112758 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"45545422-414a-433a-9de9-fbfb6e03add3\") " pod="openstack/glance-default-external-api-0" Jan 21 07:19:41 crc kubenswrapper[4893]: I0121 07:19:41.112803 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45545422-414a-433a-9de9-fbfb6e03add3-logs\") pod \"glance-default-external-api-0\" (UID: \"45545422-414a-433a-9de9-fbfb6e03add3\") " pod="openstack/glance-default-external-api-0" Jan 21 07:19:41 crc kubenswrapper[4893]: I0121 07:19:41.112823 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63916786-c676-4695-84a1-3d3be685de16-logs\") pod \"glance-default-internal-api-0\" (UID: \"63916786-c676-4695-84a1-3d3be685de16\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:19:41 crc kubenswrapper[4893]: I0121 07:19:41.112887 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63916786-c676-4695-84a1-3d3be685de16-config-data\") pod \"glance-default-internal-api-0\" (UID: \"63916786-c676-4695-84a1-3d3be685de16\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:19:41 crc kubenswrapper[4893]: I0121 07:19:41.112994 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63916786-c676-4695-84a1-3d3be685de16-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"63916786-c676-4695-84a1-3d3be685de16\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:19:41 crc kubenswrapper[4893]: I0121 07:19:41.113046 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hpx8\" (UniqueName: \"kubernetes.io/projected/63916786-c676-4695-84a1-3d3be685de16-kube-api-access-9hpx8\") pod \"glance-default-internal-api-0\" (UID: \"63916786-c676-4695-84a1-3d3be685de16\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:19:41 crc kubenswrapper[4893]: I0121 07:19:41.113159 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45545422-414a-433a-9de9-fbfb6e03add3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"45545422-414a-433a-9de9-fbfb6e03add3\") " pod="openstack/glance-default-external-api-0" Jan 21 07:19:41 crc kubenswrapper[4893]: 
I0121 07:19:41.113189 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/63916786-c676-4695-84a1-3d3be685de16-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"63916786-c676-4695-84a1-3d3be685de16\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:19:41 crc kubenswrapper[4893]: I0121 07:19:41.113227 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/63916786-c676-4695-84a1-3d3be685de16-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"63916786-c676-4695-84a1-3d3be685de16\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:19:41 crc kubenswrapper[4893]: I0121 07:19:41.113286 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8wc9\" (UniqueName: \"kubernetes.io/projected/45545422-414a-433a-9de9-fbfb6e03add3-kube-api-access-f8wc9\") pod \"glance-default-external-api-0\" (UID: \"45545422-414a-433a-9de9-fbfb6e03add3\") " pod="openstack/glance-default-external-api-0" Jan 21 07:19:41 crc kubenswrapper[4893]: I0121 07:19:41.113334 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45545422-414a-433a-9de9-fbfb6e03add3-scripts\") pod \"glance-default-external-api-0\" (UID: \"45545422-414a-433a-9de9-fbfb6e03add3\") " pod="openstack/glance-default-external-api-0" Jan 21 07:19:41 crc kubenswrapper[4893]: I0121 07:19:41.113348 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45545422-414a-433a-9de9-fbfb6e03add3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"45545422-414a-433a-9de9-fbfb6e03add3\") " pod="openstack/glance-default-external-api-0" Jan 21 07:19:41 crc kubenswrapper[4893]: I0121 07:19:41.113386 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45545422-414a-433a-9de9-fbfb6e03add3-config-data\") pod \"glance-default-external-api-0\" (UID: \"45545422-414a-433a-9de9-fbfb6e03add3\") " pod="openstack/glance-default-external-api-0" Jan 21 07:19:41 crc kubenswrapper[4893]: I0121 07:19:41.215601 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63916786-c676-4695-84a1-3d3be685de16-scripts\") pod \"glance-default-internal-api-0\" (UID: \"63916786-c676-4695-84a1-3d3be685de16\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:19:41 crc kubenswrapper[4893]: I0121 07:19:41.215685 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/45545422-414a-433a-9de9-fbfb6e03add3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"45545422-414a-433a-9de9-fbfb6e03add3\") " pod="openstack/glance-default-external-api-0" Jan 21 07:19:41 crc kubenswrapper[4893]: I0121 07:19:41.215705 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"63916786-c676-4695-84a1-3d3be685de16\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:19:41 crc 
kubenswrapper[4893]: I0121 07:19:41.215726 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"45545422-414a-433a-9de9-fbfb6e03add3\") " pod="openstack/glance-default-external-api-0" Jan 21 07:19:41 crc kubenswrapper[4893]: I0121 07:19:41.215751 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45545422-414a-433a-9de9-fbfb6e03add3-logs\") pod \"glance-default-external-api-0\" (UID: \"45545422-414a-433a-9de9-fbfb6e03add3\") " pod="openstack/glance-default-external-api-0" Jan 21 07:19:41 crc kubenswrapper[4893]: I0121 07:19:41.215768 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63916786-c676-4695-84a1-3d3be685de16-logs\") pod \"glance-default-internal-api-0\" (UID: \"63916786-c676-4695-84a1-3d3be685de16\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:19:41 crc kubenswrapper[4893]: I0121 07:19:41.215790 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63916786-c676-4695-84a1-3d3be685de16-config-data\") pod \"glance-default-internal-api-0\" (UID: \"63916786-c676-4695-84a1-3d3be685de16\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:19:41 crc kubenswrapper[4893]: I0121 07:19:41.215826 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63916786-c676-4695-84a1-3d3be685de16-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"63916786-c676-4695-84a1-3d3be685de16\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:19:41 crc kubenswrapper[4893]: I0121 07:19:41.215848 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hpx8\" (UniqueName: \"kubernetes.io/projected/63916786-c676-4695-84a1-3d3be685de16-kube-api-access-9hpx8\") pod \"glance-default-internal-api-0\" (UID: \"63916786-c676-4695-84a1-3d3be685de16\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:19:41 crc kubenswrapper[4893]: I0121 07:19:41.215892 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45545422-414a-433a-9de9-fbfb6e03add3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"45545422-414a-433a-9de9-fbfb6e03add3\") " pod="openstack/glance-default-external-api-0" Jan 21 07:19:41 crc kubenswrapper[4893]: I0121 07:19:41.215912 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/63916786-c676-4695-84a1-3d3be685de16-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"63916786-c676-4695-84a1-3d3be685de16\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:19:41 crc kubenswrapper[4893]: I0121 07:19:41.215941 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/63916786-c676-4695-84a1-3d3be685de16-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"63916786-c676-4695-84a1-3d3be685de16\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:19:41 crc kubenswrapper[4893]: I0121 07:19:41.215967 4893 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-f8wc9\" (UniqueName: \"kubernetes.io/projected/45545422-414a-433a-9de9-fbfb6e03add3-kube-api-access-f8wc9\") pod \"glance-default-external-api-0\" (UID: \"45545422-414a-433a-9de9-fbfb6e03add3\") " pod="openstack/glance-default-external-api-0" Jan 21 07:19:41 crc kubenswrapper[4893]: I0121 07:19:41.215992 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45545422-414a-433a-9de9-fbfb6e03add3-scripts\") pod \"glance-default-external-api-0\" (UID: \"45545422-414a-433a-9de9-fbfb6e03add3\") " pod="openstack/glance-default-external-api-0" Jan 21 07:19:41 crc kubenswrapper[4893]: I0121 07:19:41.216009 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45545422-414a-433a-9de9-fbfb6e03add3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"45545422-414a-433a-9de9-fbfb6e03add3\") " pod="openstack/glance-default-external-api-0" Jan 21 07:19:41 crc kubenswrapper[4893]: I0121 07:19:41.216033 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45545422-414a-433a-9de9-fbfb6e03add3-config-data\") pod \"glance-default-external-api-0\" (UID: \"45545422-414a-433a-9de9-fbfb6e03add3\") " pod="openstack/glance-default-external-api-0" Jan 21 07:19:41 crc kubenswrapper[4893]: I0121 07:19:41.216279 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/45545422-414a-433a-9de9-fbfb6e03add3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"45545422-414a-433a-9de9-fbfb6e03add3\") " pod="openstack/glance-default-external-api-0" Jan 21 07:19:41 crc kubenswrapper[4893]: I0121 07:19:41.216829 4893 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"63916786-c676-4695-84a1-3d3be685de16\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Jan 21 07:19:41 crc kubenswrapper[4893]: I0121 07:19:41.218900 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63916786-c676-4695-84a1-3d3be685de16-logs\") pod \"glance-default-internal-api-0\" (UID: \"63916786-c676-4695-84a1-3d3be685de16\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:19:41 crc kubenswrapper[4893]: I0121 07:19:41.219106 4893 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"45545422-414a-433a-9de9-fbfb6e03add3\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Jan 21 07:19:41 crc kubenswrapper[4893]: I0121 07:19:41.219623 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63916786-c676-4695-84a1-3d3be685de16-scripts\") pod \"glance-default-internal-api-0\" (UID: \"63916786-c676-4695-84a1-3d3be685de16\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:19:41 crc kubenswrapper[4893]: I0121 07:19:41.220135 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/45545422-414a-433a-9de9-fbfb6e03add3-config-data\") pod \"glance-default-external-api-0\" (UID: \"45545422-414a-433a-9de9-fbfb6e03add3\") " pod="openstack/glance-default-external-api-0" Jan 21 07:19:41 crc kubenswrapper[4893]: I0121 07:19:41.220405 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45545422-414a-433a-9de9-fbfb6e03add3-logs\") pod \"glance-default-external-api-0\" (UID: \"45545422-414a-433a-9de9-fbfb6e03add3\") " pod="openstack/glance-default-external-api-0" Jan 21 07:19:41 crc kubenswrapper[4893]: I0121 07:19:41.220409 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/63916786-c676-4695-84a1-3d3be685de16-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"63916786-c676-4695-84a1-3d3be685de16\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:19:41 crc kubenswrapper[4893]: I0121 07:19:41.221098 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45545422-414a-433a-9de9-fbfb6e03add3-scripts\") pod \"glance-default-external-api-0\" (UID: \"45545422-414a-433a-9de9-fbfb6e03add3\") " pod="openstack/glance-default-external-api-0" Jan 21 07:19:41 crc kubenswrapper[4893]: I0121 07:19:41.222585 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/63916786-c676-4695-84a1-3d3be685de16-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"63916786-c676-4695-84a1-3d3be685de16\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:19:41 crc kubenswrapper[4893]: I0121 07:19:41.236381 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45545422-414a-433a-9de9-fbfb6e03add3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"45545422-414a-433a-9de9-fbfb6e03add3\") " pod="openstack/glance-default-external-api-0" Jan 21 07:19:41 crc kubenswrapper[4893]: I0121 07:19:41.240204 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63916786-c676-4695-84a1-3d3be685de16-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"63916786-c676-4695-84a1-3d3be685de16\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:19:41 crc kubenswrapper[4893]: I0121 07:19:41.240353 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45545422-414a-433a-9de9-fbfb6e03add3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"45545422-414a-433a-9de9-fbfb6e03add3\") " pod="openstack/glance-default-external-api-0" Jan 21 07:19:41 crc kubenswrapper[4893]: I0121 07:19:41.240715 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63916786-c676-4695-84a1-3d3be685de16-config-data\") pod \"glance-default-internal-api-0\" (UID: \"63916786-c676-4695-84a1-3d3be685de16\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:19:41 crc kubenswrapper[4893]: I0121 07:19:41.252470 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hpx8\" (UniqueName: \"kubernetes.io/projected/63916786-c676-4695-84a1-3d3be685de16-kube-api-access-9hpx8\") pod \"glance-default-internal-api-0\" (UID: 
\"63916786-c676-4695-84a1-3d3be685de16\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:19:41 crc kubenswrapper[4893]: I0121 07:19:41.252685 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8wc9\" (UniqueName: \"kubernetes.io/projected/45545422-414a-433a-9de9-fbfb6e03add3-kube-api-access-f8wc9\") pod \"glance-default-external-api-0\" (UID: \"45545422-414a-433a-9de9-fbfb6e03add3\") " pod="openstack/glance-default-external-api-0" Jan 21 07:19:41 crc kubenswrapper[4893]: I0121 07:19:41.277537 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"63916786-c676-4695-84a1-3d3be685de16\") " pod="openstack/glance-default-internal-api-0" Jan 21 07:19:41 crc kubenswrapper[4893]: I0121 07:19:41.295243 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"45545422-414a-433a-9de9-fbfb6e03add3\") " pod="openstack/glance-default-external-api-0" Jan 21 07:19:41 crc kubenswrapper[4893]: I0121 07:19:41.316768 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 07:19:41 crc kubenswrapper[4893]: I0121 07:19:41.317156 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 07:19:41 crc kubenswrapper[4893]: I0121 07:19:41.601644 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0722c0f6-3b88-4b57-bb19-6e63f97b5392" path="/var/lib/kubelet/pods/0722c0f6-3b88-4b57-bb19-6e63f97b5392/volumes" Jan 21 07:19:41 crc kubenswrapper[4893]: I0121 07:19:41.602382 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c301aea-eebc-47b8-9b2d-1feeeaf939d5" path="/var/lib/kubelet/pods/5c301aea-eebc-47b8-9b2d-1feeeaf939d5/volumes" Jan 21 07:19:42 crc kubenswrapper[4893]: I0121 07:19:42.247577 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 07:19:42 crc kubenswrapper[4893]: I0121 07:19:42.315224 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 07:19:42 crc kubenswrapper[4893]: I0121 07:19:42.338547 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 07:19:42 crc kubenswrapper[4893]: W0121 07:19:42.353778 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45545422_414a_433a_9de9_fbfb6e03add3.slice/crio-7534b26bb4c70a70d3098f80c9972d4edc0df1425f39e144f1dba1662c2f2182 WatchSource:0}: Error finding container 7534b26bb4c70a70d3098f80c9972d4edc0df1425f39e144f1dba1662c2f2182: Status 404 returned error can't find the container with id 7534b26bb4c70a70d3098f80c9972d4edc0df1425f39e144f1dba1662c2f2182 Jan 21 07:19:42 crc kubenswrapper[4893]: I0121 07:19:42.455415 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ff32aa7-cc5f-4ff3-a847-0705c9798386-config-data\") pod \"2ff32aa7-cc5f-4ff3-a847-0705c9798386\" (UID: \"2ff32aa7-cc5f-4ff3-a847-0705c9798386\") " Jan 21 07:19:42 crc kubenswrapper[4893]: I0121 07:19:42.455562 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ff32aa7-cc5f-4ff3-a847-0705c9798386-scripts\") pod \"2ff32aa7-cc5f-4ff3-a847-0705c9798386\" (UID: \"2ff32aa7-cc5f-4ff3-a847-0705c9798386\") " Jan 21 07:19:42 crc kubenswrapper[4893]: I0121 07:19:42.455625 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ff32aa7-cc5f-4ff3-a847-0705c9798386-combined-ca-bundle\") pod \"2ff32aa7-cc5f-4ff3-a847-0705c9798386\" (UID: \"2ff32aa7-cc5f-4ff3-a847-0705c9798386\") " Jan 21 07:19:42 crc kubenswrapper[4893]: I0121 07:19:42.455702 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ff32aa7-cc5f-4ff3-a847-0705c9798386-run-httpd\") pod \"2ff32aa7-cc5f-4ff3-a847-0705c9798386\" (UID: \"2ff32aa7-cc5f-4ff3-a847-0705c9798386\") " Jan 21 07:19:42 crc kubenswrapper[4893]: I0121 07:19:42.455745 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ff32aa7-cc5f-4ff3-a847-0705c9798386-log-httpd\") pod \"2ff32aa7-cc5f-4ff3-a847-0705c9798386\" (UID: \"2ff32aa7-cc5f-4ff3-a847-0705c9798386\") " Jan 21 07:19:42 crc kubenswrapper[4893]: I0121 07:19:42.455779 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cc4vr\" (UniqueName: \"kubernetes.io/projected/2ff32aa7-cc5f-4ff3-a847-0705c9798386-kube-api-access-cc4vr\") pod \"2ff32aa7-cc5f-4ff3-a847-0705c9798386\" (UID: \"2ff32aa7-cc5f-4ff3-a847-0705c9798386\") " Jan 21 07:19:42 crc kubenswrapper[4893]: I0121 07:19:42.455819 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2ff32aa7-cc5f-4ff3-a847-0705c9798386-sg-core-conf-yaml\") pod \"2ff32aa7-cc5f-4ff3-a847-0705c9798386\" (UID: \"2ff32aa7-cc5f-4ff3-a847-0705c9798386\") " Jan 21 07:19:42 crc kubenswrapper[4893]: I0121 07:19:42.456573 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ff32aa7-cc5f-4ff3-a847-0705c9798386-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2ff32aa7-cc5f-4ff3-a847-0705c9798386" (UID: "2ff32aa7-cc5f-4ff3-a847-0705c9798386"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:19:42 crc kubenswrapper[4893]: I0121 07:19:42.456839 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ff32aa7-cc5f-4ff3-a847-0705c9798386-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2ff32aa7-cc5f-4ff3-a847-0705c9798386" (UID: "2ff32aa7-cc5f-4ff3-a847-0705c9798386"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:19:42 crc kubenswrapper[4893]: I0121 07:19:42.466839 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ff32aa7-cc5f-4ff3-a847-0705c9798386-scripts" (OuterVolumeSpecName: "scripts") pod "2ff32aa7-cc5f-4ff3-a847-0705c9798386" (UID: "2ff32aa7-cc5f-4ff3-a847-0705c9798386"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:19:42 crc kubenswrapper[4893]: I0121 07:19:42.467295 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ff32aa7-cc5f-4ff3-a847-0705c9798386-kube-api-access-cc4vr" (OuterVolumeSpecName: "kube-api-access-cc4vr") pod "2ff32aa7-cc5f-4ff3-a847-0705c9798386" (UID: "2ff32aa7-cc5f-4ff3-a847-0705c9798386"). InnerVolumeSpecName "kube-api-access-cc4vr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:19:42 crc kubenswrapper[4893]: I0121 07:19:42.503010 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ff32aa7-cc5f-4ff3-a847-0705c9798386-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2ff32aa7-cc5f-4ff3-a847-0705c9798386" (UID: "2ff32aa7-cc5f-4ff3-a847-0705c9798386"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:19:42 crc kubenswrapper[4893]: I0121 07:19:42.560908 4893 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ff32aa7-cc5f-4ff3-a847-0705c9798386-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:42 crc kubenswrapper[4893]: I0121 07:19:42.560950 4893 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ff32aa7-cc5f-4ff3-a847-0705c9798386-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:42 crc kubenswrapper[4893]: I0121 07:19:42.560962 4893 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ff32aa7-cc5f-4ff3-a847-0705c9798386-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:42 crc kubenswrapper[4893]: I0121 07:19:42.560979 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cc4vr\" (UniqueName: \"kubernetes.io/projected/2ff32aa7-cc5f-4ff3-a847-0705c9798386-kube-api-access-cc4vr\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:42 crc kubenswrapper[4893]: I0121 07:19:42.560991 4893 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2ff32aa7-cc5f-4ff3-a847-0705c9798386-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:42 crc kubenswrapper[4893]: I0121 07:19:42.561725 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ff32aa7-cc5f-4ff3-a847-0705c9798386-config-data" (OuterVolumeSpecName: "config-data") pod "2ff32aa7-cc5f-4ff3-a847-0705c9798386" (UID: "2ff32aa7-cc5f-4ff3-a847-0705c9798386"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:19:42 crc kubenswrapper[4893]: I0121 07:19:42.562789 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ff32aa7-cc5f-4ff3-a847-0705c9798386-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ff32aa7-cc5f-4ff3-a847-0705c9798386" (UID: "2ff32aa7-cc5f-4ff3-a847-0705c9798386"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:19:42 crc kubenswrapper[4893]: I0121 07:19:42.662411 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ff32aa7-cc5f-4ff3-a847-0705c9798386-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:42 crc kubenswrapper[4893]: I0121 07:19:42.662441 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ff32aa7-cc5f-4ff3-a847-0705c9798386-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:19:42 crc kubenswrapper[4893]: I0121 07:19:42.853158 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"63916786-c676-4695-84a1-3d3be685de16","Type":"ContainerStarted","Data":"b36eeeb0948f38b40e09fa379e2cdbecc5a9f1128c7e16702611b396f1fd5337"} Jan 21 07:19:42 crc kubenswrapper[4893]: I0121 07:19:42.861327 4893 generic.go:334] "Generic (PLEG): container finished" podID="2ff32aa7-cc5f-4ff3-a847-0705c9798386" containerID="0da5529129e93683fb00eb3d6e40cdcd20d65910585f178ecada4cd6deef6655" exitCode=0 Jan 21 07:19:42 crc kubenswrapper[4893]: I0121 07:19:42.861415 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ff32aa7-cc5f-4ff3-a847-0705c9798386","Type":"ContainerDied","Data":"0da5529129e93683fb00eb3d6e40cdcd20d65910585f178ecada4cd6deef6655"} Jan 21 07:19:42 crc kubenswrapper[4893]: I0121 07:19:42.861455 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ff32aa7-cc5f-4ff3-a847-0705c9798386","Type":"ContainerDied","Data":"6feb76e75a5045c1e0c5b0c21fec6c9eef9b71d8853ef5de61eb24bb83bdc6f4"} Jan 21 07:19:42 crc kubenswrapper[4893]: I0121 07:19:42.861478 4893 scope.go:117] "RemoveContainer" containerID="b2636c805db57dc1616bd7f0e4998d481055e546849369463e5c45e5c2452955" Jan 21 07:19:42 crc kubenswrapper[4893]: I0121 07:19:42.861642 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 07:19:42 crc kubenswrapper[4893]: I0121 07:19:42.865638 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"45545422-414a-433a-9de9-fbfb6e03add3","Type":"ContainerStarted","Data":"7534b26bb4c70a70d3098f80c9972d4edc0df1425f39e144f1dba1662c2f2182"} Jan 21 07:19:42 crc kubenswrapper[4893]: I0121 07:19:42.914548 4893 scope.go:117] "RemoveContainer" containerID="a4b53ba1ef698330b3a3a242e4dbc06f6254cb586a9447cee6bd2d4575591ab0" Jan 21 07:19:42 crc kubenswrapper[4893]: I0121 07:19:42.917116 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 07:19:42 crc kubenswrapper[4893]: I0121 07:19:42.941754 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 07:19:42 crc kubenswrapper[4893]: I0121 07:19:42.952378 4893 scope.go:117] "RemoveContainer" containerID="8ddc8045de363665f9e39c90c99ed310620f36a8fc59b0db16d99771976e9236" Jan 21 07:19:42 crc kubenswrapper[4893]: I0121 07:19:42.962376 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 07:19:42 crc kubenswrapper[4893]: E0121 07:19:42.963012 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ff32aa7-cc5f-4ff3-a847-0705c9798386" containerName="sg-core" Jan 21 07:19:42 crc kubenswrapper[4893]: I0121 07:19:42.963086 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ff32aa7-cc5f-4ff3-a847-0705c9798386" containerName="sg-core" Jan 21 07:19:42 crc kubenswrapper[4893]: E0121 07:19:42.963171 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ff32aa7-cc5f-4ff3-a847-0705c9798386" containerName="ceilometer-notification-agent" Jan 21 07:19:42 crc kubenswrapper[4893]: I0121 07:19:42.963245 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ff32aa7-cc5f-4ff3-a847-0705c9798386" containerName="ceilometer-notification-agent" Jan 21 07:19:42 crc kubenswrapper[4893]: E0121 07:19:42.963309 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ff32aa7-cc5f-4ff3-a847-0705c9798386" containerName="ceilometer-central-agent" Jan 21 07:19:42 crc kubenswrapper[4893]: I0121 07:19:42.964032 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ff32aa7-cc5f-4ff3-a847-0705c9798386" containerName="ceilometer-central-agent" Jan 21 07:19:42 crc kubenswrapper[4893]: E0121 07:19:42.964131 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ff32aa7-cc5f-4ff3-a847-0705c9798386" containerName="proxy-httpd" Jan 21 07:19:42 crc kubenswrapper[4893]: I0121 07:19:42.964204 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ff32aa7-cc5f-4ff3-a847-0705c9798386" containerName="proxy-httpd" Jan 21 07:19:42 crc kubenswrapper[4893]: I0121 07:19:42.964590 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ff32aa7-cc5f-4ff3-a847-0705c9798386" containerName="sg-core" Jan 21 07:19:42 crc kubenswrapper[4893]: I0121 07:19:42.964761 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ff32aa7-cc5f-4ff3-a847-0705c9798386" containerName="ceilometer-notification-agent" Jan 21 07:19:42 crc kubenswrapper[4893]: I0121 07:19:42.964876 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ff32aa7-cc5f-4ff3-a847-0705c9798386" containerName="ceilometer-central-agent" Jan 21 07:19:42 crc kubenswrapper[4893]: I0121 07:19:42.964956 4893 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2ff32aa7-cc5f-4ff3-a847-0705c9798386" containerName="proxy-httpd" Jan 21 07:19:42 crc kubenswrapper[4893]: I0121 07:19:42.967021 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 07:19:42 crc kubenswrapper[4893]: I0121 07:19:42.974118 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 07:19:42 crc kubenswrapper[4893]: I0121 07:19:42.974390 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 07:19:42 crc kubenswrapper[4893]: I0121 07:19:42.996374 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 07:19:43 crc kubenswrapper[4893]: I0121 07:19:43.004989 4893 scope.go:117] "RemoveContainer" containerID="0da5529129e93683fb00eb3d6e40cdcd20d65910585f178ecada4cd6deef6655" Jan 21 07:19:43 crc kubenswrapper[4893]: I0121 07:19:43.030659 4893 scope.go:117] "RemoveContainer" containerID="b2636c805db57dc1616bd7f0e4998d481055e546849369463e5c45e5c2452955" Jan 21 07:19:43 crc kubenswrapper[4893]: E0121 07:19:43.031341 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2636c805db57dc1616bd7f0e4998d481055e546849369463e5c45e5c2452955\": container with ID starting with b2636c805db57dc1616bd7f0e4998d481055e546849369463e5c45e5c2452955 not found: ID does not exist" containerID="b2636c805db57dc1616bd7f0e4998d481055e546849369463e5c45e5c2452955" Jan 21 07:19:43 crc kubenswrapper[4893]: I0121 07:19:43.031371 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2636c805db57dc1616bd7f0e4998d481055e546849369463e5c45e5c2452955"} err="failed to get container status \"b2636c805db57dc1616bd7f0e4998d481055e546849369463e5c45e5c2452955\": rpc error: code = NotFound desc = could not find container \"b2636c805db57dc1616bd7f0e4998d481055e546849369463e5c45e5c2452955\": container with ID starting with b2636c805db57dc1616bd7f0e4998d481055e546849369463e5c45e5c2452955 not found: ID does not exist" Jan 21 07:19:43 crc kubenswrapper[4893]: I0121 07:19:43.031393 4893 scope.go:117] "RemoveContainer" containerID="a4b53ba1ef698330b3a3a242e4dbc06f6254cb586a9447cee6bd2d4575591ab0" Jan 21 07:19:43 crc kubenswrapper[4893]: E0121 07:19:43.032938 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4b53ba1ef698330b3a3a242e4dbc06f6254cb586a9447cee6bd2d4575591ab0\": container with ID starting with a4b53ba1ef698330b3a3a242e4dbc06f6254cb586a9447cee6bd2d4575591ab0 not found: ID does not exist" containerID="a4b53ba1ef698330b3a3a242e4dbc06f6254cb586a9447cee6bd2d4575591ab0" Jan 21 07:19:43 crc kubenswrapper[4893]: I0121 07:19:43.032985 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4b53ba1ef698330b3a3a242e4dbc06f6254cb586a9447cee6bd2d4575591ab0"} err="failed to get container status \"a4b53ba1ef698330b3a3a242e4dbc06f6254cb586a9447cee6bd2d4575591ab0\": rpc error: code = NotFound desc = could not find container \"a4b53ba1ef698330b3a3a242e4dbc06f6254cb586a9447cee6bd2d4575591ab0\": container with ID starting with a4b53ba1ef698330b3a3a242e4dbc06f6254cb586a9447cee6bd2d4575591ab0 not found: ID does not exist" Jan 21 07:19:43 crc kubenswrapper[4893]: I0121 07:19:43.033017 4893 scope.go:117] "RemoveContainer" containerID="8ddc8045de363665f9e39c90c99ed310620f36a8fc59b0db16d99771976e9236" 
Jan 21 07:19:43 crc kubenswrapper[4893]: E0121 07:19:43.033879 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ddc8045de363665f9e39c90c99ed310620f36a8fc59b0db16d99771976e9236\": container with ID starting with 8ddc8045de363665f9e39c90c99ed310620f36a8fc59b0db16d99771976e9236 not found: ID does not exist" containerID="8ddc8045de363665f9e39c90c99ed310620f36a8fc59b0db16d99771976e9236"
Jan 21 07:19:43 crc kubenswrapper[4893]: I0121 07:19:43.033953 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ddc8045de363665f9e39c90c99ed310620f36a8fc59b0db16d99771976e9236"} err="failed to get container status \"8ddc8045de363665f9e39c90c99ed310620f36a8fc59b0db16d99771976e9236\": rpc error: code = NotFound desc = could not find container \"8ddc8045de363665f9e39c90c99ed310620f36a8fc59b0db16d99771976e9236\": container with ID starting with 8ddc8045de363665f9e39c90c99ed310620f36a8fc59b0db16d99771976e9236 not found: ID does not exist"
Jan 21 07:19:43 crc kubenswrapper[4893]: I0121 07:19:43.033987 4893 scope.go:117] "RemoveContainer" containerID="0da5529129e93683fb00eb3d6e40cdcd20d65910585f178ecada4cd6deef6655"
Jan 21 07:19:43 crc kubenswrapper[4893]: E0121 07:19:43.034393 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0da5529129e93683fb00eb3d6e40cdcd20d65910585f178ecada4cd6deef6655\": container with ID starting with 0da5529129e93683fb00eb3d6e40cdcd20d65910585f178ecada4cd6deef6655 not found: ID does not exist" containerID="0da5529129e93683fb00eb3d6e40cdcd20d65910585f178ecada4cd6deef6655"
Jan 21 07:19:43 crc kubenswrapper[4893]: I0121 07:19:43.034418 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0da5529129e93683fb00eb3d6e40cdcd20d65910585f178ecada4cd6deef6655"} err="failed to get container status \"0da5529129e93683fb00eb3d6e40cdcd20d65910585f178ecada4cd6deef6655\": rpc error: code = NotFound desc = could not find container \"0da5529129e93683fb00eb3d6e40cdcd20d65910585f178ecada4cd6deef6655\": container with ID starting with 0da5529129e93683fb00eb3d6e40cdcd20d65910585f178ecada4cd6deef6655 not found: ID does not exist"
Jan 21 07:19:43 crc kubenswrapper[4893]: I0121 07:19:43.070057 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5dc41f5-7d4c-41d6-90aa-655987650d00-config-data\") pod \"ceilometer-0\" (UID: \"b5dc41f5-7d4c-41d6-90aa-655987650d00\") " pod="openstack/ceilometer-0"
Jan 21 07:19:43 crc kubenswrapper[4893]: I0121 07:19:43.070130 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5dc41f5-7d4c-41d6-90aa-655987650d00-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b5dc41f5-7d4c-41d6-90aa-655987650d00\") " pod="openstack/ceilometer-0"
Jan 21 07:19:43 crc kubenswrapper[4893]: I0121 07:19:43.070160 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5dc41f5-7d4c-41d6-90aa-655987650d00-scripts\") pod \"ceilometer-0\" (UID: \"b5dc41f5-7d4c-41d6-90aa-655987650d00\") " pod="openstack/ceilometer-0"
Jan 21 07:19:43 crc kubenswrapper[4893]: I0121 07:19:43.070196 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b5dc41f5-7d4c-41d6-90aa-655987650d00-log-httpd\") pod \"ceilometer-0\" (UID: \"b5dc41f5-7d4c-41d6-90aa-655987650d00\") " pod="openstack/ceilometer-0"
Jan 21 07:19:43 crc kubenswrapper[4893]: I0121 07:19:43.070223 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b5dc41f5-7d4c-41d6-90aa-655987650d00-run-httpd\") pod \"ceilometer-0\" (UID: \"b5dc41f5-7d4c-41d6-90aa-655987650d00\") " pod="openstack/ceilometer-0"
Jan 21 07:19:43 crc kubenswrapper[4893]: I0121 07:19:43.070480 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp6qp\" (UniqueName: \"kubernetes.io/projected/b5dc41f5-7d4c-41d6-90aa-655987650d00-kube-api-access-lp6qp\") pod \"ceilometer-0\" (UID: \"b5dc41f5-7d4c-41d6-90aa-655987650d00\") " pod="openstack/ceilometer-0"
Jan 21 07:19:43 crc kubenswrapper[4893]: I0121 07:19:43.070681 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b5dc41f5-7d4c-41d6-90aa-655987650d00-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b5dc41f5-7d4c-41d6-90aa-655987650d00\") " pod="openstack/ceilometer-0"
Jan 21 07:19:43 crc kubenswrapper[4893]: I0121 07:19:43.172335 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b5dc41f5-7d4c-41d6-90aa-655987650d00-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b5dc41f5-7d4c-41d6-90aa-655987650d00\") " pod="openstack/ceilometer-0"
Jan 21 07:19:43 crc kubenswrapper[4893]: I0121 07:19:43.172426 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5dc41f5-7d4c-41d6-90aa-655987650d00-config-data\") pod \"ceilometer-0\" (UID: \"b5dc41f5-7d4c-41d6-90aa-655987650d00\") " pod="openstack/ceilometer-0"
Jan 21 07:19:43 crc kubenswrapper[4893]: I0121 07:19:43.172458 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5dc41f5-7d4c-41d6-90aa-655987650d00-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b5dc41f5-7d4c-41d6-90aa-655987650d00\") " pod="openstack/ceilometer-0"
Jan 21 07:19:43 crc kubenswrapper[4893]: I0121 07:19:43.172481 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5dc41f5-7d4c-41d6-90aa-655987650d00-scripts\") pod \"ceilometer-0\" (UID: \"b5dc41f5-7d4c-41d6-90aa-655987650d00\") " pod="openstack/ceilometer-0"
Jan 21 07:19:43 crc kubenswrapper[4893]: I0121 07:19:43.172501 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b5dc41f5-7d4c-41d6-90aa-655987650d00-log-httpd\") pod \"ceilometer-0\" (UID: \"b5dc41f5-7d4c-41d6-90aa-655987650d00\") " pod="openstack/ceilometer-0"
Jan 21 07:19:43 crc kubenswrapper[4893]: I0121 07:19:43.172543 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b5dc41f5-7d4c-41d6-90aa-655987650d00-run-httpd\") pod \"ceilometer-0\" (UID: \"b5dc41f5-7d4c-41d6-90aa-655987650d00\") " pod="openstack/ceilometer-0"
Jan 21 07:19:43 crc kubenswrapper[4893]: I0121 07:19:43.172610 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp6qp\" (UniqueName: \"kubernetes.io/projected/b5dc41f5-7d4c-41d6-90aa-655987650d00-kube-api-access-lp6qp\") pod \"ceilometer-0\" (UID: \"b5dc41f5-7d4c-41d6-90aa-655987650d00\") " pod="openstack/ceilometer-0"
Jan 21 07:19:43 crc kubenswrapper[4893]: I0121 07:19:43.173985 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b5dc41f5-7d4c-41d6-90aa-655987650d00-log-httpd\") pod \"ceilometer-0\" (UID: \"b5dc41f5-7d4c-41d6-90aa-655987650d00\") " pod="openstack/ceilometer-0"
Jan 21 07:19:43 crc kubenswrapper[4893]: I0121 07:19:43.175041 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b5dc41f5-7d4c-41d6-90aa-655987650d00-run-httpd\") pod \"ceilometer-0\" (UID: \"b5dc41f5-7d4c-41d6-90aa-655987650d00\") " pod="openstack/ceilometer-0"
Jan 21 07:19:43 crc kubenswrapper[4893]: I0121 07:19:43.178154 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5dc41f5-7d4c-41d6-90aa-655987650d00-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b5dc41f5-7d4c-41d6-90aa-655987650d00\") " pod="openstack/ceilometer-0"
Jan 21 07:19:43 crc kubenswrapper[4893]: I0121 07:19:43.180730 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b5dc41f5-7d4c-41d6-90aa-655987650d00-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b5dc41f5-7d4c-41d6-90aa-655987650d00\") " pod="openstack/ceilometer-0"
Jan 21 07:19:43 crc kubenswrapper[4893]: I0121 07:19:43.185992 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5dc41f5-7d4c-41d6-90aa-655987650d00-config-data\") pod \"ceilometer-0\" (UID: \"b5dc41f5-7d4c-41d6-90aa-655987650d00\") " pod="openstack/ceilometer-0"
Jan 21 07:19:43 crc kubenswrapper[4893]: I0121 07:19:43.186836 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5dc41f5-7d4c-41d6-90aa-655987650d00-scripts\") pod \"ceilometer-0\" (UID: \"b5dc41f5-7d4c-41d6-90aa-655987650d00\") " pod="openstack/ceilometer-0"
Jan 21 07:19:43 crc kubenswrapper[4893]: I0121 07:19:43.190752 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp6qp\" (UniqueName: \"kubernetes.io/projected/b5dc41f5-7d4c-41d6-90aa-655987650d00-kube-api-access-lp6qp\") pod \"ceilometer-0\" (UID: \"b5dc41f5-7d4c-41d6-90aa-655987650d00\") " pod="openstack/ceilometer-0"
Jan 21 07:19:43 crc kubenswrapper[4893]: I0121 07:19:43.306940 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 21 07:19:43 crc kubenswrapper[4893]: I0121 07:19:43.597735 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ff32aa7-cc5f-4ff3-a847-0705c9798386" path="/var/lib/kubelet/pods/2ff32aa7-cc5f-4ff3-a847-0705c9798386/volumes"
Jan 21 07:19:43 crc kubenswrapper[4893]: I0121 07:19:43.861210 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 07:19:43 crc kubenswrapper[4893]: W0121 07:19:43.878793 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb5dc41f5_7d4c_41d6_90aa_655987650d00.slice/crio-e1b4fa7cb31a92f17e5351cd443b40f25841f206dc3bf6cca16f861bc0feac6d WatchSource:0}: Error finding container e1b4fa7cb31a92f17e5351cd443b40f25841f206dc3bf6cca16f861bc0feac6d: Status 404 returned error can't find the container with id e1b4fa7cb31a92f17e5351cd443b40f25841f206dc3bf6cca16f861bc0feac6d
Jan 21 07:19:43 crc kubenswrapper[4893]: I0121 07:19:43.884931 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"45545422-414a-433a-9de9-fbfb6e03add3","Type":"ContainerStarted","Data":"98a381bf3587dbe6a6decea70f6e5a06994af8d254a33bc9496fa0afb1283c8d"}
Jan 21 07:19:43 crc kubenswrapper[4893]: I0121 07:19:43.890161 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"63916786-c676-4695-84a1-3d3be685de16","Type":"ContainerStarted","Data":"cf025bb163e48ab531bc02302eeaab4063f97ae75eabc9949d6dec3d92a30857"}
Jan 21 07:19:44 crc kubenswrapper[4893]: I0121 07:19:44.576280 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-znqnp"]
Jan 21 07:19:44 crc kubenswrapper[4893]: I0121 07:19:44.578115 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-znqnp"
Jan 21 07:19:44 crc kubenswrapper[4893]: I0121 07:19:44.582361 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-cw7tb"
Jan 21 07:19:44 crc kubenswrapper[4893]: I0121 07:19:44.582397 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Jan 21 07:19:44 crc kubenswrapper[4893]: I0121 07:19:44.582796 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Jan 21 07:19:44 crc kubenswrapper[4893]: I0121 07:19:44.588658 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-znqnp"]
Jan 21 07:19:44 crc kubenswrapper[4893]: I0121 07:19:44.617799 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9456679d-3a66-4b28-b43b-be72ca19d835-config-data\") pod \"nova-cell0-conductor-db-sync-znqnp\" (UID: \"9456679d-3a66-4b28-b43b-be72ca19d835\") " pod="openstack/nova-cell0-conductor-db-sync-znqnp"
Jan 21 07:19:44 crc kubenswrapper[4893]: I0121 07:19:44.617982 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmm46\" (UniqueName: \"kubernetes.io/projected/9456679d-3a66-4b28-b43b-be72ca19d835-kube-api-access-hmm46\") pod \"nova-cell0-conductor-db-sync-znqnp\" (UID: \"9456679d-3a66-4b28-b43b-be72ca19d835\") " pod="openstack/nova-cell0-conductor-db-sync-znqnp"
Jan 21 07:19:44 crc kubenswrapper[4893]: I0121 07:19:44.618028 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9456679d-3a66-4b28-b43b-be72ca19d835-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-znqnp\" (UID: \"9456679d-3a66-4b28-b43b-be72ca19d835\") " pod="openstack/nova-cell0-conductor-db-sync-znqnp"
Jan 21 07:19:44 crc kubenswrapper[4893]: I0121 07:19:44.618080 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9456679d-3a66-4b28-b43b-be72ca19d835-scripts\") pod \"nova-cell0-conductor-db-sync-znqnp\" (UID: \"9456679d-3a66-4b28-b43b-be72ca19d835\") " pod="openstack/nova-cell0-conductor-db-sync-znqnp"
Jan 21 07:19:44 crc kubenswrapper[4893]: I0121 07:19:44.720273 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9456679d-3a66-4b28-b43b-be72ca19d835-scripts\") pod \"nova-cell0-conductor-db-sync-znqnp\" (UID: \"9456679d-3a66-4b28-b43b-be72ca19d835\") " pod="openstack/nova-cell0-conductor-db-sync-znqnp"
Jan 21 07:19:44 crc kubenswrapper[4893]: I0121 07:19:44.720355 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9456679d-3a66-4b28-b43b-be72ca19d835-config-data\") pod \"nova-cell0-conductor-db-sync-znqnp\" (UID: \"9456679d-3a66-4b28-b43b-be72ca19d835\") " pod="openstack/nova-cell0-conductor-db-sync-znqnp"
Jan 21 07:19:44 crc kubenswrapper[4893]: I0121 07:19:44.720468 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmm46\" (UniqueName: \"kubernetes.io/projected/9456679d-3a66-4b28-b43b-be72ca19d835-kube-api-access-hmm46\") pod \"nova-cell0-conductor-db-sync-znqnp\" (UID: \"9456679d-3a66-4b28-b43b-be72ca19d835\") " pod="openstack/nova-cell0-conductor-db-sync-znqnp"
Jan 21 07:19:44 crc kubenswrapper[4893]: I0121 07:19:44.720500 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9456679d-3a66-4b28-b43b-be72ca19d835-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-znqnp\" (UID: \"9456679d-3a66-4b28-b43b-be72ca19d835\") " pod="openstack/nova-cell0-conductor-db-sync-znqnp"
Jan 21 07:19:44 crc kubenswrapper[4893]: I0121 07:19:44.729409 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9456679d-3a66-4b28-b43b-be72ca19d835-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-znqnp\" (UID: \"9456679d-3a66-4b28-b43b-be72ca19d835\") " pod="openstack/nova-cell0-conductor-db-sync-znqnp"
Jan 21 07:19:44 crc kubenswrapper[4893]: I0121 07:19:44.734436 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9456679d-3a66-4b28-b43b-be72ca19d835-config-data\") pod \"nova-cell0-conductor-db-sync-znqnp\" (UID: \"9456679d-3a66-4b28-b43b-be72ca19d835\") " pod="openstack/nova-cell0-conductor-db-sync-znqnp"
Jan 21 07:19:44 crc kubenswrapper[4893]: I0121 07:19:44.749267 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9456679d-3a66-4b28-b43b-be72ca19d835-scripts\") pod \"nova-cell0-conductor-db-sync-znqnp\" (UID: \"9456679d-3a66-4b28-b43b-be72ca19d835\") " pod="openstack/nova-cell0-conductor-db-sync-znqnp"
Jan 21 07:19:44 crc kubenswrapper[4893]: I0121 07:19:44.752467 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmm46\" (UniqueName: \"kubernetes.io/projected/9456679d-3a66-4b28-b43b-be72ca19d835-kube-api-access-hmm46\") pod \"nova-cell0-conductor-db-sync-znqnp\" (UID: \"9456679d-3a66-4b28-b43b-be72ca19d835\") " pod="openstack/nova-cell0-conductor-db-sync-znqnp"
Jan 21 07:19:44 crc kubenswrapper[4893]: I0121 07:19:44.935214 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"45545422-414a-433a-9de9-fbfb6e03add3","Type":"ContainerStarted","Data":"9af6af2cf0b6fc56ff8fff6040414d4c6371bd930a27e4d908e26718f4910e2e"}
Jan 21 07:19:44 crc kubenswrapper[4893]: I0121 07:19:44.943368 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"63916786-c676-4695-84a1-3d3be685de16","Type":"ContainerStarted","Data":"2ed56fea6ed96fd765f43737ab0141951ab632e2d98acd1cb85189751d716818"}
Jan 21 07:19:44 crc kubenswrapper[4893]: I0121 07:19:44.945986 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b5dc41f5-7d4c-41d6-90aa-655987650d00","Type":"ContainerStarted","Data":"8524ced5159a454fd7adb18e41ccb2ac2917416d2af6c4c28c2100757e4fe98a"}
Jan 21 07:19:44 crc kubenswrapper[4893]: I0121 07:19:44.946030 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b5dc41f5-7d4c-41d6-90aa-655987650d00","Type":"ContainerStarted","Data":"e1b4fa7cb31a92f17e5351cd443b40f25841f206dc3bf6cca16f861bc0feac6d"}
Jan 21 07:19:44 crc kubenswrapper[4893]: I0121 07:19:44.946238 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-znqnp"
Jan 21 07:19:45 crc kubenswrapper[4893]: I0121 07:19:45.005048 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.0050275 podStartE2EDuration="5.0050275s" podCreationTimestamp="2026-01-21 07:19:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:19:44.998988215 +0000 UTC m=+1526.229334117" watchObservedRunningTime="2026-01-21 07:19:45.0050275 +0000 UTC m=+1526.235373402"
Jan 21 07:19:45 crc kubenswrapper[4893]: I0121 07:19:45.007106 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.00709545 podStartE2EDuration="5.00709545s" podCreationTimestamp="2026-01-21 07:19:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:19:44.97645096 +0000 UTC m=+1526.206796862" watchObservedRunningTime="2026-01-21 07:19:45.00709545 +0000 UTC m=+1526.237441352"
Jan 21 07:19:45 crc kubenswrapper[4893]: I0121 07:19:45.487027 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-znqnp"]
Jan 21 07:19:45 crc kubenswrapper[4893]: I0121 07:19:45.547366 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 07:19:45 crc kubenswrapper[4893]: I0121 07:19:45.964925 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-znqnp" event={"ID":"9456679d-3a66-4b28-b43b-be72ca19d835","Type":"ContainerStarted","Data":"6931375af6c51301ef27d989ec61507a88019af1194f5ed2b9a5849e2582d5cf"}
Jan 21 07:19:45 crc kubenswrapper[4893]: I0121 07:19:45.975150 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b5dc41f5-7d4c-41d6-90aa-655987650d00","Type":"ContainerStarted","Data":"33ee6f0d17cd2b7306ce638e24d0a16b9ba6a02be6fa8d7c81bcfe48d75bea77"}
Jan 21 07:19:46 crc kubenswrapper[4893]: I0121 07:19:46.988522 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b5dc41f5-7d4c-41d6-90aa-655987650d00","Type":"ContainerStarted","Data":"cdcecdf46aa6835bd2e75eb2f9df5c25eead4f01499ee08a8194428ed5a9c0bf"}
Jan 21 07:19:48 crc kubenswrapper[4893]: I0121 07:19:48.004532 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b5dc41f5-7d4c-41d6-90aa-655987650d00","Type":"ContainerStarted","Data":"94d17702c8b24174b6956b8ac0a2d516845b1fbbea6dd14589739e66b9a55ba2"}
Jan 21 07:19:48 crc kubenswrapper[4893]: I0121 07:19:48.004955 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 21 07:19:48 crc kubenswrapper[4893]: I0121 07:19:48.004763 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b5dc41f5-7d4c-41d6-90aa-655987650d00" containerName="sg-core" containerID="cri-o://cdcecdf46aa6835bd2e75eb2f9df5c25eead4f01499ee08a8194428ed5a9c0bf" gracePeriod=30
Jan 21 07:19:48 crc kubenswrapper[4893]: I0121 07:19:48.004715 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b5dc41f5-7d4c-41d6-90aa-655987650d00" containerName="ceilometer-central-agent" containerID="cri-o://8524ced5159a454fd7adb18e41ccb2ac2917416d2af6c4c28c2100757e4fe98a" gracePeriod=30
Jan 21 07:19:48 crc kubenswrapper[4893]: I0121 07:19:48.004847 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b5dc41f5-7d4c-41d6-90aa-655987650d00" containerName="ceilometer-notification-agent" containerID="cri-o://33ee6f0d17cd2b7306ce638e24d0a16b9ba6a02be6fa8d7c81bcfe48d75bea77" gracePeriod=30
Jan 21 07:19:48 crc kubenswrapper[4893]: I0121 07:19:48.004787 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b5dc41f5-7d4c-41d6-90aa-655987650d00" containerName="proxy-httpd" containerID="cri-o://94d17702c8b24174b6956b8ac0a2d516845b1fbbea6dd14589739e66b9a55ba2" gracePeriod=30
Jan 21 07:19:48 crc kubenswrapper[4893]: I0121 07:19:48.052094 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.28123229 podStartE2EDuration="6.052070803s" podCreationTimestamp="2026-01-21 07:19:42 +0000 UTC" firstStartedPulling="2026-01-21 07:19:43.881240283 +0000 UTC m=+1525.111586185" lastFinishedPulling="2026-01-21 07:19:47.652078796 +0000 UTC m=+1528.882424698" observedRunningTime="2026-01-21 07:19:48.032892156 +0000 UTC m=+1529.263238058" watchObservedRunningTime="2026-01-21 07:19:48.052070803 +0000 UTC m=+1529.282416705"
Jan 21 07:19:49 crc kubenswrapper[4893]: I0121 07:19:49.019270 4893 generic.go:334] "Generic (PLEG): container finished" podID="b5dc41f5-7d4c-41d6-90aa-655987650d00" containerID="cdcecdf46aa6835bd2e75eb2f9df5c25eead4f01499ee08a8194428ed5a9c0bf" exitCode=2
Jan 21 07:19:49 crc kubenswrapper[4893]: I0121 07:19:49.019309 4893 generic.go:334] "Generic (PLEG): container finished" podID="b5dc41f5-7d4c-41d6-90aa-655987650d00" containerID="33ee6f0d17cd2b7306ce638e24d0a16b9ba6a02be6fa8d7c81bcfe48d75bea77" exitCode=0
Jan 21 07:19:49 crc kubenswrapper[4893]: I0121 07:19:49.019332 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b5dc41f5-7d4c-41d6-90aa-655987650d00","Type":"ContainerDied","Data":"cdcecdf46aa6835bd2e75eb2f9df5c25eead4f01499ee08a8194428ed5a9c0bf"}
Jan 21 07:19:49 crc kubenswrapper[4893]: I0121 07:19:49.019362 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b5dc41f5-7d4c-41d6-90aa-655987650d00","Type":"ContainerDied","Data":"33ee6f0d17cd2b7306ce638e24d0a16b9ba6a02be6fa8d7c81bcfe48d75bea77"}
Jan 21 07:19:51 crc kubenswrapper[4893]: I0121 07:19:51.317533 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 21 07:19:51 crc kubenswrapper[4893]: I0121 07:19:51.318133 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 21 07:19:51 crc kubenswrapper[4893]: I0121 07:19:51.318163 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 21 07:19:51 crc kubenswrapper[4893]: I0121 07:19:51.318177 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 21 07:19:51 crc kubenswrapper[4893]: I0121 07:19:51.349601 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 21 07:19:51 crc kubenswrapper[4893]: I0121 07:19:51.370431 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 21 07:19:51 crc kubenswrapper[4893]: I0121 07:19:51.380642 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 21 07:19:51 crc kubenswrapper[4893]: I0121 07:19:51.382463 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 21 07:19:52 crc kubenswrapper[4893]: I0121 07:19:52.068828 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 21 07:19:52 crc kubenswrapper[4893]: I0121 07:19:52.068882 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 21 07:19:52 crc kubenswrapper[4893]: I0121 07:19:52.068896 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 21 07:19:52 crc kubenswrapper[4893]: I0121 07:19:52.068908 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 21 07:19:54 crc kubenswrapper[4893]: I0121 07:19:54.096593 4893 generic.go:334] "Generic (PLEG): container finished" podID="b5dc41f5-7d4c-41d6-90aa-655987650d00" containerID="8524ced5159a454fd7adb18e41ccb2ac2917416d2af6c4c28c2100757e4fe98a" exitCode=0
Jan 21 07:19:54 crc kubenswrapper[4893]: I0121 07:19:54.097316 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b5dc41f5-7d4c-41d6-90aa-655987650d00","Type":"ContainerDied","Data":"8524ced5159a454fd7adb18e41ccb2ac2917416d2af6c4c28c2100757e4fe98a"}
Jan 21 07:19:54 crc kubenswrapper[4893]: I0121 07:19:54.163604 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 21 07:19:54 crc kubenswrapper[4893]: I0121 07:19:54.163773 4893 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 21 07:19:54 crc kubenswrapper[4893]: I0121 07:19:54.166801 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 21 07:19:54 crc kubenswrapper[4893]: I0121 07:19:54.404094 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 21 07:19:54 crc kubenswrapper[4893]: I0121 07:19:54.404239 4893 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 21 07:19:54 crc kubenswrapper[4893]: I0121 07:19:54.405291 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 21 07:19:55 crc kubenswrapper[4893]: I0121 07:19:55.108862 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-znqnp" event={"ID":"9456679d-3a66-4b28-b43b-be72ca19d835","Type":"ContainerStarted","Data":"34ec56016ccbe6bf1d879400f1a064708e3bd82739bb36206e2a0d44a5e8618c"}
Jan 21 07:19:55 crc kubenswrapper[4893]: I0121 07:19:55.138435 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-znqnp" podStartSLOduration=2.752611644 podStartE2EDuration="11.138410065s" podCreationTimestamp="2026-01-21 07:19:44 +0000 UTC" firstStartedPulling="2026-01-21 07:19:45.499919813 +0000 UTC m=+1526.730265715" lastFinishedPulling="2026-01-21 07:19:53.885718234 +0000 UTC m=+1535.116064136" observedRunningTime="2026-01-21 07:19:55.133811572 +0000 UTC m=+1536.364157484" watchObservedRunningTime="2026-01-21 07:19:55.138410065 +0000 UTC m=+1536.368755967"
Jan 21 07:20:08 crc kubenswrapper[4893]: I0121 07:20:08.232608 4893 generic.go:334] "Generic (PLEG): container finished" podID="9456679d-3a66-4b28-b43b-be72ca19d835" containerID="34ec56016ccbe6bf1d879400f1a064708e3bd82739bb36206e2a0d44a5e8618c" exitCode=0
Jan 21 07:20:08 crc kubenswrapper[4893]: I0121 07:20:08.232764 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-znqnp" event={"ID":"9456679d-3a66-4b28-b43b-be72ca19d835","Type":"ContainerDied","Data":"34ec56016ccbe6bf1d879400f1a064708e3bd82739bb36206e2a0d44a5e8618c"}
Jan 21 07:20:09 crc kubenswrapper[4893]: I0121 07:20:09.656307 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-znqnp"
Jan 21 07:20:09 crc kubenswrapper[4893]: I0121 07:20:09.813124 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmm46\" (UniqueName: \"kubernetes.io/projected/9456679d-3a66-4b28-b43b-be72ca19d835-kube-api-access-hmm46\") pod \"9456679d-3a66-4b28-b43b-be72ca19d835\" (UID: \"9456679d-3a66-4b28-b43b-be72ca19d835\") "
Jan 21 07:20:09 crc kubenswrapper[4893]: I0121 07:20:09.813588 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9456679d-3a66-4b28-b43b-be72ca19d835-config-data\") pod \"9456679d-3a66-4b28-b43b-be72ca19d835\" (UID: \"9456679d-3a66-4b28-b43b-be72ca19d835\") "
Jan 21 07:20:09 crc kubenswrapper[4893]: I0121 07:20:09.813644 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9456679d-3a66-4b28-b43b-be72ca19d835-combined-ca-bundle\") pod \"9456679d-3a66-4b28-b43b-be72ca19d835\" (UID: \"9456679d-3a66-4b28-b43b-be72ca19d835\") "
Jan 21 07:20:09 crc kubenswrapper[4893]: I0121 07:20:09.813696 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9456679d-3a66-4b28-b43b-be72ca19d835-scripts\") pod \"9456679d-3a66-4b28-b43b-be72ca19d835\" (UID: \"9456679d-3a66-4b28-b43b-be72ca19d835\") "
Jan 21 07:20:09 crc kubenswrapper[4893]: I0121 07:20:09.819259 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9456679d-3a66-4b28-b43b-be72ca19d835-scripts" (OuterVolumeSpecName: "scripts") pod "9456679d-3a66-4b28-b43b-be72ca19d835" (UID: "9456679d-3a66-4b28-b43b-be72ca19d835"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 07:20:09 crc kubenswrapper[4893]: I0121 07:20:09.819275 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9456679d-3a66-4b28-b43b-be72ca19d835-kube-api-access-hmm46" (OuterVolumeSpecName: "kube-api-access-hmm46") pod "9456679d-3a66-4b28-b43b-be72ca19d835" (UID: "9456679d-3a66-4b28-b43b-be72ca19d835"). InnerVolumeSpecName "kube-api-access-hmm46". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 07:20:09 crc kubenswrapper[4893]: I0121 07:20:09.841864 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9456679d-3a66-4b28-b43b-be72ca19d835-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9456679d-3a66-4b28-b43b-be72ca19d835" (UID: "9456679d-3a66-4b28-b43b-be72ca19d835"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 07:20:09 crc kubenswrapper[4893]: I0121 07:20:09.842982 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9456679d-3a66-4b28-b43b-be72ca19d835-config-data" (OuterVolumeSpecName: "config-data") pod "9456679d-3a66-4b28-b43b-be72ca19d835" (UID: "9456679d-3a66-4b28-b43b-be72ca19d835"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 07:20:09 crc kubenswrapper[4893]: I0121 07:20:09.916013 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9456679d-3a66-4b28-b43b-be72ca19d835-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 07:20:09 crc kubenswrapper[4893]: I0121 07:20:09.916052 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9456679d-3a66-4b28-b43b-be72ca19d835-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 07:20:09 crc kubenswrapper[4893]: I0121 07:20:09.916064 4893 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9456679d-3a66-4b28-b43b-be72ca19d835-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 07:20:09 crc kubenswrapper[4893]: I0121 07:20:09.916078 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmm46\" (UniqueName: \"kubernetes.io/projected/9456679d-3a66-4b28-b43b-be72ca19d835-kube-api-access-hmm46\") on node \"crc\" DevicePath \"\""
Jan 21 07:20:10 crc kubenswrapper[4893]: I0121 07:20:10.254592 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-znqnp" event={"ID":"9456679d-3a66-4b28-b43b-be72ca19d835","Type":"ContainerDied","Data":"6931375af6c51301ef27d989ec61507a88019af1194f5ed2b9a5849e2582d5cf"}
Jan 21 07:20:10 crc kubenswrapper[4893]: I0121 07:20:10.254652 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6931375af6c51301ef27d989ec61507a88019af1194f5ed2b9a5849e2582d5cf"
Jan 21 07:20:10 crc kubenswrapper[4893]: I0121 07:20:10.254745 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-znqnp"
Jan 21 07:20:10 crc kubenswrapper[4893]: I0121 07:20:10.377331 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 21 07:20:10 crc kubenswrapper[4893]: E0121 07:20:10.377912 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9456679d-3a66-4b28-b43b-be72ca19d835" containerName="nova-cell0-conductor-db-sync"
Jan 21 07:20:10 crc kubenswrapper[4893]: I0121 07:20:10.377938 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="9456679d-3a66-4b28-b43b-be72ca19d835" containerName="nova-cell0-conductor-db-sync"
Jan 21 07:20:10 crc kubenswrapper[4893]: I0121 07:20:10.378187 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="9456679d-3a66-4b28-b43b-be72ca19d835" containerName="nova-cell0-conductor-db-sync"
Jan 21 07:20:10 crc kubenswrapper[4893]: I0121 07:20:10.379365 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 21 07:20:10 crc kubenswrapper[4893]: I0121 07:20:10.391307 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-cw7tb"
Jan 21 07:20:10 crc kubenswrapper[4893]: I0121 07:20:10.418753 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 21 07:20:10 crc kubenswrapper[4893]: I0121 07:20:10.392128 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Jan 21 07:20:10 crc kubenswrapper[4893]: I0121 07:20:10.532081 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bl6cq\" (UniqueName: \"kubernetes.io/projected/04e84192-2873-4f45-855d-d755d99e7946-kube-api-access-bl6cq\") pod \"nova-cell0-conductor-0\" (UID: \"04e84192-2873-4f45-855d-d755d99e7946\") " pod="openstack/nova-cell0-conductor-0"
Jan 21 07:20:10 crc kubenswrapper[4893]: I0121 07:20:10.532224 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04e84192-2873-4f45-855d-d755d99e7946-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"04e84192-2873-4f45-855d-d755d99e7946\") " pod="openstack/nova-cell0-conductor-0"
Jan 21 07:20:10 crc kubenswrapper[4893]: I0121 07:20:10.532252 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04e84192-2873-4f45-855d-d755d99e7946-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"04e84192-2873-4f45-855d-d755d99e7946\") " pod="openstack/nova-cell0-conductor-0"
Jan 21 07:20:10 crc kubenswrapper[4893]: I0121 07:20:10.633928 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04e84192-2873-4f45-855d-d755d99e7946-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"04e84192-2873-4f45-855d-d755d99e7946\") " pod="openstack/nova-cell0-conductor-0"
Jan 21 07:20:10 crc kubenswrapper[4893]: I0121 07:20:10.634292 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04e84192-2873-4f45-855d-d755d99e7946-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"04e84192-2873-4f45-855d-d755d99e7946\") " pod="openstack/nova-cell0-conductor-0"
Jan 21 07:20:10 crc kubenswrapper[4893]: I0121 07:20:10.634440 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bl6cq\" (UniqueName: \"kubernetes.io/projected/04e84192-2873-4f45-855d-d755d99e7946-kube-api-access-bl6cq\") pod \"nova-cell0-conductor-0\" (UID: \"04e84192-2873-4f45-855d-d755d99e7946\") " pod="openstack/nova-cell0-conductor-0"
Jan 21 07:20:10 crc kubenswrapper[4893]: I0121 07:20:10.639856 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04e84192-2873-4f45-855d-d755d99e7946-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"04e84192-2873-4f45-855d-d755d99e7946\") " pod="openstack/nova-cell0-conductor-0"
Jan 21 07:20:10 crc kubenswrapper[4893]: I0121 07:20:10.647629 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04e84192-2873-4f45-855d-d755d99e7946-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"04e84192-2873-4f45-855d-d755d99e7946\") " pod="openstack/nova-cell0-conductor-0"
Jan 21 07:20:10 crc kubenswrapper[4893]: I0121 07:20:10.653247 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bl6cq\" (UniqueName: \"kubernetes.io/projected/04e84192-2873-4f45-855d-d755d99e7946-kube-api-access-bl6cq\") pod \"nova-cell0-conductor-0\" (UID: \"04e84192-2873-4f45-855d-d755d99e7946\") " pod="openstack/nova-cell0-conductor-0"
Jan 21 07:20:10 crc kubenswrapper[4893]: I0121 07:20:10.776232 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 21 07:20:11 crc kubenswrapper[4893]: I0121 07:20:11.238760 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 21 07:20:11 crc kubenswrapper[4893]: I0121 07:20:11.550089 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"04e84192-2873-4f45-855d-d755d99e7946","Type":"ContainerStarted","Data":"6dd06ee1e46536661d1fe85df7ddcff590bb2c5998939704f0bbc825bb78d209"}
Jan 21 07:20:12 crc kubenswrapper[4893]: I0121 07:20:12.561718 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"04e84192-2873-4f45-855d-d755d99e7946","Type":"ContainerStarted","Data":"2c1520ddf2448545568bfad1712f1cbe491d42f3fe5bd60c6b96dce8d4a01c86"}
Jan 21 07:20:12 crc kubenswrapper[4893]: I0121 07:20:12.562022 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Jan 21 07:20:12 crc kubenswrapper[4893]: I0121 07:20:12.605297 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.605272599 podStartE2EDuration="2.605272599s" podCreationTimestamp="2026-01-21 07:20:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:20:12.593686152 +0000 UTC m=+1553.824032074" watchObservedRunningTime="2026-01-21 07:20:12.605272599 +0000 UTC m=+1553.835618511"
Jan 21 07:20:13 crc kubenswrapper[4893]: I0121 07:20:13.314095 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="b5dc41f5-7d4c-41d6-90aa-655987650d00" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Jan 21 07:20:18 crc kubenswrapper[4893]: I0121 07:20:18.428661 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 21 07:20:18 crc kubenswrapper[4893]: I0121 07:20:18.603555 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b5dc41f5-7d4c-41d6-90aa-655987650d00-sg-core-conf-yaml\") pod \"b5dc41f5-7d4c-41d6-90aa-655987650d00\" (UID: \"b5dc41f5-7d4c-41d6-90aa-655987650d00\") "
Jan 21 07:20:18 crc kubenswrapper[4893]: I0121 07:20:18.604215 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lp6qp\" (UniqueName: \"kubernetes.io/projected/b5dc41f5-7d4c-41d6-90aa-655987650d00-kube-api-access-lp6qp\") pod \"b5dc41f5-7d4c-41d6-90aa-655987650d00\" (UID: \"b5dc41f5-7d4c-41d6-90aa-655987650d00\") "
Jan 21 07:20:18 crc kubenswrapper[4893]: I0121 07:20:18.604332 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5dc41f5-7d4c-41d6-90aa-655987650d00-config-data\") pod \"b5dc41f5-7d4c-41d6-90aa-655987650d00\" (UID: \"b5dc41f5-7d4c-41d6-90aa-655987650d00\") "
Jan 21 07:20:18 crc kubenswrapper[4893]: I0121 07:20:18.604370 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b5dc41f5-7d4c-41d6-90aa-655987650d00-log-httpd\") pod \"b5dc41f5-7d4c-41d6-90aa-655987650d00\" (UID: \"b5dc41f5-7d4c-41d6-90aa-655987650d00\") "
Jan 21 07:20:18 crc kubenswrapper[4893]: I0121 07:20:18.604463 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b5dc41f5-7d4c-41d6-90aa-655987650d00-run-httpd\") pod \"b5dc41f5-7d4c-41d6-90aa-655987650d00\" (UID: \"b5dc41f5-7d4c-41d6-90aa-655987650d00\") "
Jan 21 07:20:18 crc kubenswrapper[4893]: I0121 07:20:18.604533 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5dc41f5-7d4c-41d6-90aa-655987650d00-combined-ca-bundle\") pod \"b5dc41f5-7d4c-41d6-90aa-655987650d00\" (UID: \"b5dc41f5-7d4c-41d6-90aa-655987650d00\") "
Jan 21 07:20:18 crc kubenswrapper[4893]: I0121 07:20:18.604582 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5dc41f5-7d4c-41d6-90aa-655987650d00-scripts\") pod \"b5dc41f5-7d4c-41d6-90aa-655987650d00\" (UID: \"b5dc41f5-7d4c-41d6-90aa-655987650d00\") "
Jan 21 07:20:18 crc kubenswrapper[4893]: I0121 07:20:18.605293 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5dc41f5-7d4c-41d6-90aa-655987650d00-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b5dc41f5-7d4c-41d6-90aa-655987650d00" (UID: "b5dc41f5-7d4c-41d6-90aa-655987650d00"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 07:20:18 crc kubenswrapper[4893]: I0121 07:20:18.605642 4893 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b5dc41f5-7d4c-41d6-90aa-655987650d00-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 21 07:20:18 crc kubenswrapper[4893]: I0121 07:20:18.606239 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5dc41f5-7d4c-41d6-90aa-655987650d00-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b5dc41f5-7d4c-41d6-90aa-655987650d00" (UID: "b5dc41f5-7d4c-41d6-90aa-655987650d00"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 07:20:18 crc kubenswrapper[4893]: I0121 07:20:18.610689 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5dc41f5-7d4c-41d6-90aa-655987650d00-scripts" (OuterVolumeSpecName: "scripts") pod "b5dc41f5-7d4c-41d6-90aa-655987650d00" (UID: "b5dc41f5-7d4c-41d6-90aa-655987650d00"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 07:20:18 crc kubenswrapper[4893]: I0121 07:20:18.633604 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5dc41f5-7d4c-41d6-90aa-655987650d00-kube-api-access-lp6qp" (OuterVolumeSpecName: "kube-api-access-lp6qp") pod "b5dc41f5-7d4c-41d6-90aa-655987650d00" (UID: "b5dc41f5-7d4c-41d6-90aa-655987650d00"). InnerVolumeSpecName "kube-api-access-lp6qp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 07:20:18 crc kubenswrapper[4893]: I0121 07:20:18.643597 4893 generic.go:334] "Generic (PLEG): container finished" podID="b5dc41f5-7d4c-41d6-90aa-655987650d00" containerID="94d17702c8b24174b6956b8ac0a2d516845b1fbbea6dd14589739e66b9a55ba2" exitCode=137
Jan 21 07:20:18 crc kubenswrapper[4893]: I0121 07:20:18.643690 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b5dc41f5-7d4c-41d6-90aa-655987650d00","Type":"ContainerDied","Data":"94d17702c8b24174b6956b8ac0a2d516845b1fbbea6dd14589739e66b9a55ba2"}
Jan 21 07:20:18 crc kubenswrapper[4893]: I0121 07:20:18.643735 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b5dc41f5-7d4c-41d6-90aa-655987650d00","Type":"ContainerDied","Data":"e1b4fa7cb31a92f17e5351cd443b40f25841f206dc3bf6cca16f861bc0feac6d"}
Jan 21 07:20:18 crc kubenswrapper[4893]: I0121 07:20:18.643765 4893 scope.go:117] "RemoveContainer" containerID="94d17702c8b24174b6956b8ac0a2d516845b1fbbea6dd14589739e66b9a55ba2"
Jan 21 07:20:18 crc kubenswrapper[4893]: I0121 07:20:18.644207 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 21 07:20:18 crc kubenswrapper[4893]: I0121 07:20:18.654322 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5dc41f5-7d4c-41d6-90aa-655987650d00-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b5dc41f5-7d4c-41d6-90aa-655987650d00" (UID: "b5dc41f5-7d4c-41d6-90aa-655987650d00"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 07:20:18 crc kubenswrapper[4893]: I0121 07:20:18.697359 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5dc41f5-7d4c-41d6-90aa-655987650d00-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b5dc41f5-7d4c-41d6-90aa-655987650d00" (UID: "b5dc41f5-7d4c-41d6-90aa-655987650d00"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 07:20:18 crc kubenswrapper[4893]: I0121 07:20:18.708543 4893 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b5dc41f5-7d4c-41d6-90aa-655987650d00-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 21 07:20:18 crc kubenswrapper[4893]: I0121 07:20:18.708579 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5dc41f5-7d4c-41d6-90aa-655987650d00-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 07:20:18 crc kubenswrapper[4893]: I0121 07:20:18.708591 4893 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5dc41f5-7d4c-41d6-90aa-655987650d00-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 07:20:18 crc kubenswrapper[4893]: I0121 07:20:18.708599 4893 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b5dc41f5-7d4c-41d6-90aa-655987650d00-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 21 07:20:18 crc kubenswrapper[4893]: I0121 07:20:18.708608 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lp6qp\" (UniqueName: \"kubernetes.io/projected/b5dc41f5-7d4c-41d6-90aa-655987650d00-kube-api-access-lp6qp\") on node \"crc\" DevicePath \"\""
Jan 21 07:20:18 crc kubenswrapper[4893]: I0121 07:20:18.711162 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5dc41f5-7d4c-41d6-90aa-655987650d00-config-data" (OuterVolumeSpecName: "config-data") pod "b5dc41f5-7d4c-41d6-90aa-655987650d00" (UID: "b5dc41f5-7d4c-41d6-90aa-655987650d00"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 07:20:18 crc kubenswrapper[4893]: I0121 07:20:18.747895 4893 scope.go:117] "RemoveContainer" containerID="cdcecdf46aa6835bd2e75eb2f9df5c25eead4f01499ee08a8194428ed5a9c0bf"
Jan 21 07:20:18 crc kubenswrapper[4893]: I0121 07:20:18.778057 4893 scope.go:117] "RemoveContainer" containerID="33ee6f0d17cd2b7306ce638e24d0a16b9ba6a02be6fa8d7c81bcfe48d75bea77"
Jan 21 07:20:18 crc kubenswrapper[4893]: I0121 07:20:18.802882 4893 scope.go:117] "RemoveContainer" containerID="8524ced5159a454fd7adb18e41ccb2ac2917416d2af6c4c28c2100757e4fe98a"
Jan 21 07:20:18 crc kubenswrapper[4893]: I0121 07:20:18.813917 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5dc41f5-7d4c-41d6-90aa-655987650d00-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 07:20:18 crc kubenswrapper[4893]: I0121 07:20:18.830558 4893 scope.go:117] "RemoveContainer" containerID="94d17702c8b24174b6956b8ac0a2d516845b1fbbea6dd14589739e66b9a55ba2"
Jan 21 07:20:18 crc kubenswrapper[4893]: E0121 07:20:18.831402 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94d17702c8b24174b6956b8ac0a2d516845b1fbbea6dd14589739e66b9a55ba2\": container with ID starting with 94d17702c8b24174b6956b8ac0a2d516845b1fbbea6dd14589739e66b9a55ba2 not found: ID does not exist" containerID="94d17702c8b24174b6956b8ac0a2d516845b1fbbea6dd14589739e66b9a55ba2"
Jan 21 07:20:18 crc kubenswrapper[4893]: I0121 07:20:18.831467 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94d17702c8b24174b6956b8ac0a2d516845b1fbbea6dd14589739e66b9a55ba2"} err="failed to get container status \"94d17702c8b24174b6956b8ac0a2d516845b1fbbea6dd14589739e66b9a55ba2\": rpc error: code = NotFound desc = could not find container \"94d17702c8b24174b6956b8ac0a2d516845b1fbbea6dd14589739e66b9a55ba2\": container with ID starting with 94d17702c8b24174b6956b8ac0a2d516845b1fbbea6dd14589739e66b9a55ba2 not found: ID does not exist"
Jan 21 07:20:18 crc kubenswrapper[4893]: I0121 07:20:18.831505 4893 scope.go:117] "RemoveContainer" containerID="cdcecdf46aa6835bd2e75eb2f9df5c25eead4f01499ee08a8194428ed5a9c0bf"
Jan 21 07:20:18 crc kubenswrapper[4893]: E0121 07:20:18.831974 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdcecdf46aa6835bd2e75eb2f9df5c25eead4f01499ee08a8194428ed5a9c0bf\": container with ID starting with cdcecdf46aa6835bd2e75eb2f9df5c25eead4f01499ee08a8194428ed5a9c0bf not found: ID does not exist" containerID="cdcecdf46aa6835bd2e75eb2f9df5c25eead4f01499ee08a8194428ed5a9c0bf"
Jan 21 07:20:18 crc kubenswrapper[4893]: I0121 07:20:18.832101 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdcecdf46aa6835bd2e75eb2f9df5c25eead4f01499ee08a8194428ed5a9c0bf"} err="failed to get container status \"cdcecdf46aa6835bd2e75eb2f9df5c25eead4f01499ee08a8194428ed5a9c0bf\": rpc error: code = NotFound desc = could not find container \"cdcecdf46aa6835bd2e75eb2f9df5c25eead4f01499ee08a8194428ed5a9c0bf\": container with ID starting with cdcecdf46aa6835bd2e75eb2f9df5c25eead4f01499ee08a8194428ed5a9c0bf not found: ID does not exist"
Jan 21 07:20:18 crc kubenswrapper[4893]: I0121 07:20:18.832191 4893 scope.go:117] "RemoveContainer" containerID="33ee6f0d17cd2b7306ce638e24d0a16b9ba6a02be6fa8d7c81bcfe48d75bea77"
Jan 21 07:20:18 crc kubenswrapper[4893]: E0121 07:20:18.832595 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33ee6f0d17cd2b7306ce638e24d0a16b9ba6a02be6fa8d7c81bcfe48d75bea77\": container with ID starting with 33ee6f0d17cd2b7306ce638e24d0a16b9ba6a02be6fa8d7c81bcfe48d75bea77 not found: ID does not exist" containerID="33ee6f0d17cd2b7306ce638e24d0a16b9ba6a02be6fa8d7c81bcfe48d75bea77"
Jan 21 07:20:18 crc kubenswrapper[4893]: I0121 07:20:18.832782 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33ee6f0d17cd2b7306ce638e24d0a16b9ba6a02be6fa8d7c81bcfe48d75bea77"} err="failed to get container status \"33ee6f0d17cd2b7306ce638e24d0a16b9ba6a02be6fa8d7c81bcfe48d75bea77\": rpc error: code = NotFound desc = could not find container \"33ee6f0d17cd2b7306ce638e24d0a16b9ba6a02be6fa8d7c81bcfe48d75bea77\": container with ID starting with 33ee6f0d17cd2b7306ce638e24d0a16b9ba6a02be6fa8d7c81bcfe48d75bea77 not found: ID does not exist"
Jan 21 07:20:18 crc kubenswrapper[4893]: I0121 07:20:18.832881 4893 scope.go:117] "RemoveContainer" containerID="8524ced5159a454fd7adb18e41ccb2ac2917416d2af6c4c28c2100757e4fe98a"
Jan 21 07:20:18 crc kubenswrapper[4893]: E0121 07:20:18.833207 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8524ced5159a454fd7adb18e41ccb2ac2917416d2af6c4c28c2100757e4fe98a\": container with ID starting with 8524ced5159a454fd7adb18e41ccb2ac2917416d2af6c4c28c2100757e4fe98a not found: ID does not exist" containerID="8524ced5159a454fd7adb18e41ccb2ac2917416d2af6c4c28c2100757e4fe98a"
Jan 21 07:20:18 crc kubenswrapper[4893]: I0121 07:20:18.833240 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8524ced5159a454fd7adb18e41ccb2ac2917416d2af6c4c28c2100757e4fe98a"} err="failed to get container status \"8524ced5159a454fd7adb18e41ccb2ac2917416d2af6c4c28c2100757e4fe98a\": rpc error: code = NotFound desc = could not find container \"8524ced5159a454fd7adb18e41ccb2ac2917416d2af6c4c28c2100757e4fe98a\": container with ID starting with 8524ced5159a454fd7adb18e41ccb2ac2917416d2af6c4c28c2100757e4fe98a not found: ID does not exist"
Jan 21 07:20:18 crc kubenswrapper[4893]: I0121 07:20:18.983617 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 07:20:19 crc kubenswrapper[4893]: I0121 07:20:19.007042 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 07:20:19 crc kubenswrapper[4893]: I0121 07:20:19.024979 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 21 07:20:19 crc kubenswrapper[4893]: E0121 07:20:19.025977 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5dc41f5-7d4c-41d6-90aa-655987650d00" containerName="ceilometer-central-agent"
Jan 21 07:20:19 crc kubenswrapper[4893]: I0121 07:20:19.026012 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5dc41f5-7d4c-41d6-90aa-655987650d00" containerName="ceilometer-central-agent"
Jan 21 07:20:19 crc kubenswrapper[4893]: E0121 07:20:19.026065 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5dc41f5-7d4c-41d6-90aa-655987650d00" containerName="ceilometer-notification-agent"
Jan 21 07:20:19 crc kubenswrapper[4893]: I0121 07:20:19.026079 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5dc41f5-7d4c-41d6-90aa-655987650d00" containerName="ceilometer-notification-agent"
Jan 21 07:20:19 crc kubenswrapper[4893]: E0121 07:20:19.026098 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5dc41f5-7d4c-41d6-90aa-655987650d00" containerName="sg-core"
Jan 21 07:20:19 crc kubenswrapper[4893]: I0121 07:20:19.026110 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5dc41f5-7d4c-41d6-90aa-655987650d00" containerName="sg-core"
Jan 21 07:20:19 crc kubenswrapper[4893]: E0121 07:20:19.026135 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5dc41f5-7d4c-41d6-90aa-655987650d00" containerName="proxy-httpd"
Jan 21 07:20:19 crc kubenswrapper[4893]: I0121 07:20:19.026147 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5dc41f5-7d4c-41d6-90aa-655987650d00" containerName="proxy-httpd"
Jan 21 07:20:19 crc kubenswrapper[4893]: I0121 07:20:19.026477 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5dc41f5-7d4c-41d6-90aa-655987650d00" containerName="ceilometer-notification-agent"
Jan 21 07:20:19 crc kubenswrapper[4893]: I0121 07:20:19.026507 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5dc41f5-7d4c-41d6-90aa-655987650d00" containerName="proxy-httpd"
Jan 21 07:20:19 crc kubenswrapper[4893]: I0121 07:20:19.026530 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5dc41f5-7d4c-41d6-90aa-655987650d00" containerName="ceilometer-central-agent"
Jan 21 07:20:19 crc kubenswrapper[4893]: I0121 07:20:19.026574 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5dc41f5-7d4c-41d6-90aa-655987650d00" containerName="sg-core"
Jan 21 07:20:19 crc kubenswrapper[4893]: I0121 07:20:19.030190 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 21 07:20:19 crc kubenswrapper[4893]: I0121 07:20:19.048643 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 21 07:20:19 crc kubenswrapper[4893]: I0121 07:20:19.049984 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 21 07:20:19 crc kubenswrapper[4893]: I0121 07:20:19.056943 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 07:20:19 crc kubenswrapper[4893]: I0121 07:20:19.119635 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02b7554d-6f6f-4e6b-92be-d8f58bb89bf5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"02b7554d-6f6f-4e6b-92be-d8f58bb89bf5\") " pod="openstack/ceilometer-0"
Jan 21 07:20:19 crc kubenswrapper[4893]: I0121 07:20:19.119872 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02b7554d-6f6f-4e6b-92be-d8f58bb89bf5-config-data\") pod \"ceilometer-0\" (UID: \"02b7554d-6f6f-4e6b-92be-d8f58bb89bf5\") " pod="openstack/ceilometer-0"
Jan 21 07:20:19 crc kubenswrapper[4893]: I0121 07:20:19.119904 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/02b7554d-6f6f-4e6b-92be-d8f58bb89bf5-run-httpd\") pod \"ceilometer-0\" (UID: \"02b7554d-6f6f-4e6b-92be-d8f58bb89bf5\") " pod="openstack/ceilometer-0"
Jan 21 07:20:19 crc kubenswrapper[4893]: I0121 07:20:19.119924 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njmkz\" (UniqueName: \"kubernetes.io/projected/02b7554d-6f6f-4e6b-92be-d8f58bb89bf5-kube-api-access-njmkz\") pod \"ceilometer-0\" (UID: \"02b7554d-6f6f-4e6b-92be-d8f58bb89bf5\") " pod="openstack/ceilometer-0"
Jan 21 07:20:19 crc kubenswrapper[4893]: I0121 07:20:19.120051 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/02b7554d-6f6f-4e6b-92be-d8f58bb89bf5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"02b7554d-6f6f-4e6b-92be-d8f58bb89bf5\") " pod="openstack/ceilometer-0"
Jan 21 07:20:19 crc kubenswrapper[4893]: I0121 07:20:19.120176 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/02b7554d-6f6f-4e6b-92be-d8f58bb89bf5-log-httpd\") pod \"ceilometer-0\" (UID: \"02b7554d-6f6f-4e6b-92be-d8f58bb89bf5\") " pod="openstack/ceilometer-0"
Jan 21 07:20:19 crc kubenswrapper[4893]: I0121 07:20:19.120215 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02b7554d-6f6f-4e6b-92be-d8f58bb89bf5-scripts\") pod \"ceilometer-0\" (UID: \"02b7554d-6f6f-4e6b-92be-d8f58bb89bf5\") " pod="openstack/ceilometer-0"
Jan 21 07:20:19 crc kubenswrapper[4893]: I0121 07:20:19.221618 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02b7554d-6f6f-4e6b-92be-d8f58bb89bf5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"02b7554d-6f6f-4e6b-92be-d8f58bb89bf5\") " pod="openstack/ceilometer-0"
Jan 21 07:20:19 crc kubenswrapper[4893]: I0121 07:20:19.221751 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02b7554d-6f6f-4e6b-92be-d8f58bb89bf5-config-data\") pod \"ceilometer-0\" (UID: \"02b7554d-6f6f-4e6b-92be-d8f58bb89bf5\") " pod="openstack/ceilometer-0"
Jan 21 07:20:19 crc kubenswrapper[4893]: I0121 07:20:19.221788 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/02b7554d-6f6f-4e6b-92be-d8f58bb89bf5-run-httpd\") pod \"ceilometer-0\" (UID: \"02b7554d-6f6f-4e6b-92be-d8f58bb89bf5\") " pod="openstack/ceilometer-0"
Jan 21 07:20:19 crc kubenswrapper[4893]: I0121 07:20:19.221810 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njmkz\" (UniqueName: \"kubernetes.io/projected/02b7554d-6f6f-4e6b-92be-d8f58bb89bf5-kube-api-access-njmkz\") pod \"ceilometer-0\" (UID: \"02b7554d-6f6f-4e6b-92be-d8f58bb89bf5\") " pod="openstack/ceilometer-0"
Jan 21 07:20:19 crc kubenswrapper[4893]: I0121 07:20:19.222441 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/02b7554d-6f6f-4e6b-92be-d8f58bb89bf5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"02b7554d-6f6f-4e6b-92be-d8f58bb89bf5\") " pod="openstack/ceilometer-0"
Jan 21 07:20:19 crc kubenswrapper[4893]: I0121 07:20:19.222561 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/02b7554d-6f6f-4e6b-92be-d8f58bb89bf5-log-httpd\") pod \"ceilometer-0\" (UID: \"02b7554d-6f6f-4e6b-92be-d8f58bb89bf5\") " pod="openstack/ceilometer-0"
Jan 21 07:20:19 crc kubenswrapper[4893]: I0121 07:20:19.222599 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02b7554d-6f6f-4e6b-92be-d8f58bb89bf5-scripts\") pod \"ceilometer-0\" (UID: \"02b7554d-6f6f-4e6b-92be-d8f58bb89bf5\") " pod="openstack/ceilometer-0"
Jan 21 07:20:19 crc kubenswrapper[4893]: I0121 07:20:19.222344 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/02b7554d-6f6f-4e6b-92be-d8f58bb89bf5-run-httpd\") pod \"ceilometer-0\" (UID: \"02b7554d-6f6f-4e6b-92be-d8f58bb89bf5\") " pod="openstack/ceilometer-0"
Jan 21 07:20:19 crc kubenswrapper[4893]: I0121 07:20:19.223518 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/02b7554d-6f6f-4e6b-92be-d8f58bb89bf5-log-httpd\") pod \"ceilometer-0\" (UID: \"02b7554d-6f6f-4e6b-92be-d8f58bb89bf5\") " pod="openstack/ceilometer-0"
Jan 21 07:20:19 crc kubenswrapper[4893]: I0121 07:20:19.226369 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02b7554d-6f6f-4e6b-92be-d8f58bb89bf5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"02b7554d-6f6f-4e6b-92be-d8f58bb89bf5\") " pod="openstack/ceilometer-0"
Jan 21 07:20:19 crc kubenswrapper[4893]: I0121 07:20:19.227092 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02b7554d-6f6f-4e6b-92be-d8f58bb89bf5-scripts\") pod \"ceilometer-0\" (UID: \"02b7554d-6f6f-4e6b-92be-d8f58bb89bf5\") " pod="openstack/ceilometer-0"
Jan 21 07:20:19 crc kubenswrapper[4893]: I0121 07:20:19.227527 4893 operation_generator.go:637] "MountVolume.SetUp succeeded
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02b7554d-6f6f-4e6b-92be-d8f58bb89bf5-config-data\") pod \"ceilometer-0\" (UID: \"02b7554d-6f6f-4e6b-92be-d8f58bb89bf5\") " pod="openstack/ceilometer-0" Jan 21 07:20:19 crc kubenswrapper[4893]: I0121 07:20:19.228000 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/02b7554d-6f6f-4e6b-92be-d8f58bb89bf5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"02b7554d-6f6f-4e6b-92be-d8f58bb89bf5\") " pod="openstack/ceilometer-0" Jan 21 07:20:19 crc kubenswrapper[4893]: I0121 07:20:19.240466 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njmkz\" (UniqueName: \"kubernetes.io/projected/02b7554d-6f6f-4e6b-92be-d8f58bb89bf5-kube-api-access-njmkz\") pod \"ceilometer-0\" (UID: \"02b7554d-6f6f-4e6b-92be-d8f58bb89bf5\") " pod="openstack/ceilometer-0" Jan 21 07:20:19 crc kubenswrapper[4893]: I0121 07:20:19.364549 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 07:20:19 crc kubenswrapper[4893]: I0121 07:20:19.617587 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5dc41f5-7d4c-41d6-90aa-655987650d00" path="/var/lib/kubelet/pods/b5dc41f5-7d4c-41d6-90aa-655987650d00/volumes" Jan 21 07:20:19 crc kubenswrapper[4893]: I0121 07:20:19.968350 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 07:20:19 crc kubenswrapper[4893]: W0121 07:20:19.975547 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02b7554d_6f6f_4e6b_92be_d8f58bb89bf5.slice/crio-8d3380d14220b4bfd434222ee79f0f8aaf29a7772d01dc02a7254c0a88d1f8c9 WatchSource:0}: Error finding container 8d3380d14220b4bfd434222ee79f0f8aaf29a7772d01dc02a7254c0a88d1f8c9: Status 404 returned error can't find the container with id 8d3380d14220b4bfd434222ee79f0f8aaf29a7772d01dc02a7254c0a88d1f8c9 Jan 21 07:20:19 crc kubenswrapper[4893]: I0121 07:20:19.979550 4893 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 07:20:20 crc kubenswrapper[4893]: I0121 07:20:20.673635 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"02b7554d-6f6f-4e6b-92be-d8f58bb89bf5","Type":"ContainerStarted","Data":"8d3380d14220b4bfd434222ee79f0f8aaf29a7772d01dc02a7254c0a88d1f8c9"} Jan 21 07:20:20 crc kubenswrapper[4893]: I0121 07:20:20.824626 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 21 07:20:21 crc kubenswrapper[4893]: I0121 07:20:21.326080 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-zsq5x"] Jan 21 07:20:21 crc kubenswrapper[4893]: I0121 07:20:21.327330 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-zsq5x" Jan 21 07:20:21 crc kubenswrapper[4893]: I0121 07:20:21.330026 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 21 07:20:21 crc kubenswrapper[4893]: I0121 07:20:21.330317 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 21 07:20:21 crc kubenswrapper[4893]: I0121 07:20:21.360750 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-zsq5x"] Jan 21 07:20:21 crc kubenswrapper[4893]: I0121 07:20:21.405979 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dab58e44-b25e-4390-b604-ea1e17365c8e-scripts\") pod \"nova-cell0-cell-mapping-zsq5x\" (UID: \"dab58e44-b25e-4390-b604-ea1e17365c8e\") " pod="openstack/nova-cell0-cell-mapping-zsq5x" Jan 21 07:20:21 crc kubenswrapper[4893]: I0121 07:20:21.406249 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dab58e44-b25e-4390-b604-ea1e17365c8e-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-zsq5x\" (UID: \"dab58e44-b25e-4390-b604-ea1e17365c8e\") " pod="openstack/nova-cell0-cell-mapping-zsq5x" Jan 21 07:20:21 crc kubenswrapper[4893]: I0121 07:20:21.406358 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dab58e44-b25e-4390-b604-ea1e17365c8e-config-data\") pod \"nova-cell0-cell-mapping-zsq5x\" (UID: \"dab58e44-b25e-4390-b604-ea1e17365c8e\") " pod="openstack/nova-cell0-cell-mapping-zsq5x" Jan 21 07:20:21 crc kubenswrapper[4893]: I0121 07:20:21.406409 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5vsm\" (UniqueName: \"kubernetes.io/projected/dab58e44-b25e-4390-b604-ea1e17365c8e-kube-api-access-c5vsm\") pod \"nova-cell0-cell-mapping-zsq5x\" (UID: \"dab58e44-b25e-4390-b604-ea1e17365c8e\") " pod="openstack/nova-cell0-cell-mapping-zsq5x" Jan 21 07:20:21 crc kubenswrapper[4893]: I0121 07:20:21.631503 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dab58e44-b25e-4390-b604-ea1e17365c8e-config-data\") pod \"nova-cell0-cell-mapping-zsq5x\" (UID: \"dab58e44-b25e-4390-b604-ea1e17365c8e\") " pod="openstack/nova-cell0-cell-mapping-zsq5x" Jan 21 07:20:21 crc kubenswrapper[4893]: I0121 07:20:21.631941 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5vsm\" (UniqueName: \"kubernetes.io/projected/dab58e44-b25e-4390-b604-ea1e17365c8e-kube-api-access-c5vsm\") pod \"nova-cell0-cell-mapping-zsq5x\" (UID: \"dab58e44-b25e-4390-b604-ea1e17365c8e\") " pod="openstack/nova-cell0-cell-mapping-zsq5x" Jan 21 07:20:21 crc kubenswrapper[4893]: I0121 07:20:21.632438 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dab58e44-b25e-4390-b604-ea1e17365c8e-scripts\") pod \"nova-cell0-cell-mapping-zsq5x\" (UID: \"dab58e44-b25e-4390-b604-ea1e17365c8e\") " pod="openstack/nova-cell0-cell-mapping-zsq5x" Jan 21 07:20:21 crc kubenswrapper[4893]: I0121 07:20:21.633995 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/dab58e44-b25e-4390-b604-ea1e17365c8e-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-zsq5x\" (UID: \"dab58e44-b25e-4390-b604-ea1e17365c8e\") " pod="openstack/nova-cell0-cell-mapping-zsq5x" Jan 21 07:20:21 crc kubenswrapper[4893]: I0121 07:20:21.641688 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dab58e44-b25e-4390-b604-ea1e17365c8e-config-data\") pod \"nova-cell0-cell-mapping-zsq5x\" (UID: \"dab58e44-b25e-4390-b604-ea1e17365c8e\") " pod="openstack/nova-cell0-cell-mapping-zsq5x" Jan 21 07:20:21 crc kubenswrapper[4893]: I0121 07:20:21.656317 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dab58e44-b25e-4390-b604-ea1e17365c8e-scripts\") pod \"nova-cell0-cell-mapping-zsq5x\" (UID: \"dab58e44-b25e-4390-b604-ea1e17365c8e\") " pod="openstack/nova-cell0-cell-mapping-zsq5x" Jan 21 07:20:21 crc kubenswrapper[4893]: I0121 07:20:21.692708 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dab58e44-b25e-4390-b604-ea1e17365c8e-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-zsq5x\" (UID: \"dab58e44-b25e-4390-b604-ea1e17365c8e\") " pod="openstack/nova-cell0-cell-mapping-zsq5x" Jan 21 07:20:21 crc kubenswrapper[4893]: I0121 07:20:21.708767 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 07:20:21 crc kubenswrapper[4893]: I0121 07:20:21.710276 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 07:20:21 crc kubenswrapper[4893]: I0121 07:20:21.715170 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 07:20:21 crc kubenswrapper[4893]: I0121 07:20:21.715932 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"02b7554d-6f6f-4e6b-92be-d8f58bb89bf5","Type":"ContainerStarted","Data":"7ca76cd9dcf5b52102bdd9bcce2f12eb180b64fc1aa4e74f8334df4abba8fac0"} Jan 21 07:20:21 crc kubenswrapper[4893]: I0121 07:20:21.716068 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5vsm\" (UniqueName: \"kubernetes.io/projected/dab58e44-b25e-4390-b604-ea1e17365c8e-kube-api-access-c5vsm\") pod \"nova-cell0-cell-mapping-zsq5x\" (UID: \"dab58e44-b25e-4390-b604-ea1e17365c8e\") " pod="openstack/nova-cell0-cell-mapping-zsq5x" Jan 21 07:20:21 crc kubenswrapper[4893]: I0121 07:20:21.716785 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-zsq5x" Jan 21 07:20:21 crc kubenswrapper[4893]: I0121 07:20:21.727596 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 07:20:21 crc kubenswrapper[4893]: I0121 07:20:21.842162 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69509f4c-9c48-4b10-8174-52ebccf0c04e-logs\") pod \"nova-api-0\" (UID: \"69509f4c-9c48-4b10-8174-52ebccf0c04e\") " pod="openstack/nova-api-0" Jan 21 07:20:21 crc kubenswrapper[4893]: I0121 07:20:21.842306 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69509f4c-9c48-4b10-8174-52ebccf0c04e-config-data\") pod \"nova-api-0\" (UID: \"69509f4c-9c48-4b10-8174-52ebccf0c04e\") " pod="openstack/nova-api-0" Jan 21 07:20:21 crc kubenswrapper[4893]: I0121 07:20:21.842416 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69509f4c-9c48-4b10-8174-52ebccf0c04e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"69509f4c-9c48-4b10-8174-52ebccf0c04e\") " pod="openstack/nova-api-0" Jan 21 07:20:21 crc kubenswrapper[4893]: I0121 07:20:21.842459 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cv9v\" (UniqueName: \"kubernetes.io/projected/69509f4c-9c48-4b10-8174-52ebccf0c04e-kube-api-access-8cv9v\") pod \"nova-api-0\" (UID: \"69509f4c-9c48-4b10-8174-52ebccf0c04e\") " pod="openstack/nova-api-0" Jan 21 07:20:21 crc kubenswrapper[4893]: I0121 07:20:21.945116 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69509f4c-9c48-4b10-8174-52ebccf0c04e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"69509f4c-9c48-4b10-8174-52ebccf0c04e\") " pod="openstack/nova-api-0" Jan 21 07:20:21 crc kubenswrapper[4893]: I0121 07:20:21.945158 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cv9v\" (UniqueName: \"kubernetes.io/projected/69509f4c-9c48-4b10-8174-52ebccf0c04e-kube-api-access-8cv9v\") pod \"nova-api-0\" (UID: \"69509f4c-9c48-4b10-8174-52ebccf0c04e\") " pod="openstack/nova-api-0" Jan 21 07:20:21 crc kubenswrapper[4893]: I0121 07:20:21.945249 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69509f4c-9c48-4b10-8174-52ebccf0c04e-logs\") pod \"nova-api-0\" (UID: \"69509f4c-9c48-4b10-8174-52ebccf0c04e\") " pod="openstack/nova-api-0" Jan 21 07:20:21 crc kubenswrapper[4893]: I0121 07:20:21.945312 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69509f4c-9c48-4b10-8174-52ebccf0c04e-config-data\") pod \"nova-api-0\" (UID: \"69509f4c-9c48-4b10-8174-52ebccf0c04e\") " pod="openstack/nova-api-0" Jan 21 07:20:21 crc kubenswrapper[4893]: I0121 07:20:21.947316 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69509f4c-9c48-4b10-8174-52ebccf0c04e-logs\") pod \"nova-api-0\" (UID: \"69509f4c-9c48-4b10-8174-52ebccf0c04e\") " pod="openstack/nova-api-0" Jan 21 07:20:21 crc kubenswrapper[4893]: I0121 07:20:21.950381 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69509f4c-9c48-4b10-8174-52ebccf0c04e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"69509f4c-9c48-4b10-8174-52ebccf0c04e\") " pod="openstack/nova-api-0" Jan 21 07:20:21 crc kubenswrapper[4893]: I0121 07:20:21.952878 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69509f4c-9c48-4b10-8174-52ebccf0c04e-config-data\") pod \"nova-api-0\" (UID: \"69509f4c-9c48-4b10-8174-52ebccf0c04e\") " pod="openstack/nova-api-0" Jan 21 07:20:21 crc kubenswrapper[4893]: I0121 07:20:21.967837 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cv9v\" (UniqueName: \"kubernetes.io/projected/69509f4c-9c48-4b10-8174-52ebccf0c04e-kube-api-access-8cv9v\") pod \"nova-api-0\" (UID: \"69509f4c-9c48-4b10-8174-52ebccf0c04e\") " pod="openstack/nova-api-0" Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.098166 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.249158 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-zsq5x"] Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.461532 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.463764 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.475415 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.495186 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.580169 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.580470 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/772fcdb1-b758-4eb6-be00-7ee690b9badf-logs\") pod \"nova-metadata-0\" (UID: \"772fcdb1-b758-4eb6-be00-7ee690b9badf\") " pod="openstack/nova-metadata-0" Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.580534 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/772fcdb1-b758-4eb6-be00-7ee690b9badf-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"772fcdb1-b758-4eb6-be00-7ee690b9badf\") " pod="openstack/nova-metadata-0" Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.580601 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/772fcdb1-b758-4eb6-be00-7ee690b9badf-config-data\") pod \"nova-metadata-0\" (UID: \"772fcdb1-b758-4eb6-be00-7ee690b9badf\") " pod="openstack/nova-metadata-0" Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.580866 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjnd8\" (UniqueName: \"kubernetes.io/projected/772fcdb1-b758-4eb6-be00-7ee690b9badf-kube-api-access-qjnd8\") pod \"nova-metadata-0\" (UID: \"772fcdb1-b758-4eb6-be00-7ee690b9badf\") " 
pod="openstack/nova-metadata-0" Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.591302 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.598076 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.605482 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-58977"] Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.607329 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-647df7b8c5-58977" Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.621747 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.683048 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/772fcdb1-b758-4eb6-be00-7ee690b9badf-logs\") pod \"nova-metadata-0\" (UID: \"772fcdb1-b758-4eb6-be00-7ee690b9badf\") " pod="openstack/nova-metadata-0" Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.683131 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/772fcdb1-b758-4eb6-be00-7ee690b9badf-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"772fcdb1-b758-4eb6-be00-7ee690b9badf\") " pod="openstack/nova-metadata-0" Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.683228 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/772fcdb1-b758-4eb6-be00-7ee690b9badf-config-data\") pod \"nova-metadata-0\" (UID: \"772fcdb1-b758-4eb6-be00-7ee690b9badf\") " pod="openstack/nova-metadata-0" Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.683269 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk96h\" (UniqueName: \"kubernetes.io/projected/c3e5429e-adae-4013-ad4e-a6d64b6fb32a-kube-api-access-mk96h\") pod \"nova-cell1-novncproxy-0\" (UID: \"c3e5429e-adae-4013-ad4e-a6d64b6fb32a\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.683332 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3e5429e-adae-4013-ad4e-a6d64b6fb32a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c3e5429e-adae-4013-ad4e-a6d64b6fb32a\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.683373 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjnd8\" (UniqueName: \"kubernetes.io/projected/772fcdb1-b758-4eb6-be00-7ee690b9badf-kube-api-access-qjnd8\") pod \"nova-metadata-0\" (UID: \"772fcdb1-b758-4eb6-be00-7ee690b9badf\") " pod="openstack/nova-metadata-0" Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.683467 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3e5429e-adae-4013-ad4e-a6d64b6fb32a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c3e5429e-adae-4013-ad4e-a6d64b6fb32a\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 07:20:22 
crc kubenswrapper[4893]: I0121 07:20:22.684226 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/772fcdb1-b758-4eb6-be00-7ee690b9badf-logs\") pod \"nova-metadata-0\" (UID: \"772fcdb1-b758-4eb6-be00-7ee690b9badf\") " pod="openstack/nova-metadata-0" Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.692378 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/772fcdb1-b758-4eb6-be00-7ee690b9badf-config-data\") pod \"nova-metadata-0\" (UID: \"772fcdb1-b758-4eb6-be00-7ee690b9badf\") " pod="openstack/nova-metadata-0" Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.692452 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-58977"] Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.701131 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/772fcdb1-b758-4eb6-be00-7ee690b9badf-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"772fcdb1-b758-4eb6-be00-7ee690b9badf\") " pod="openstack/nova-metadata-0" Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.717877 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.719282 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.720377 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjnd8\" (UniqueName: \"kubernetes.io/projected/772fcdb1-b758-4eb6-be00-7ee690b9badf-kube-api-access-qjnd8\") pod \"nova-metadata-0\" (UID: \"772fcdb1-b758-4eb6-be00-7ee690b9badf\") " pod="openstack/nova-metadata-0" Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.747366 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.790262 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mk96h\" (UniqueName: \"kubernetes.io/projected/c3e5429e-adae-4013-ad4e-a6d64b6fb32a-kube-api-access-mk96h\") pod \"nova-cell1-novncproxy-0\" (UID: \"c3e5429e-adae-4013-ad4e-a6d64b6fb32a\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.790338 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3e5429e-adae-4013-ad4e-a6d64b6fb32a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c3e5429e-adae-4013-ad4e-a6d64b6fb32a\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.790367 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a17bf972-087d-4a0b-8ee1-63b4606f243e-ovsdbserver-nb\") pod \"dnsmasq-dns-647df7b8c5-58977\" (UID: \"a17bf972-087d-4a0b-8ee1-63b4606f243e\") " pod="openstack/dnsmasq-dns-647df7b8c5-58977" Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.790392 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wndxg\" (UniqueName: \"kubernetes.io/projected/a17bf972-087d-4a0b-8ee1-63b4606f243e-kube-api-access-wndxg\") pod \"dnsmasq-dns-647df7b8c5-58977\" (UID: 
\"a17bf972-087d-4a0b-8ee1-63b4606f243e\") " pod="openstack/dnsmasq-dns-647df7b8c5-58977" Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.790471 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a17bf972-087d-4a0b-8ee1-63b4606f243e-ovsdbserver-sb\") pod \"dnsmasq-dns-647df7b8c5-58977\" (UID: \"a17bf972-087d-4a0b-8ee1-63b4606f243e\") " pod="openstack/dnsmasq-dns-647df7b8c5-58977" Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.790505 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3e5429e-adae-4013-ad4e-a6d64b6fb32a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c3e5429e-adae-4013-ad4e-a6d64b6fb32a\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.790535 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a17bf972-087d-4a0b-8ee1-63b4606f243e-dns-svc\") pod \"dnsmasq-dns-647df7b8c5-58977\" (UID: \"a17bf972-087d-4a0b-8ee1-63b4606f243e\") " pod="openstack/dnsmasq-dns-647df7b8c5-58977" Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.790578 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a17bf972-087d-4a0b-8ee1-63b4606f243e-config\") pod \"dnsmasq-dns-647df7b8c5-58977\" (UID: \"a17bf972-087d-4a0b-8ee1-63b4606f243e\") " pod="openstack/dnsmasq-dns-647df7b8c5-58977" Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.790605 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a17bf972-087d-4a0b-8ee1-63b4606f243e-dns-swift-storage-0\") pod \"dnsmasq-dns-647df7b8c5-58977\" (UID: \"a17bf972-087d-4a0b-8ee1-63b4606f243e\") " pod="openstack/dnsmasq-dns-647df7b8c5-58977" Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.791983 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.812742 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.822968 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"02b7554d-6f6f-4e6b-92be-d8f58bb89bf5","Type":"ContainerStarted","Data":"c80ceecd5be292c993bfb07ebd8a8048bf4f56ebcc53ec79b738928f846456bd"} Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.856831 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-zsq5x" event={"ID":"dab58e44-b25e-4390-b604-ea1e17365c8e","Type":"ContainerStarted","Data":"61667bbb362d85d042921f2fba16f8d3eb51e936d4bdca20e4e5f49167f97307"} Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.895914 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a17bf972-087d-4a0b-8ee1-63b4606f243e-config\") pod \"dnsmasq-dns-647df7b8c5-58977\" (UID: \"a17bf972-087d-4a0b-8ee1-63b4606f243e\") " pod="openstack/dnsmasq-dns-647df7b8c5-58977" Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.895974 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a17bf972-087d-4a0b-8ee1-63b4606f243e-dns-swift-storage-0\") pod \"dnsmasq-dns-647df7b8c5-58977\" (UID: \"a17bf972-087d-4a0b-8ee1-63b4606f243e\") " pod="openstack/dnsmasq-dns-647df7b8c5-58977" Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.896026 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e358cba-9320-4b96-ab77-fe45c0e51a35-config-data\") pod \"nova-scheduler-0\" (UID: \"7e358cba-9320-4b96-ab77-fe45c0e51a35\") " pod="openstack/nova-scheduler-0" Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.896047 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e358cba-9320-4b96-ab77-fe45c0e51a35-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7e358cba-9320-4b96-ab77-fe45c0e51a35\") " pod="openstack/nova-scheduler-0" Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.896107 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a17bf972-087d-4a0b-8ee1-63b4606f243e-ovsdbserver-nb\") pod \"dnsmasq-dns-647df7b8c5-58977\" (UID: \"a17bf972-087d-4a0b-8ee1-63b4606f243e\") " pod="openstack/dnsmasq-dns-647df7b8c5-58977" Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.896126 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wndxg\" (UniqueName: \"kubernetes.io/projected/a17bf972-087d-4a0b-8ee1-63b4606f243e-kube-api-access-wndxg\") pod \"dnsmasq-dns-647df7b8c5-58977\" (UID: \"a17bf972-087d-4a0b-8ee1-63b4606f243e\") " pod="openstack/dnsmasq-dns-647df7b8c5-58977" Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.896179 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a17bf972-087d-4a0b-8ee1-63b4606f243e-ovsdbserver-sb\") pod \"dnsmasq-dns-647df7b8c5-58977\" (UID: \"a17bf972-087d-4a0b-8ee1-63b4606f243e\") " pod="openstack/dnsmasq-dns-647df7b8c5-58977" Jan 21 
Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.896240 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxjh8\" (UniqueName: \"kubernetes.io/projected/7e358cba-9320-4b96-ab77-fe45c0e51a35-kube-api-access-mxjh8\") pod \"nova-scheduler-0\" (UID: \"7e358cba-9320-4b96-ab77-fe45c0e51a35\") " pod="openstack/nova-scheduler-0"
Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.896269 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a17bf972-087d-4a0b-8ee1-63b4606f243e-dns-svc\") pod \"dnsmasq-dns-647df7b8c5-58977\" (UID: \"a17bf972-087d-4a0b-8ee1-63b4606f243e\") " pod="openstack/dnsmasq-dns-647df7b8c5-58977"
Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.896687 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.897336 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a17bf972-087d-4a0b-8ee1-63b4606f243e-dns-svc\") pod \"dnsmasq-dns-647df7b8c5-58977\" (UID: \"a17bf972-087d-4a0b-8ee1-63b4606f243e\") " pod="openstack/dnsmasq-dns-647df7b8c5-58977"
Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.898707 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a17bf972-087d-4a0b-8ee1-63b4606f243e-ovsdbserver-nb\") pod \"dnsmasq-dns-647df7b8c5-58977\" (UID: \"a17bf972-087d-4a0b-8ee1-63b4606f243e\") " pod="openstack/dnsmasq-dns-647df7b8c5-58977"
Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.899877 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a17bf972-087d-4a0b-8ee1-63b4606f243e-ovsdbserver-sb\") pod \"dnsmasq-dns-647df7b8c5-58977\" (UID: \"a17bf972-087d-4a0b-8ee1-63b4606f243e\") " pod="openstack/dnsmasq-dns-647df7b8c5-58977"
Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.900590 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a17bf972-087d-4a0b-8ee1-63b4606f243e-dns-swift-storage-0\") pod \"dnsmasq-dns-647df7b8c5-58977\" (UID: \"a17bf972-087d-4a0b-8ee1-63b4606f243e\") " pod="openstack/dnsmasq-dns-647df7b8c5-58977"
Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.901752 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a17bf972-087d-4a0b-8ee1-63b4606f243e-config\") pod \"dnsmasq-dns-647df7b8c5-58977\" (UID: \"a17bf972-087d-4a0b-8ee1-63b4606f243e\") " pod="openstack/dnsmasq-dns-647df7b8c5-58977"
Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.964468 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3e5429e-adae-4013-ad4e-a6d64b6fb32a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c3e5429e-adae-4013-ad4e-a6d64b6fb32a\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.965515 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3e5429e-adae-4013-ad4e-a6d64b6fb32a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c3e5429e-adae-4013-ad4e-a6d64b6fb32a\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.967126 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mk96h\" (UniqueName: \"kubernetes.io/projected/c3e5429e-adae-4013-ad4e-a6d64b6fb32a-kube-api-access-mk96h\") pod \"nova-cell1-novncproxy-0\" (UID: \"c3e5429e-adae-4013-ad4e-a6d64b6fb32a\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.971053 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wndxg\" (UniqueName: \"kubernetes.io/projected/a17bf972-087d-4a0b-8ee1-63b4606f243e-kube-api-access-wndxg\") pod \"dnsmasq-dns-647df7b8c5-58977\" (UID: \"a17bf972-087d-4a0b-8ee1-63b4606f243e\") " pod="openstack/dnsmasq-dns-647df7b8c5-58977"
Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.996483 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.997802 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxjh8\" (UniqueName: \"kubernetes.io/projected/7e358cba-9320-4b96-ab77-fe45c0e51a35-kube-api-access-mxjh8\") pod \"nova-scheduler-0\" (UID: \"7e358cba-9320-4b96-ab77-fe45c0e51a35\") " pod="openstack/nova-scheduler-0"
Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.997980 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e358cba-9320-4b96-ab77-fe45c0e51a35-config-data\") pod \"nova-scheduler-0\" (UID: \"7e358cba-9320-4b96-ab77-fe45c0e51a35\") " pod="openstack/nova-scheduler-0"
Jan 21 07:20:22 crc kubenswrapper[4893]: I0121 07:20:22.998014 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e358cba-9320-4b96-ab77-fe45c0e51a35-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7e358cba-9320-4b96-ab77-fe45c0e51a35\") " pod="openstack/nova-scheduler-0"
Jan 21 07:20:23 crc kubenswrapper[4893]: I0121 07:20:23.006917 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-647df7b8c5-58977"
Jan 21 07:20:23 crc kubenswrapper[4893]: I0121 07:20:23.008302 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e358cba-9320-4b96-ab77-fe45c0e51a35-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7e358cba-9320-4b96-ab77-fe45c0e51a35\") " pod="openstack/nova-scheduler-0"
Jan 21 07:20:23 crc kubenswrapper[4893]: I0121 07:20:23.020244 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e358cba-9320-4b96-ab77-fe45c0e51a35-config-data\") pod \"nova-scheduler-0\" (UID: \"7e358cba-9320-4b96-ab77-fe45c0e51a35\") " pod="openstack/nova-scheduler-0"
Jan 21 07:20:23 crc kubenswrapper[4893]: I0121 07:20:23.020347 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxjh8\" (UniqueName: \"kubernetes.io/projected/7e358cba-9320-4b96-ab77-fe45c0e51a35-kube-api-access-mxjh8\") pod \"nova-scheduler-0\" (UID: \"7e358cba-9320-4b96-ab77-fe45c0e51a35\") " pod="openstack/nova-scheduler-0"
Jan 21 07:20:23 crc kubenswrapper[4893]: I0121 07:20:23.109325 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 21 07:20:23 crc kubenswrapper[4893]: I0121 07:20:23.576133 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 21 07:20:23 crc kubenswrapper[4893]: I0121 07:20:23.711516 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 21 07:20:23 crc kubenswrapper[4893]: W0121 07:20:23.747721 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod772fcdb1_b758_4eb6_be00_7ee690b9badf.slice/crio-ec3be9fce80c4567c3be878cd09351109744c6ee3ee8c08ae9579f58cb0943d6 WatchSource:0}: Error finding container ec3be9fce80c4567c3be878cd09351109744c6ee3ee8c08ae9579f58cb0943d6: Status 404 returned error can't find the container with id ec3be9fce80c4567c3be878cd09351109744c6ee3ee8c08ae9579f58cb0943d6
Jan 21 07:20:23 crc kubenswrapper[4893]: I0121 07:20:23.753067 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-mzrmf"]
Jan 21 07:20:23 crc kubenswrapper[4893]: I0121 07:20:23.755646 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-mzrmf"
Jan 21 07:20:23 crc kubenswrapper[4893]: I0121 07:20:23.764059 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts"
Jan 21 07:20:23 crc kubenswrapper[4893]: I0121 07:20:23.764202 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Jan 21 07:20:23 crc kubenswrapper[4893]: I0121 07:20:23.820746 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-mzrmf"]
Jan 21 07:20:23 crc kubenswrapper[4893]: I0121 07:20:23.917247 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-58977"]
Jan 21 07:20:23 crc kubenswrapper[4893]: I0121 07:20:23.921637 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-zsq5x" event={"ID":"dab58e44-b25e-4390-b604-ea1e17365c8e","Type":"ContainerStarted","Data":"a5f5e6d134a5ef20549d287142a86ef1c8bc06298be5d15521ca276897db55d7"}
Jan 21 07:20:23 crc kubenswrapper[4893]: I0121 07:20:23.924432 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"772fcdb1-b758-4eb6-be00-7ee690b9badf","Type":"ContainerStarted","Data":"ec3be9fce80c4567c3be878cd09351109744c6ee3ee8c08ae9579f58cb0943d6"}
Jan 21 07:20:23 crc kubenswrapper[4893]: I0121 07:20:23.926689 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-58977" event={"ID":"a17bf972-087d-4a0b-8ee1-63b4606f243e","Type":"ContainerStarted","Data":"9bac1963781f1104c5418be86ac85476e980e02770b6221ace3813315c079109"}
Jan 21 07:20:23 crc kubenswrapper[4893]: I0121 07:20:23.927665 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c3e5429e-adae-4013-ad4e-a6d64b6fb32a","Type":"ContainerStarted","Data":"669a9261833b3ebc5ea1d8e2b878c707a022af68641049a731290b060cff231e"}
Jan 21 07:20:23 crc kubenswrapper[4893]: I0121 07:20:23.928589 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"69509f4c-9c48-4b10-8174-52ebccf0c04e","Type":"ContainerStarted","Data":"b70d96a03a44a9b96262c36cd3886e588144fb3a29b9e38339f5f3fb954ecdc6"}
Jan 21 07:20:23 crc kubenswrapper[4893]: I0121 07:20:23.930206 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"02b7554d-6f6f-4e6b-92be-d8f58bb89bf5","Type":"ContainerStarted","Data":"5348ec6f0c6049206f5c6cb9e887be519d0441c83930b968f565f4749c21830e"}
Jan 21 07:20:23 crc kubenswrapper[4893]: I0121 07:20:23.941901 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68d23b48-e5b2-4154-87a6-1fef70653056-config-data\") pod \"nova-cell1-conductor-db-sync-mzrmf\" (UID: \"68d23b48-e5b2-4154-87a6-1fef70653056\") " pod="openstack/nova-cell1-conductor-db-sync-mzrmf"
Jan 21 07:20:23 crc kubenswrapper[4893]: I0121 07:20:23.942057 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68d23b48-e5b2-4154-87a6-1fef70653056-scripts\") pod \"nova-cell1-conductor-db-sync-mzrmf\" (UID: \"68d23b48-e5b2-4154-87a6-1fef70653056\") " pod="openstack/nova-cell1-conductor-db-sync-mzrmf"
Jan 21 07:20:23 crc kubenswrapper[4893]: I0121 07:20:23.942105 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68d23b48-e5b2-4154-87a6-1fef70653056-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-mzrmf\" (UID: \"68d23b48-e5b2-4154-87a6-1fef70653056\") " pod="openstack/nova-cell1-conductor-db-sync-mzrmf"
Jan 21 07:20:23 crc kubenswrapper[4893]: I0121 07:20:23.942152 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7892\" (UniqueName: \"kubernetes.io/projected/68d23b48-e5b2-4154-87a6-1fef70653056-kube-api-access-t7892\") pod \"nova-cell1-conductor-db-sync-mzrmf\" (UID: \"68d23b48-e5b2-4154-87a6-1fef70653056\") " pod="openstack/nova-cell1-conductor-db-sync-mzrmf"
Jan 21 07:20:23 crc kubenswrapper[4893]: I0121 07:20:23.946037 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-zsq5x" podStartSLOduration=2.946016667 podStartE2EDuration="2.946016667s" podCreationTimestamp="2026-01-21 07:20:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:20:23.942429593 +0000 UTC m=+1565.172775495" watchObservedRunningTime="2026-01-21 07:20:23.946016667 +0000 UTC m=+1565.176362569"
Jan 21 07:20:24 crc kubenswrapper[4893]: I0121 07:20:24.013157 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 21 07:20:24 crc kubenswrapper[4893]: W0121 07:20:24.020136 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e358cba_9320_4b96_ab77_fe45c0e51a35.slice/crio-4da44e30ded47b4c08a12c74eac476faf38bdb5d160e44820c38a41ac4c6f42d WatchSource:0}: Error finding container 4da44e30ded47b4c08a12c74eac476faf38bdb5d160e44820c38a41ac4c6f42d: Status 404 returned error can't find the container with id 4da44e30ded47b4c08a12c74eac476faf38bdb5d160e44820c38a41ac4c6f42d
Jan 21 07:20:24 crc kubenswrapper[4893]: I0121 07:20:24.053413 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68d23b48-e5b2-4154-87a6-1fef70653056-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-mzrmf\" (UID: \"68d23b48-e5b2-4154-87a6-1fef70653056\") " pod="openstack/nova-cell1-conductor-db-sync-mzrmf"
pod="openstack/nova-cell1-conductor-db-sync-mzrmf" Jan 21 07:20:24 crc kubenswrapper[4893]: I0121 07:20:24.053885 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7892\" (UniqueName: \"kubernetes.io/projected/68d23b48-e5b2-4154-87a6-1fef70653056-kube-api-access-t7892\") pod \"nova-cell1-conductor-db-sync-mzrmf\" (UID: \"68d23b48-e5b2-4154-87a6-1fef70653056\") " pod="openstack/nova-cell1-conductor-db-sync-mzrmf" Jan 21 07:20:24 crc kubenswrapper[4893]: I0121 07:20:24.053974 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68d23b48-e5b2-4154-87a6-1fef70653056-config-data\") pod \"nova-cell1-conductor-db-sync-mzrmf\" (UID: \"68d23b48-e5b2-4154-87a6-1fef70653056\") " pod="openstack/nova-cell1-conductor-db-sync-mzrmf" Jan 21 07:20:24 crc kubenswrapper[4893]: I0121 07:20:24.054355 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68d23b48-e5b2-4154-87a6-1fef70653056-scripts\") pod \"nova-cell1-conductor-db-sync-mzrmf\" (UID: \"68d23b48-e5b2-4154-87a6-1fef70653056\") " pod="openstack/nova-cell1-conductor-db-sync-mzrmf" Jan 21 07:20:24 crc kubenswrapper[4893]: I0121 07:20:24.059314 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68d23b48-e5b2-4154-87a6-1fef70653056-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-mzrmf\" (UID: \"68d23b48-e5b2-4154-87a6-1fef70653056\") " pod="openstack/nova-cell1-conductor-db-sync-mzrmf" Jan 21 07:20:24 crc kubenswrapper[4893]: I0121 07:20:24.059314 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68d23b48-e5b2-4154-87a6-1fef70653056-scripts\") pod \"nova-cell1-conductor-db-sync-mzrmf\" (UID: \"68d23b48-e5b2-4154-87a6-1fef70653056\") " pod="openstack/nova-cell1-conductor-db-sync-mzrmf" Jan 21 07:20:24 crc kubenswrapper[4893]: I0121 07:20:24.100563 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7892\" (UniqueName: \"kubernetes.io/projected/68d23b48-e5b2-4154-87a6-1fef70653056-kube-api-access-t7892\") pod \"nova-cell1-conductor-db-sync-mzrmf\" (UID: \"68d23b48-e5b2-4154-87a6-1fef70653056\") " pod="openstack/nova-cell1-conductor-db-sync-mzrmf" Jan 21 07:20:24 crc kubenswrapper[4893]: I0121 07:20:24.101261 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68d23b48-e5b2-4154-87a6-1fef70653056-config-data\") pod \"nova-cell1-conductor-db-sync-mzrmf\" (UID: \"68d23b48-e5b2-4154-87a6-1fef70653056\") " pod="openstack/nova-cell1-conductor-db-sync-mzrmf" Jan 21 07:20:24 crc kubenswrapper[4893]: I0121 07:20:24.119854 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-mzrmf" Jan 21 07:20:24 crc kubenswrapper[4893]: I0121 07:20:24.726870 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-mzrmf"] Jan 21 07:20:24 crc kubenswrapper[4893]: W0121 07:20:24.741962 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68d23b48_e5b2_4154_87a6_1fef70653056.slice/crio-52df5c038547b6e89112bbd328ee4a98f54a4ca3b87df05fddbcedbcd0246380 WatchSource:0}: Error finding container 52df5c038547b6e89112bbd328ee4a98f54a4ca3b87df05fddbcedbcd0246380: Status 404 returned error can't find the container with id 52df5c038547b6e89112bbd328ee4a98f54a4ca3b87df05fddbcedbcd0246380 Jan 21 07:20:24 crc kubenswrapper[4893]: I0121 07:20:24.946750 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7e358cba-9320-4b96-ab77-fe45c0e51a35","Type":"ContainerStarted","Data":"4da44e30ded47b4c08a12c74eac476faf38bdb5d160e44820c38a41ac4c6f42d"} Jan 21 07:20:24 crc kubenswrapper[4893]: I0121 07:20:24.956134 4893 generic.go:334] "Generic (PLEG): container finished" podID="a17bf972-087d-4a0b-8ee1-63b4606f243e" containerID="c2a1c8ca455e86c1e1069fb1dd4b0791f0f3e2e244e6a01e05ff0967f21b171d" exitCode=0 Jan 21 07:20:24 crc kubenswrapper[4893]: I0121 07:20:24.956294 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-58977" event={"ID":"a17bf972-087d-4a0b-8ee1-63b4606f243e","Type":"ContainerDied","Data":"c2a1c8ca455e86c1e1069fb1dd4b0791f0f3e2e244e6a01e05ff0967f21b171d"} Jan 21 07:20:24 crc kubenswrapper[4893]: I0121 07:20:24.960448 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-mzrmf" event={"ID":"68d23b48-e5b2-4154-87a6-1fef70653056","Type":"ContainerStarted","Data":"52df5c038547b6e89112bbd328ee4a98f54a4ca3b87df05fddbcedbcd0246380"} Jan 21 07:20:25 crc kubenswrapper[4893]: I0121 07:20:25.973855 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"02b7554d-6f6f-4e6b-92be-d8f58bb89bf5","Type":"ContainerStarted","Data":"6cbd3d5a6b7547b68b24de293cc26fae93e7e6c7710f6aaa94d09d6a1781f4cb"} Jan 21 07:20:25 crc kubenswrapper[4893]: I0121 07:20:25.974215 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 07:20:25 crc kubenswrapper[4893]: I0121 07:20:25.976780 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-58977" event={"ID":"a17bf972-087d-4a0b-8ee1-63b4606f243e","Type":"ContainerStarted","Data":"88cd396a0d3997efb0abddf87b50caa29eee8322d5e3205e57975bb7925543cd"} Jan 21 07:20:25 crc kubenswrapper[4893]: I0121 07:20:25.978038 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-647df7b8c5-58977" Jan 21 07:20:25 crc kubenswrapper[4893]: I0121 07:20:25.980725 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-mzrmf" event={"ID":"68d23b48-e5b2-4154-87a6-1fef70653056","Type":"ContainerStarted","Data":"42348ea69c139f7e2e81e332a59dfd8cad34e7569e58a9e0d74abfe81e742780"} Jan 21 07:20:26 crc kubenswrapper[4893]: I0121 07:20:26.012964 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.281854066 podStartE2EDuration="8.012938542s" podCreationTimestamp="2026-01-21 07:20:18 +0000 UTC" 
firstStartedPulling="2026-01-21 07:20:19.979224792 +0000 UTC m=+1561.209570694" lastFinishedPulling="2026-01-21 07:20:24.710309268 +0000 UTC m=+1565.940655170" observedRunningTime="2026-01-21 07:20:26.003578512 +0000 UTC m=+1567.233924424" watchObservedRunningTime="2026-01-21 07:20:26.012938542 +0000 UTC m=+1567.243284444" Jan 21 07:20:26 crc kubenswrapper[4893]: I0121 07:20:26.034175 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-mzrmf" podStartSLOduration=3.034150974 podStartE2EDuration="3.034150974s" podCreationTimestamp="2026-01-21 07:20:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:20:26.018485502 +0000 UTC m=+1567.248831404" watchObservedRunningTime="2026-01-21 07:20:26.034150974 +0000 UTC m=+1567.264496876" Jan 21 07:20:26 crc kubenswrapper[4893]: I0121 07:20:26.050493 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-647df7b8c5-58977" podStartSLOduration=4.050469955 podStartE2EDuration="4.050469955s" podCreationTimestamp="2026-01-21 07:20:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:20:26.044331008 +0000 UTC m=+1567.274676910" watchObservedRunningTime="2026-01-21 07:20:26.050469955 +0000 UTC m=+1567.280815857" Jan 21 07:20:26 crc kubenswrapper[4893]: I0121 07:20:26.326362 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 07:20:26 crc kubenswrapper[4893]: I0121 07:20:26.342391 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 07:20:30 crc kubenswrapper[4893]: I0121 07:20:30.031054 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"69509f4c-9c48-4b10-8174-52ebccf0c04e","Type":"ContainerStarted","Data":"fedf9af09fdb94c48a24c201392690d01262f86176d501108677b5609b0c26c9"} Jan 21 07:20:30 crc kubenswrapper[4893]: I0121 07:20:30.031604 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"69509f4c-9c48-4b10-8174-52ebccf0c04e","Type":"ContainerStarted","Data":"3bdb0ac8222cef2fdf88855748ce81727fd56facc717ae90660539747d2fe8e8"} Jan 21 07:20:30 crc kubenswrapper[4893]: I0121 07:20:30.033819 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7e358cba-9320-4b96-ab77-fe45c0e51a35","Type":"ContainerStarted","Data":"af97bbdf4980add377c23f0d3be3abac5b91bb6c85d3d5886b802405f9636306"} Jan 21 07:20:30 crc kubenswrapper[4893]: I0121 07:20:30.036963 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"772fcdb1-b758-4eb6-be00-7ee690b9badf","Type":"ContainerStarted","Data":"db5add6b8a946bb39f14bae2dd8748cc82413dcd261937134154817307ada735"} Jan 21 07:20:30 crc kubenswrapper[4893]: I0121 07:20:30.037022 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"772fcdb1-b758-4eb6-be00-7ee690b9badf","Type":"ContainerStarted","Data":"a8b7346d308756596c9c197c7875d82c3e47fb6012a9e20a771e59fb227871fc"} Jan 21 07:20:30 crc kubenswrapper[4893]: I0121 07:20:30.037227 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="772fcdb1-b758-4eb6-be00-7ee690b9badf" containerName="nova-metadata-log" 
containerID="cri-o://a8b7346d308756596c9c197c7875d82c3e47fb6012a9e20a771e59fb227871fc" gracePeriod=30 Jan 21 07:20:30 crc kubenswrapper[4893]: I0121 07:20:30.037331 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="772fcdb1-b758-4eb6-be00-7ee690b9badf" containerName="nova-metadata-metadata" containerID="cri-o://db5add6b8a946bb39f14bae2dd8748cc82413dcd261937134154817307ada735" gracePeriod=30 Jan 21 07:20:30 crc kubenswrapper[4893]: I0121 07:20:30.051020 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c3e5429e-adae-4013-ad4e-a6d64b6fb32a","Type":"ContainerStarted","Data":"2cc9725ab12cd661bd2f547af67d3fedddf8c443202b9a6f9a62d9b0fcde6149"} Jan 21 07:20:30 crc kubenswrapper[4893]: I0121 07:20:30.051188 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="c3e5429e-adae-4013-ad4e-a6d64b6fb32a" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://2cc9725ab12cd661bd2f547af67d3fedddf8c443202b9a6f9a62d9b0fcde6149" gracePeriod=30 Jan 21 07:20:30 crc kubenswrapper[4893]: I0121 07:20:30.128030 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.007601741 podStartE2EDuration="8.128009513s" podCreationTimestamp="2026-01-21 07:20:22 +0000 UTC" firstStartedPulling="2026-01-21 07:20:23.772955786 +0000 UTC m=+1565.003301688" lastFinishedPulling="2026-01-21 07:20:28.893363558 +0000 UTC m=+1570.123709460" observedRunningTime="2026-01-21 07:20:30.125862901 +0000 UTC m=+1571.356208803" watchObservedRunningTime="2026-01-21 07:20:30.128009513 +0000 UTC m=+1571.358355425" Jan 21 07:20:30 crc kubenswrapper[4893]: I0121 07:20:30.137029 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.227218467 podStartE2EDuration="9.137006213s" podCreationTimestamp="2026-01-21 07:20:21 +0000 UTC" firstStartedPulling="2026-01-21 07:20:22.993192749 +0000 UTC m=+1564.223538651" lastFinishedPulling="2026-01-21 07:20:28.902980495 +0000 UTC m=+1570.133326397" observedRunningTime="2026-01-21 07:20:30.087403122 +0000 UTC m=+1571.317749024" watchObservedRunningTime="2026-01-21 07:20:30.137006213 +0000 UTC m=+1571.367352115" Jan 21 07:20:30 crc kubenswrapper[4893]: I0121 07:20:30.180300 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.309406944 podStartE2EDuration="8.18027767s" podCreationTimestamp="2026-01-21 07:20:22 +0000 UTC" firstStartedPulling="2026-01-21 07:20:24.026897809 +0000 UTC m=+1565.257243721" lastFinishedPulling="2026-01-21 07:20:28.897768535 +0000 UTC m=+1570.128114447" observedRunningTime="2026-01-21 07:20:30.176125481 +0000 UTC m=+1571.406471383" watchObservedRunningTime="2026-01-21 07:20:30.18027767 +0000 UTC m=+1571.410623572" Jan 21 07:20:30 crc kubenswrapper[4893]: I0121 07:20:30.210084 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.923699261 podStartE2EDuration="8.210051699s" podCreationTimestamp="2026-01-21 07:20:22 +0000 UTC" firstStartedPulling="2026-01-21 07:20:23.598579317 +0000 UTC m=+1564.828925219" lastFinishedPulling="2026-01-21 07:20:28.884931755 +0000 UTC m=+1570.115277657" observedRunningTime="2026-01-21 07:20:30.20001426 +0000 UTC m=+1571.430360162" watchObservedRunningTime="2026-01-21 
07:20:30.210051699 +0000 UTC m=+1571.440397621" Jan 21 07:20:30 crc kubenswrapper[4893]: I0121 07:20:30.666208 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 07:20:30 crc kubenswrapper[4893]: I0121 07:20:30.744246 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjnd8\" (UniqueName: \"kubernetes.io/projected/772fcdb1-b758-4eb6-be00-7ee690b9badf-kube-api-access-qjnd8\") pod \"772fcdb1-b758-4eb6-be00-7ee690b9badf\" (UID: \"772fcdb1-b758-4eb6-be00-7ee690b9badf\") " Jan 21 07:20:30 crc kubenswrapper[4893]: I0121 07:20:30.744560 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/772fcdb1-b758-4eb6-be00-7ee690b9badf-config-data\") pod \"772fcdb1-b758-4eb6-be00-7ee690b9badf\" (UID: \"772fcdb1-b758-4eb6-be00-7ee690b9badf\") " Jan 21 07:20:30 crc kubenswrapper[4893]: I0121 07:20:30.744616 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/772fcdb1-b758-4eb6-be00-7ee690b9badf-logs\") pod \"772fcdb1-b758-4eb6-be00-7ee690b9badf\" (UID: \"772fcdb1-b758-4eb6-be00-7ee690b9badf\") " Jan 21 07:20:30 crc kubenswrapper[4893]: I0121 07:20:30.744754 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/772fcdb1-b758-4eb6-be00-7ee690b9badf-combined-ca-bundle\") pod \"772fcdb1-b758-4eb6-be00-7ee690b9badf\" (UID: \"772fcdb1-b758-4eb6-be00-7ee690b9badf\") " Jan 21 07:20:30 crc kubenswrapper[4893]: I0121 07:20:30.745030 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/772fcdb1-b758-4eb6-be00-7ee690b9badf-logs" (OuterVolumeSpecName: "logs") pod "772fcdb1-b758-4eb6-be00-7ee690b9badf" (UID: "772fcdb1-b758-4eb6-be00-7ee690b9badf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:20:30 crc kubenswrapper[4893]: I0121 07:20:30.745635 4893 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/772fcdb1-b758-4eb6-be00-7ee690b9badf-logs\") on node \"crc\" DevicePath \"\"" Jan 21 07:20:30 crc kubenswrapper[4893]: I0121 07:20:30.761951 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/772fcdb1-b758-4eb6-be00-7ee690b9badf-kube-api-access-qjnd8" (OuterVolumeSpecName: "kube-api-access-qjnd8") pod "772fcdb1-b758-4eb6-be00-7ee690b9badf" (UID: "772fcdb1-b758-4eb6-be00-7ee690b9badf"). InnerVolumeSpecName "kube-api-access-qjnd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:20:30 crc kubenswrapper[4893]: E0121 07:20:30.776045 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/772fcdb1-b758-4eb6-be00-7ee690b9badf-config-data podName:772fcdb1-b758-4eb6-be00-7ee690b9badf nodeName:}" failed. No retries permitted until 2026-01-21 07:20:31.276013951 +0000 UTC m=+1572.506359853 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/772fcdb1-b758-4eb6-be00-7ee690b9badf-config-data") pod "772fcdb1-b758-4eb6-be00-7ee690b9badf" (UID: "772fcdb1-b758-4eb6-be00-7ee690b9badf") : error deleting /var/lib/kubelet/pods/772fcdb1-b758-4eb6-be00-7ee690b9badf/volume-subpaths: remove /var/lib/kubelet/pods/772fcdb1-b758-4eb6-be00-7ee690b9badf/volume-subpaths: no such file or directory Jan 21 07:20:30 crc kubenswrapper[4893]: I0121 07:20:30.779137 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/772fcdb1-b758-4eb6-be00-7ee690b9badf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "772fcdb1-b758-4eb6-be00-7ee690b9badf" (UID: "772fcdb1-b758-4eb6-be00-7ee690b9badf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:20:30 crc kubenswrapper[4893]: I0121 07:20:30.847660 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/772fcdb1-b758-4eb6-be00-7ee690b9badf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:20:30 crc kubenswrapper[4893]: I0121 07:20:30.847752 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjnd8\" (UniqueName: \"kubernetes.io/projected/772fcdb1-b758-4eb6-be00-7ee690b9badf-kube-api-access-qjnd8\") on node \"crc\" DevicePath \"\"" Jan 21 07:20:31 crc kubenswrapper[4893]: I0121 07:20:31.105864 4893 generic.go:334] "Generic (PLEG): container finished" podID="772fcdb1-b758-4eb6-be00-7ee690b9badf" containerID="db5add6b8a946bb39f14bae2dd8748cc82413dcd261937134154817307ada735" exitCode=0 Jan 21 07:20:31 crc kubenswrapper[4893]: I0121 07:20:31.105892 4893 generic.go:334] "Generic (PLEG): container finished" podID="772fcdb1-b758-4eb6-be00-7ee690b9badf" containerID="a8b7346d308756596c9c197c7875d82c3e47fb6012a9e20a771e59fb227871fc" exitCode=143 Jan 21 07:20:31 crc kubenswrapper[4893]: I0121 07:20:31.105997 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 07:20:31 crc kubenswrapper[4893]: I0121 07:20:31.106795 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"772fcdb1-b758-4eb6-be00-7ee690b9badf","Type":"ContainerDied","Data":"db5add6b8a946bb39f14bae2dd8748cc82413dcd261937134154817307ada735"} Jan 21 07:20:31 crc kubenswrapper[4893]: I0121 07:20:31.106828 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"772fcdb1-b758-4eb6-be00-7ee690b9badf","Type":"ContainerDied","Data":"a8b7346d308756596c9c197c7875d82c3e47fb6012a9e20a771e59fb227871fc"} Jan 21 07:20:31 crc kubenswrapper[4893]: I0121 07:20:31.106840 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"772fcdb1-b758-4eb6-be00-7ee690b9badf","Type":"ContainerDied","Data":"ec3be9fce80c4567c3be878cd09351109744c6ee3ee8c08ae9579f58cb0943d6"} Jan 21 07:20:31 crc kubenswrapper[4893]: I0121 07:20:31.106855 4893 scope.go:117] "RemoveContainer" containerID="db5add6b8a946bb39f14bae2dd8748cc82413dcd261937134154817307ada735" Jan 21 07:20:31 crc kubenswrapper[4893]: I0121 07:20:31.171150 4893 scope.go:117] "RemoveContainer" containerID="a8b7346d308756596c9c197c7875d82c3e47fb6012a9e20a771e59fb227871fc" Jan 21 07:20:31 crc kubenswrapper[4893]: I0121 07:20:31.203280 4893 scope.go:117] "RemoveContainer" containerID="db5add6b8a946bb39f14bae2dd8748cc82413dcd261937134154817307ada735" Jan 21 07:20:31 crc kubenswrapper[4893]: E0121 07:20:31.203885 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db5add6b8a946bb39f14bae2dd8748cc82413dcd261937134154817307ada735\": container with ID starting with db5add6b8a946bb39f14bae2dd8748cc82413dcd261937134154817307ada735 not found: ID does not exist" containerID="db5add6b8a946bb39f14bae2dd8748cc82413dcd261937134154817307ada735" Jan 21 07:20:31 crc kubenswrapper[4893]: I0121 07:20:31.204123 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db5add6b8a946bb39f14bae2dd8748cc82413dcd261937134154817307ada735"} err="failed to get container status \"db5add6b8a946bb39f14bae2dd8748cc82413dcd261937134154817307ada735\": rpc error: code = NotFound desc = could not find container \"db5add6b8a946bb39f14bae2dd8748cc82413dcd261937134154817307ada735\": container with ID starting with db5add6b8a946bb39f14bae2dd8748cc82413dcd261937134154817307ada735 not found: ID does not exist" Jan 21 07:20:31 crc kubenswrapper[4893]: I0121 07:20:31.204158 4893 scope.go:117] "RemoveContainer" containerID="a8b7346d308756596c9c197c7875d82c3e47fb6012a9e20a771e59fb227871fc" Jan 21 07:20:31 crc kubenswrapper[4893]: E0121 07:20:31.204728 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8b7346d308756596c9c197c7875d82c3e47fb6012a9e20a771e59fb227871fc\": container with ID starting with a8b7346d308756596c9c197c7875d82c3e47fb6012a9e20a771e59fb227871fc not found: ID does not exist" containerID="a8b7346d308756596c9c197c7875d82c3e47fb6012a9e20a771e59fb227871fc" Jan 21 07:20:31 crc kubenswrapper[4893]: I0121 07:20:31.204763 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8b7346d308756596c9c197c7875d82c3e47fb6012a9e20a771e59fb227871fc"} err="failed to get container status \"a8b7346d308756596c9c197c7875d82c3e47fb6012a9e20a771e59fb227871fc\": rpc error: code = 
NotFound desc = could not find container \"a8b7346d308756596c9c197c7875d82c3e47fb6012a9e20a771e59fb227871fc\": container with ID starting with a8b7346d308756596c9c197c7875d82c3e47fb6012a9e20a771e59fb227871fc not found: ID does not exist" Jan 21 07:20:31 crc kubenswrapper[4893]: I0121 07:20:31.204782 4893 scope.go:117] "RemoveContainer" containerID="db5add6b8a946bb39f14bae2dd8748cc82413dcd261937134154817307ada735" Jan 21 07:20:31 crc kubenswrapper[4893]: I0121 07:20:31.205655 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db5add6b8a946bb39f14bae2dd8748cc82413dcd261937134154817307ada735"} err="failed to get container status \"db5add6b8a946bb39f14bae2dd8748cc82413dcd261937134154817307ada735\": rpc error: code = NotFound desc = could not find container \"db5add6b8a946bb39f14bae2dd8748cc82413dcd261937134154817307ada735\": container with ID starting with db5add6b8a946bb39f14bae2dd8748cc82413dcd261937134154817307ada735 not found: ID does not exist" Jan 21 07:20:31 crc kubenswrapper[4893]: I0121 07:20:31.205701 4893 scope.go:117] "RemoveContainer" containerID="a8b7346d308756596c9c197c7875d82c3e47fb6012a9e20a771e59fb227871fc" Jan 21 07:20:31 crc kubenswrapper[4893]: I0121 07:20:31.206176 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8b7346d308756596c9c197c7875d82c3e47fb6012a9e20a771e59fb227871fc"} err="failed to get container status \"a8b7346d308756596c9c197c7875d82c3e47fb6012a9e20a771e59fb227871fc\": rpc error: code = NotFound desc = could not find container \"a8b7346d308756596c9c197c7875d82c3e47fb6012a9e20a771e59fb227871fc\": container with ID starting with a8b7346d308756596c9c197c7875d82c3e47fb6012a9e20a771e59fb227871fc not found: ID does not exist" Jan 21 07:20:31 crc kubenswrapper[4893]: I0121 07:20:31.303404 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/772fcdb1-b758-4eb6-be00-7ee690b9badf-config-data\") pod \"772fcdb1-b758-4eb6-be00-7ee690b9badf\" (UID: \"772fcdb1-b758-4eb6-be00-7ee690b9badf\") " Jan 21 07:20:31 crc kubenswrapper[4893]: I0121 07:20:31.318866 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/772fcdb1-b758-4eb6-be00-7ee690b9badf-config-data" (OuterVolumeSpecName: "config-data") pod "772fcdb1-b758-4eb6-be00-7ee690b9badf" (UID: "772fcdb1-b758-4eb6-be00-7ee690b9badf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:20:31 crc kubenswrapper[4893]: I0121 07:20:31.407083 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/772fcdb1-b758-4eb6-be00-7ee690b9badf-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 07:20:31 crc kubenswrapper[4893]: I0121 07:20:31.445940 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 07:20:31 crc kubenswrapper[4893]: I0121 07:20:31.454760 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 07:20:31 crc kubenswrapper[4893]: I0121 07:20:31.467318 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 07:20:31 crc kubenswrapper[4893]: E0121 07:20:31.468133 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="772fcdb1-b758-4eb6-be00-7ee690b9badf" containerName="nova-metadata-log" Jan 21 07:20:31 crc kubenswrapper[4893]: I0121 07:20:31.468151 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="772fcdb1-b758-4eb6-be00-7ee690b9badf" containerName="nova-metadata-log" Jan 21 07:20:31 crc kubenswrapper[4893]: E0121 07:20:31.468173 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="772fcdb1-b758-4eb6-be00-7ee690b9badf" containerName="nova-metadata-metadata" Jan 21 07:20:31 crc kubenswrapper[4893]: I0121 07:20:31.468180 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="772fcdb1-b758-4eb6-be00-7ee690b9badf" containerName="nova-metadata-metadata" Jan 21 07:20:31 crc kubenswrapper[4893]: I0121 07:20:31.468359 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="772fcdb1-b758-4eb6-be00-7ee690b9badf" containerName="nova-metadata-metadata" Jan 21 07:20:31 crc kubenswrapper[4893]: I0121 07:20:31.468382 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="772fcdb1-b758-4eb6-be00-7ee690b9badf" containerName="nova-metadata-log" Jan 21 07:20:31 crc kubenswrapper[4893]: I0121 07:20:31.469530 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 07:20:31 crc kubenswrapper[4893]: I0121 07:20:31.474081 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 21 07:20:31 crc kubenswrapper[4893]: I0121 07:20:31.474184 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 07:20:31 crc kubenswrapper[4893]: I0121 07:20:31.483449 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 07:20:31 crc kubenswrapper[4893]: I0121 07:20:31.511978 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd59fb38-8064-4c73-907e-649b4e17b5c5-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fd59fb38-8064-4c73-907e-649b4e17b5c5\") " pod="openstack/nova-metadata-0" Jan 21 07:20:31 crc kubenswrapper[4893]: I0121 07:20:31.512037 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n89p\" (UniqueName: \"kubernetes.io/projected/fd59fb38-8064-4c73-907e-649b4e17b5c5-kube-api-access-2n89p\") pod \"nova-metadata-0\" (UID: \"fd59fb38-8064-4c73-907e-649b4e17b5c5\") " pod="openstack/nova-metadata-0" Jan 21 07:20:31 crc kubenswrapper[4893]: I0121 07:20:31.512089 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd59fb38-8064-4c73-907e-649b4e17b5c5-logs\") pod \"nova-metadata-0\" (UID: \"fd59fb38-8064-4c73-907e-649b4e17b5c5\") " pod="openstack/nova-metadata-0" Jan 21 07:20:31 crc kubenswrapper[4893]: I0121 07:20:31.512105 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd59fb38-8064-4c73-907e-649b4e17b5c5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fd59fb38-8064-4c73-907e-649b4e17b5c5\") " pod="openstack/nova-metadata-0" Jan 21 07:20:31 crc kubenswrapper[4893]: I0121 07:20:31.512126 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd59fb38-8064-4c73-907e-649b4e17b5c5-config-data\") pod \"nova-metadata-0\" (UID: \"fd59fb38-8064-4c73-907e-649b4e17b5c5\") " pod="openstack/nova-metadata-0" Jan 21 07:20:31 crc kubenswrapper[4893]: I0121 07:20:31.600778 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="772fcdb1-b758-4eb6-be00-7ee690b9badf" path="/var/lib/kubelet/pods/772fcdb1-b758-4eb6-be00-7ee690b9badf/volumes" Jan 21 07:20:31 crc kubenswrapper[4893]: I0121 07:20:31.612778 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd59fb38-8064-4c73-907e-649b4e17b5c5-logs\") pod \"nova-metadata-0\" (UID: \"fd59fb38-8064-4c73-907e-649b4e17b5c5\") " pod="openstack/nova-metadata-0" Jan 21 07:20:31 crc kubenswrapper[4893]: I0121 07:20:31.612832 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd59fb38-8064-4c73-907e-649b4e17b5c5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fd59fb38-8064-4c73-907e-649b4e17b5c5\") " pod="openstack/nova-metadata-0" Jan 21 07:20:31 crc kubenswrapper[4893]: I0121 07:20:31.612879 4893 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd59fb38-8064-4c73-907e-649b4e17b5c5-config-data\") pod \"nova-metadata-0\" (UID: \"fd59fb38-8064-4c73-907e-649b4e17b5c5\") " pod="openstack/nova-metadata-0" Jan 21 07:20:31 crc kubenswrapper[4893]: I0121 07:20:31.613097 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd59fb38-8064-4c73-907e-649b4e17b5c5-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fd59fb38-8064-4c73-907e-649b4e17b5c5\") " pod="openstack/nova-metadata-0" Jan 21 07:20:31 crc kubenswrapper[4893]: I0121 07:20:31.613125 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2n89p\" (UniqueName: \"kubernetes.io/projected/fd59fb38-8064-4c73-907e-649b4e17b5c5-kube-api-access-2n89p\") pod \"nova-metadata-0\" (UID: \"fd59fb38-8064-4c73-907e-649b4e17b5c5\") " pod="openstack/nova-metadata-0" Jan 21 07:20:31 crc kubenswrapper[4893]: I0121 07:20:31.613488 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd59fb38-8064-4c73-907e-649b4e17b5c5-logs\") pod \"nova-metadata-0\" (UID: \"fd59fb38-8064-4c73-907e-649b4e17b5c5\") " pod="openstack/nova-metadata-0" Jan 21 07:20:31 crc kubenswrapper[4893]: I0121 07:20:31.621007 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd59fb38-8064-4c73-907e-649b4e17b5c5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fd59fb38-8064-4c73-907e-649b4e17b5c5\") " pod="openstack/nova-metadata-0" Jan 21 07:20:31 crc kubenswrapper[4893]: I0121 07:20:31.621454 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd59fb38-8064-4c73-907e-649b4e17b5c5-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fd59fb38-8064-4c73-907e-649b4e17b5c5\") " pod="openstack/nova-metadata-0" Jan 21 07:20:31 crc kubenswrapper[4893]: I0121 07:20:31.641616 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2n89p\" (UniqueName: \"kubernetes.io/projected/fd59fb38-8064-4c73-907e-649b4e17b5c5-kube-api-access-2n89p\") pod \"nova-metadata-0\" (UID: \"fd59fb38-8064-4c73-907e-649b4e17b5c5\") " pod="openstack/nova-metadata-0" Jan 21 07:20:31 crc kubenswrapper[4893]: I0121 07:20:31.641984 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd59fb38-8064-4c73-907e-649b4e17b5c5-config-data\") pod \"nova-metadata-0\" (UID: \"fd59fb38-8064-4c73-907e-649b4e17b5c5\") " pod="openstack/nova-metadata-0" Jan 21 07:20:31 crc kubenswrapper[4893]: I0121 07:20:31.787905 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 07:20:32 crc kubenswrapper[4893]: I0121 07:20:32.099113 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 07:20:32 crc kubenswrapper[4893]: I0121 07:20:32.099463 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 07:20:32 crc kubenswrapper[4893]: I0121 07:20:32.121155 4893 generic.go:334] "Generic (PLEG): container finished" podID="dab58e44-b25e-4390-b604-ea1e17365c8e" containerID="a5f5e6d134a5ef20549d287142a86ef1c8bc06298be5d15521ca276897db55d7" exitCode=0 Jan 21 07:20:32 crc kubenswrapper[4893]: I0121 07:20:32.121213 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-zsq5x" event={"ID":"dab58e44-b25e-4390-b604-ea1e17365c8e","Type":"ContainerDied","Data":"a5f5e6d134a5ef20549d287142a86ef1c8bc06298be5d15521ca276897db55d7"} Jan 21 07:20:32 crc kubenswrapper[4893]: I0121 07:20:32.272755 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 07:20:32 crc kubenswrapper[4893]: I0121 07:20:32.997206 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 21 07:20:33 crc kubenswrapper[4893]: I0121 07:20:33.008990 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-647df7b8c5-58977" Jan 21 07:20:33 crc kubenswrapper[4893]: I0121 07:20:33.075430 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-6d8wf"] Jan 21 07:20:33 crc kubenswrapper[4893]: I0121 07:20:33.077742 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75dbb546bf-6d8wf" podUID="123f1844-92a5-418f-a3df-b9f44943a91d" containerName="dnsmasq-dns" containerID="cri-o://248da601aa8cfe444b4d2752f6c50dd6c452812be254edffbc0ef15e8da7c9ca" gracePeriod=10 Jan 21 07:20:33 crc kubenswrapper[4893]: I0121 07:20:33.110695 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 21 07:20:33 crc kubenswrapper[4893]: I0121 07:20:33.110743 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 21 07:20:33 crc kubenswrapper[4893]: I0121 07:20:33.149253 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fd59fb38-8064-4c73-907e-649b4e17b5c5","Type":"ContainerStarted","Data":"158c7f61cbdb790317a4f805a15be2827e0b542162d28c25962012fde2ab0396"} Jan 21 07:20:33 crc kubenswrapper[4893]: I0121 07:20:33.149588 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fd59fb38-8064-4c73-907e-649b4e17b5c5","Type":"ContainerStarted","Data":"5cce0d581193a3c830cccb9e54a03a032471fb3400b6b3da0912668d14d7a710"} Jan 21 07:20:33 crc kubenswrapper[4893]: I0121 07:20:33.149601 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fd59fb38-8064-4c73-907e-649b4e17b5c5","Type":"ContainerStarted","Data":"d44ec83a2cd1f74d27a22baa390ef7d495e014e3b0eefa1829938f66bff13562"} Jan 21 07:20:33 crc kubenswrapper[4893]: I0121 07:20:33.161636 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 21 07:20:33 crc kubenswrapper[4893]: I0121 07:20:33.185799 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" 
podUID="69509f4c-9c48-4b10-8174-52ebccf0c04e" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.186:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 07:20:33 crc kubenswrapper[4893]: I0121 07:20:33.186092 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="69509f4c-9c48-4b10-8174-52ebccf0c04e" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.186:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 07:20:33 crc kubenswrapper[4893]: I0121 07:20:33.187711 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.187653747 podStartE2EDuration="2.187653747s" podCreationTimestamp="2026-01-21 07:20:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:20:33.180096259 +0000 UTC m=+1574.410442161" watchObservedRunningTime="2026-01-21 07:20:33.187653747 +0000 UTC m=+1574.417999649" Jan 21 07:20:33 crc kubenswrapper[4893]: I0121 07:20:33.235461 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 21 07:20:33 crc kubenswrapper[4893]: I0121 07:20:33.611465 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-zsq5x" Jan 21 07:20:33 crc kubenswrapper[4893]: I0121 07:20:33.748462 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75dbb546bf-6d8wf" Jan 21 07:20:33 crc kubenswrapper[4893]: I0121 07:20:33.770877 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5vsm\" (UniqueName: \"kubernetes.io/projected/dab58e44-b25e-4390-b604-ea1e17365c8e-kube-api-access-c5vsm\") pod \"dab58e44-b25e-4390-b604-ea1e17365c8e\" (UID: \"dab58e44-b25e-4390-b604-ea1e17365c8e\") " Jan 21 07:20:33 crc kubenswrapper[4893]: I0121 07:20:33.770960 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dab58e44-b25e-4390-b604-ea1e17365c8e-scripts\") pod \"dab58e44-b25e-4390-b604-ea1e17365c8e\" (UID: \"dab58e44-b25e-4390-b604-ea1e17365c8e\") " Jan 21 07:20:33 crc kubenswrapper[4893]: I0121 07:20:33.771013 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dab58e44-b25e-4390-b604-ea1e17365c8e-config-data\") pod \"dab58e44-b25e-4390-b604-ea1e17365c8e\" (UID: \"dab58e44-b25e-4390-b604-ea1e17365c8e\") " Jan 21 07:20:33 crc kubenswrapper[4893]: I0121 07:20:33.771139 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dab58e44-b25e-4390-b604-ea1e17365c8e-combined-ca-bundle\") pod \"dab58e44-b25e-4390-b604-ea1e17365c8e\" (UID: \"dab58e44-b25e-4390-b604-ea1e17365c8e\") " Jan 21 07:20:33 crc kubenswrapper[4893]: I0121 07:20:33.780217 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dab58e44-b25e-4390-b604-ea1e17365c8e-kube-api-access-c5vsm" (OuterVolumeSpecName: "kube-api-access-c5vsm") pod "dab58e44-b25e-4390-b604-ea1e17365c8e" (UID: "dab58e44-b25e-4390-b604-ea1e17365c8e"). InnerVolumeSpecName "kube-api-access-c5vsm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:20:33 crc kubenswrapper[4893]: I0121 07:20:33.783451 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dab58e44-b25e-4390-b604-ea1e17365c8e-scripts" (OuterVolumeSpecName: "scripts") pod "dab58e44-b25e-4390-b604-ea1e17365c8e" (UID: "dab58e44-b25e-4390-b604-ea1e17365c8e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:20:33 crc kubenswrapper[4893]: I0121 07:20:33.806330 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dab58e44-b25e-4390-b604-ea1e17365c8e-config-data" (OuterVolumeSpecName: "config-data") pod "dab58e44-b25e-4390-b604-ea1e17365c8e" (UID: "dab58e44-b25e-4390-b604-ea1e17365c8e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:20:33 crc kubenswrapper[4893]: I0121 07:20:33.832861 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dab58e44-b25e-4390-b604-ea1e17365c8e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dab58e44-b25e-4390-b604-ea1e17365c8e" (UID: "dab58e44-b25e-4390-b604-ea1e17365c8e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:20:33 crc kubenswrapper[4893]: I0121 07:20:33.877193 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/123f1844-92a5-418f-a3df-b9f44943a91d-ovsdbserver-nb\") pod \"123f1844-92a5-418f-a3df-b9f44943a91d\" (UID: \"123f1844-92a5-418f-a3df-b9f44943a91d\") " Jan 21 07:20:33 crc kubenswrapper[4893]: I0121 07:20:33.877361 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9wq2\" (UniqueName: \"kubernetes.io/projected/123f1844-92a5-418f-a3df-b9f44943a91d-kube-api-access-m9wq2\") pod \"123f1844-92a5-418f-a3df-b9f44943a91d\" (UID: \"123f1844-92a5-418f-a3df-b9f44943a91d\") " Jan 21 07:20:33 crc kubenswrapper[4893]: I0121 07:20:33.877463 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/123f1844-92a5-418f-a3df-b9f44943a91d-dns-swift-storage-0\") pod \"123f1844-92a5-418f-a3df-b9f44943a91d\" (UID: \"123f1844-92a5-418f-a3df-b9f44943a91d\") " Jan 21 07:20:33 crc kubenswrapper[4893]: I0121 07:20:33.877534 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/123f1844-92a5-418f-a3df-b9f44943a91d-dns-svc\") pod \"123f1844-92a5-418f-a3df-b9f44943a91d\" (UID: \"123f1844-92a5-418f-a3df-b9f44943a91d\") " Jan 21 07:20:33 crc kubenswrapper[4893]: I0121 07:20:33.877586 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/123f1844-92a5-418f-a3df-b9f44943a91d-ovsdbserver-sb\") pod \"123f1844-92a5-418f-a3df-b9f44943a91d\" (UID: \"123f1844-92a5-418f-a3df-b9f44943a91d\") " Jan 21 07:20:33 crc kubenswrapper[4893]: I0121 07:20:33.877618 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/123f1844-92a5-418f-a3df-b9f44943a91d-config\") pod \"123f1844-92a5-418f-a3df-b9f44943a91d\" (UID: \"123f1844-92a5-418f-a3df-b9f44943a91d\") " Jan 21 07:20:33 crc kubenswrapper[4893]: I0121 07:20:33.878169 4893 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5vsm\" (UniqueName: \"kubernetes.io/projected/dab58e44-b25e-4390-b604-ea1e17365c8e-kube-api-access-c5vsm\") on node \"crc\" DevicePath \"\"" Jan 21 07:20:33 crc kubenswrapper[4893]: I0121 07:20:33.878193 4893 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dab58e44-b25e-4390-b604-ea1e17365c8e-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:20:33 crc kubenswrapper[4893]: I0121 07:20:33.878203 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dab58e44-b25e-4390-b604-ea1e17365c8e-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 07:20:33 crc kubenswrapper[4893]: I0121 07:20:33.878211 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dab58e44-b25e-4390-b604-ea1e17365c8e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:20:33 crc kubenswrapper[4893]: I0121 07:20:33.881070 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/123f1844-92a5-418f-a3df-b9f44943a91d-kube-api-access-m9wq2" (OuterVolumeSpecName: "kube-api-access-m9wq2") pod "123f1844-92a5-418f-a3df-b9f44943a91d" (UID: "123f1844-92a5-418f-a3df-b9f44943a91d"). InnerVolumeSpecName "kube-api-access-m9wq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:20:33 crc kubenswrapper[4893]: I0121 07:20:33.933582 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/123f1844-92a5-418f-a3df-b9f44943a91d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "123f1844-92a5-418f-a3df-b9f44943a91d" (UID: "123f1844-92a5-418f-a3df-b9f44943a91d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:20:33 crc kubenswrapper[4893]: I0121 07:20:33.940158 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/123f1844-92a5-418f-a3df-b9f44943a91d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "123f1844-92a5-418f-a3df-b9f44943a91d" (UID: "123f1844-92a5-418f-a3df-b9f44943a91d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:20:33 crc kubenswrapper[4893]: I0121 07:20:33.946053 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/123f1844-92a5-418f-a3df-b9f44943a91d-config" (OuterVolumeSpecName: "config") pod "123f1844-92a5-418f-a3df-b9f44943a91d" (UID: "123f1844-92a5-418f-a3df-b9f44943a91d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:20:33 crc kubenswrapper[4893]: I0121 07:20:33.946415 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/123f1844-92a5-418f-a3df-b9f44943a91d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "123f1844-92a5-418f-a3df-b9f44943a91d" (UID: "123f1844-92a5-418f-a3df-b9f44943a91d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:20:33 crc kubenswrapper[4893]: I0121 07:20:33.950211 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/123f1844-92a5-418f-a3df-b9f44943a91d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "123f1844-92a5-418f-a3df-b9f44943a91d" (UID: "123f1844-92a5-418f-a3df-b9f44943a91d"). 
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:20:33 crc kubenswrapper[4893]: I0121 07:20:33.980298 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9wq2\" (UniqueName: \"kubernetes.io/projected/123f1844-92a5-418f-a3df-b9f44943a91d-kube-api-access-m9wq2\") on node \"crc\" DevicePath \"\"" Jan 21 07:20:33 crc kubenswrapper[4893]: I0121 07:20:33.980340 4893 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/123f1844-92a5-418f-a3df-b9f44943a91d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 07:20:33 crc kubenswrapper[4893]: I0121 07:20:33.980353 4893 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/123f1844-92a5-418f-a3df-b9f44943a91d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 07:20:33 crc kubenswrapper[4893]: I0121 07:20:33.980370 4893 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/123f1844-92a5-418f-a3df-b9f44943a91d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 07:20:33 crc kubenswrapper[4893]: I0121 07:20:33.980397 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/123f1844-92a5-418f-a3df-b9f44943a91d-config\") on node \"crc\" DevicePath \"\"" Jan 21 07:20:33 crc kubenswrapper[4893]: I0121 07:20:33.980407 4893 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/123f1844-92a5-418f-a3df-b9f44943a91d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 07:20:34 crc kubenswrapper[4893]: I0121 07:20:34.160156 4893 generic.go:334] "Generic (PLEG): container finished" podID="123f1844-92a5-418f-a3df-b9f44943a91d" containerID="248da601aa8cfe444b4d2752f6c50dd6c452812be254edffbc0ef15e8da7c9ca" exitCode=0 Jan 21 07:20:34 crc kubenswrapper[4893]: I0121 07:20:34.160220 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75dbb546bf-6d8wf" Jan 21 07:20:34 crc kubenswrapper[4893]: I0121 07:20:34.160248 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-6d8wf" event={"ID":"123f1844-92a5-418f-a3df-b9f44943a91d","Type":"ContainerDied","Data":"248da601aa8cfe444b4d2752f6c50dd6c452812be254edffbc0ef15e8da7c9ca"} Jan 21 07:20:34 crc kubenswrapper[4893]: I0121 07:20:34.160724 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-6d8wf" event={"ID":"123f1844-92a5-418f-a3df-b9f44943a91d","Type":"ContainerDied","Data":"de66e951dd122f8d9fccafe5cd702a040146481d8af1fda0f512de4ef942e9b4"} Jan 21 07:20:34 crc kubenswrapper[4893]: I0121 07:20:34.160756 4893 scope.go:117] "RemoveContainer" containerID="248da601aa8cfe444b4d2752f6c50dd6c452812be254edffbc0ef15e8da7c9ca" Jan 21 07:20:34 crc kubenswrapper[4893]: I0121 07:20:34.162466 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-zsq5x" event={"ID":"dab58e44-b25e-4390-b604-ea1e17365c8e","Type":"ContainerDied","Data":"61667bbb362d85d042921f2fba16f8d3eb51e936d4bdca20e4e5f49167f97307"} Jan 21 07:20:34 crc kubenswrapper[4893]: I0121 07:20:34.162740 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61667bbb362d85d042921f2fba16f8d3eb51e936d4bdca20e4e5f49167f97307" Jan 21 07:20:34 crc kubenswrapper[4893]: I0121 07:20:34.162497 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-zsq5x" Jan 21 07:20:34 crc kubenswrapper[4893]: I0121 07:20:34.197344 4893 scope.go:117] "RemoveContainer" containerID="7a828cf64d9ab9610f2ae717d35628bfdb7449ce65a200216400aab16bb473ab" Jan 21 07:20:34 crc kubenswrapper[4893]: I0121 07:20:34.208711 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-6d8wf"] Jan 21 07:20:34 crc kubenswrapper[4893]: I0121 07:20:34.218922 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-6d8wf"] Jan 21 07:20:34 crc kubenswrapper[4893]: I0121 07:20:34.220962 4893 scope.go:117] "RemoveContainer" containerID="248da601aa8cfe444b4d2752f6c50dd6c452812be254edffbc0ef15e8da7c9ca" Jan 21 07:20:34 crc kubenswrapper[4893]: E0121 07:20:34.221421 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"248da601aa8cfe444b4d2752f6c50dd6c452812be254edffbc0ef15e8da7c9ca\": container with ID starting with 248da601aa8cfe444b4d2752f6c50dd6c452812be254edffbc0ef15e8da7c9ca not found: ID does not exist" containerID="248da601aa8cfe444b4d2752f6c50dd6c452812be254edffbc0ef15e8da7c9ca" Jan 21 07:20:34 crc kubenswrapper[4893]: I0121 07:20:34.221451 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"248da601aa8cfe444b4d2752f6c50dd6c452812be254edffbc0ef15e8da7c9ca"} err="failed to get container status \"248da601aa8cfe444b4d2752f6c50dd6c452812be254edffbc0ef15e8da7c9ca\": rpc error: code = NotFound desc = could not find container \"248da601aa8cfe444b4d2752f6c50dd6c452812be254edffbc0ef15e8da7c9ca\": container with ID starting with 248da601aa8cfe444b4d2752f6c50dd6c452812be254edffbc0ef15e8da7c9ca not found: ID does not exist" Jan 21 07:20:34 crc kubenswrapper[4893]: I0121 07:20:34.221471 4893 scope.go:117] "RemoveContainer" containerID="7a828cf64d9ab9610f2ae717d35628bfdb7449ce65a200216400aab16bb473ab" Jan 21 07:20:34 crc 
kubenswrapper[4893]: E0121 07:20:34.221899 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a828cf64d9ab9610f2ae717d35628bfdb7449ce65a200216400aab16bb473ab\": container with ID starting with 7a828cf64d9ab9610f2ae717d35628bfdb7449ce65a200216400aab16bb473ab not found: ID does not exist" containerID="7a828cf64d9ab9610f2ae717d35628bfdb7449ce65a200216400aab16bb473ab" Jan 21 07:20:34 crc kubenswrapper[4893]: I0121 07:20:34.221930 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a828cf64d9ab9610f2ae717d35628bfdb7449ce65a200216400aab16bb473ab"} err="failed to get container status \"7a828cf64d9ab9610f2ae717d35628bfdb7449ce65a200216400aab16bb473ab\": rpc error: code = NotFound desc = could not find container \"7a828cf64d9ab9610f2ae717d35628bfdb7449ce65a200216400aab16bb473ab\": container with ID starting with 7a828cf64d9ab9610f2ae717d35628bfdb7449ce65a200216400aab16bb473ab not found: ID does not exist" Jan 21 07:20:34 crc kubenswrapper[4893]: I0121 07:20:34.339069 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 07:20:34 crc kubenswrapper[4893]: I0121 07:20:34.339591 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="69509f4c-9c48-4b10-8174-52ebccf0c04e" containerName="nova-api-log" containerID="cri-o://3bdb0ac8222cef2fdf88855748ce81727fd56facc717ae90660539747d2fe8e8" gracePeriod=30 Jan 21 07:20:34 crc kubenswrapper[4893]: I0121 07:20:34.339691 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="69509f4c-9c48-4b10-8174-52ebccf0c04e" containerName="nova-api-api" containerID="cri-o://fedf9af09fdb94c48a24c201392690d01262f86176d501108677b5609b0c26c9" gracePeriod=30 Jan 21 07:20:34 crc kubenswrapper[4893]: I0121 07:20:34.357612 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 07:20:34 crc kubenswrapper[4893]: I0121 07:20:34.433343 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 07:20:35 crc kubenswrapper[4893]: I0121 07:20:35.184067 4893 generic.go:334] "Generic (PLEG): container finished" podID="68d23b48-e5b2-4154-87a6-1fef70653056" containerID="42348ea69c139f7e2e81e332a59dfd8cad34e7569e58a9e0d74abfe81e742780" exitCode=0 Jan 21 07:20:35 crc kubenswrapper[4893]: I0121 07:20:35.186288 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-mzrmf" event={"ID":"68d23b48-e5b2-4154-87a6-1fef70653056","Type":"ContainerDied","Data":"42348ea69c139f7e2e81e332a59dfd8cad34e7569e58a9e0d74abfe81e742780"} Jan 21 07:20:35 crc kubenswrapper[4893]: I0121 07:20:35.193906 4893 generic.go:334] "Generic (PLEG): container finished" podID="69509f4c-9c48-4b10-8174-52ebccf0c04e" containerID="3bdb0ac8222cef2fdf88855748ce81727fd56facc717ae90660539747d2fe8e8" exitCode=143 Jan 21 07:20:35 crc kubenswrapper[4893]: I0121 07:20:35.194117 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="fd59fb38-8064-4c73-907e-649b4e17b5c5" containerName="nova-metadata-log" containerID="cri-o://5cce0d581193a3c830cccb9e54a03a032471fb3400b6b3da0912668d14d7a710" gracePeriod=30 Jan 21 07:20:35 crc kubenswrapper[4893]: I0121 07:20:35.194372 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"69509f4c-9c48-4b10-8174-52ebccf0c04e","Type":"ContainerDied","Data":"3bdb0ac8222cef2fdf88855748ce81727fd56facc717ae90660539747d2fe8e8"} Jan 21 07:20:35 crc kubenswrapper[4893]: I0121 07:20:35.194502 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="7e358cba-9320-4b96-ab77-fe45c0e51a35" containerName="nova-scheduler-scheduler" containerID="cri-o://af97bbdf4980add377c23f0d3be3abac5b91bb6c85d3d5886b802405f9636306" gracePeriod=30 Jan 21 07:20:35 crc kubenswrapper[4893]: I0121 07:20:35.194631 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="fd59fb38-8064-4c73-907e-649b4e17b5c5" containerName="nova-metadata-metadata" containerID="cri-o://158c7f61cbdb790317a4f805a15be2827e0b542162d28c25962012fde2ab0396" gracePeriod=30 Jan 21 07:20:35 crc kubenswrapper[4893]: I0121 07:20:35.595989 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="123f1844-92a5-418f-a3df-b9f44943a91d" path="/var/lib/kubelet/pods/123f1844-92a5-418f-a3df-b9f44943a91d/volumes" Jan 21 07:20:35 crc kubenswrapper[4893]: I0121 07:20:35.892034 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.023170 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd59fb38-8064-4c73-907e-649b4e17b5c5-logs\") pod \"fd59fb38-8064-4c73-907e-649b4e17b5c5\" (UID: \"fd59fb38-8064-4c73-907e-649b4e17b5c5\") " Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.023336 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd59fb38-8064-4c73-907e-649b4e17b5c5-nova-metadata-tls-certs\") pod \"fd59fb38-8064-4c73-907e-649b4e17b5c5\" (UID: \"fd59fb38-8064-4c73-907e-649b4e17b5c5\") " Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.023387 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd59fb38-8064-4c73-907e-649b4e17b5c5-config-data\") pod \"fd59fb38-8064-4c73-907e-649b4e17b5c5\" (UID: \"fd59fb38-8064-4c73-907e-649b4e17b5c5\") " Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.023465 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2n89p\" (UniqueName: \"kubernetes.io/projected/fd59fb38-8064-4c73-907e-649b4e17b5c5-kube-api-access-2n89p\") pod \"fd59fb38-8064-4c73-907e-649b4e17b5c5\" (UID: \"fd59fb38-8064-4c73-907e-649b4e17b5c5\") " Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.023579 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd59fb38-8064-4c73-907e-649b4e17b5c5-combined-ca-bundle\") pod \"fd59fb38-8064-4c73-907e-649b4e17b5c5\" (UID: \"fd59fb38-8064-4c73-907e-649b4e17b5c5\") " Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.023738 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd59fb38-8064-4c73-907e-649b4e17b5c5-logs" (OuterVolumeSpecName: "logs") pod "fd59fb38-8064-4c73-907e-649b4e17b5c5" (UID: "fd59fb38-8064-4c73-907e-649b4e17b5c5"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.024204 4893 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd59fb38-8064-4c73-907e-649b4e17b5c5-logs\") on node \"crc\" DevicePath \"\"" Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.048003 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd59fb38-8064-4c73-907e-649b4e17b5c5-kube-api-access-2n89p" (OuterVolumeSpecName: "kube-api-access-2n89p") pod "fd59fb38-8064-4c73-907e-649b4e17b5c5" (UID: "fd59fb38-8064-4c73-907e-649b4e17b5c5"). InnerVolumeSpecName "kube-api-access-2n89p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.050661 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd59fb38-8064-4c73-907e-649b4e17b5c5-config-data" (OuterVolumeSpecName: "config-data") pod "fd59fb38-8064-4c73-907e-649b4e17b5c5" (UID: "fd59fb38-8064-4c73-907e-649b4e17b5c5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.063136 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd59fb38-8064-4c73-907e-649b4e17b5c5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd59fb38-8064-4c73-907e-649b4e17b5c5" (UID: "fd59fb38-8064-4c73-907e-649b4e17b5c5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.079504 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd59fb38-8064-4c73-907e-649b4e17b5c5-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "fd59fb38-8064-4c73-907e-649b4e17b5c5" (UID: "fd59fb38-8064-4c73-907e-649b4e17b5c5"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.125893 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd59fb38-8064-4c73-907e-649b4e17b5c5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.125937 4893 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd59fb38-8064-4c73-907e-649b4e17b5c5-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.125951 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd59fb38-8064-4c73-907e-649b4e17b5c5-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.125963 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2n89p\" (UniqueName: \"kubernetes.io/projected/fd59fb38-8064-4c73-907e-649b4e17b5c5-kube-api-access-2n89p\") on node \"crc\" DevicePath \"\"" Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.209259 4893 generic.go:334] "Generic (PLEG): container finished" podID="fd59fb38-8064-4c73-907e-649b4e17b5c5" containerID="158c7f61cbdb790317a4f805a15be2827e0b542162d28c25962012fde2ab0396" exitCode=0 Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.209301 4893 generic.go:334] "Generic (PLEG): container finished" podID="fd59fb38-8064-4c73-907e-649b4e17b5c5" containerID="5cce0d581193a3c830cccb9e54a03a032471fb3400b6b3da0912668d14d7a710" exitCode=143 Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.209312 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fd59fb38-8064-4c73-907e-649b4e17b5c5","Type":"ContainerDied","Data":"158c7f61cbdb790317a4f805a15be2827e0b542162d28c25962012fde2ab0396"} Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.209364 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fd59fb38-8064-4c73-907e-649b4e17b5c5","Type":"ContainerDied","Data":"5cce0d581193a3c830cccb9e54a03a032471fb3400b6b3da0912668d14d7a710"} Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.209379 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fd59fb38-8064-4c73-907e-649b4e17b5c5","Type":"ContainerDied","Data":"d44ec83a2cd1f74d27a22baa390ef7d495e014e3b0eefa1829938f66bff13562"} Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.209399 4893 scope.go:117] "RemoveContainer" containerID="158c7f61cbdb790317a4f805a15be2827e0b542162d28c25962012fde2ab0396" Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.212108 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0"
Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.250333 4893 scope.go:117] "RemoveContainer" containerID="5cce0d581193a3c830cccb9e54a03a032471fb3400b6b3da0912668d14d7a710"
Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.257552 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.266129 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.288187 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 21 07:20:36 crc kubenswrapper[4893]: E0121 07:20:36.290500 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd59fb38-8064-4c73-907e-649b4e17b5c5" containerName="nova-metadata-metadata"
Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.290540 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd59fb38-8064-4c73-907e-649b4e17b5c5" containerName="nova-metadata-metadata"
Jan 21 07:20:36 crc kubenswrapper[4893]: E0121 07:20:36.290560 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dab58e44-b25e-4390-b604-ea1e17365c8e" containerName="nova-manage"
Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.290568 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="dab58e44-b25e-4390-b604-ea1e17365c8e" containerName="nova-manage"
Jan 21 07:20:36 crc kubenswrapper[4893]: E0121 07:20:36.290609 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="123f1844-92a5-418f-a3df-b9f44943a91d" containerName="dnsmasq-dns"
Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.290619 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="123f1844-92a5-418f-a3df-b9f44943a91d" containerName="dnsmasq-dns"
Jan 21 07:20:36 crc kubenswrapper[4893]: E0121 07:20:36.290628 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="123f1844-92a5-418f-a3df-b9f44943a91d" containerName="init"
Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.290635 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="123f1844-92a5-418f-a3df-b9f44943a91d" containerName="init"
Jan 21 07:20:36 crc kubenswrapper[4893]: E0121 07:20:36.290657 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd59fb38-8064-4c73-907e-649b4e17b5c5" containerName="nova-metadata-log"
Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.290664 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd59fb38-8064-4c73-907e-649b4e17b5c5" containerName="nova-metadata-log"
Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.291040 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="123f1844-92a5-418f-a3df-b9f44943a91d" containerName="dnsmasq-dns"
Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.291059 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd59fb38-8064-4c73-907e-649b4e17b5c5" containerName="nova-metadata-metadata"
Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.291094 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd59fb38-8064-4c73-907e-649b4e17b5c5" containerName="nova-metadata-log"
Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.291106 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="dab58e44-b25e-4390-b604-ea1e17365c8e" containerName="nova-manage"
Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.294513 4893 scope.go:117] "RemoveContainer" containerID="158c7f61cbdb790317a4f805a15be2827e0b542162d28c25962012fde2ab0396"
Jan 21 07:20:36 crc kubenswrapper[4893]: E0121 07:20:36.295269 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"158c7f61cbdb790317a4f805a15be2827e0b542162d28c25962012fde2ab0396\": container with ID starting with 158c7f61cbdb790317a4f805a15be2827e0b542162d28c25962012fde2ab0396 not found: ID does not exist" containerID="158c7f61cbdb790317a4f805a15be2827e0b542162d28c25962012fde2ab0396"
Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.295311 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"158c7f61cbdb790317a4f805a15be2827e0b542162d28c25962012fde2ab0396"} err="failed to get container status \"158c7f61cbdb790317a4f805a15be2827e0b542162d28c25962012fde2ab0396\": rpc error: code = NotFound desc = could not find container \"158c7f61cbdb790317a4f805a15be2827e0b542162d28c25962012fde2ab0396\": container with ID starting with 158c7f61cbdb790317a4f805a15be2827e0b542162d28c25962012fde2ab0396 not found: ID does not exist"
Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.295332 4893 scope.go:117] "RemoveContainer" containerID="5cce0d581193a3c830cccb9e54a03a032471fb3400b6b3da0912668d14d7a710"
Jan 21 07:20:36 crc kubenswrapper[4893]: E0121 07:20:36.295719 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cce0d581193a3c830cccb9e54a03a032471fb3400b6b3da0912668d14d7a710\": container with ID starting with 5cce0d581193a3c830cccb9e54a03a032471fb3400b6b3da0912668d14d7a710 not found: ID does not exist" containerID="5cce0d581193a3c830cccb9e54a03a032471fb3400b6b3da0912668d14d7a710"
Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.295739 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cce0d581193a3c830cccb9e54a03a032471fb3400b6b3da0912668d14d7a710"} err="failed to get container status \"5cce0d581193a3c830cccb9e54a03a032471fb3400b6b3da0912668d14d7a710\": rpc error: code = NotFound desc = could not find container \"5cce0d581193a3c830cccb9e54a03a032471fb3400b6b3da0912668d14d7a710\": container with ID starting with 5cce0d581193a3c830cccb9e54a03a032471fb3400b6b3da0912668d14d7a710 not found: ID does not exist"
Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.295753 4893 scope.go:117] "RemoveContainer" containerID="158c7f61cbdb790317a4f805a15be2827e0b542162d28c25962012fde2ab0396"
Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.296001 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"158c7f61cbdb790317a4f805a15be2827e0b542162d28c25962012fde2ab0396"} err="failed to get container status \"158c7f61cbdb790317a4f805a15be2827e0b542162d28c25962012fde2ab0396\": rpc error: code = NotFound desc = could not find container \"158c7f61cbdb790317a4f805a15be2827e0b542162d28c25962012fde2ab0396\": container with ID starting with 158c7f61cbdb790317a4f805a15be2827e0b542162d28c25962012fde2ab0396 not found: ID does not exist"
Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.296016 4893 scope.go:117] "RemoveContainer" containerID="5cce0d581193a3c830cccb9e54a03a032471fb3400b6b3da0912668d14d7a710"
Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.296237 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cce0d581193a3c830cccb9e54a03a032471fb3400b6b3da0912668d14d7a710"} err="failed to get container status \"5cce0d581193a3c830cccb9e54a03a032471fb3400b6b3da0912668d14d7a710\": rpc error: code = NotFound desc = could not find container \"5cce0d581193a3c830cccb9e54a03a032471fb3400b6b3da0912668d14d7a710\": container with ID starting with 5cce0d581193a3c830cccb9e54a03a032471fb3400b6b3da0912668d14d7a710 not found: ID does not exist"
Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.296360 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.299400 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.300886 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.310661 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.433912 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/389da817-faf8-4ae5-87c7-baa855b6dbfd-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"389da817-faf8-4ae5-87c7-baa855b6dbfd\") " pod="openstack/nova-metadata-0"
Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.433989 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/389da817-faf8-4ae5-87c7-baa855b6dbfd-logs\") pod \"nova-metadata-0\" (UID: \"389da817-faf8-4ae5-87c7-baa855b6dbfd\") " pod="openstack/nova-metadata-0"
Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.434047 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/389da817-faf8-4ae5-87c7-baa855b6dbfd-config-data\") pod \"nova-metadata-0\" (UID: \"389da817-faf8-4ae5-87c7-baa855b6dbfd\") " pod="openstack/nova-metadata-0"
Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.434395 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkd7c\" (UniqueName: \"kubernetes.io/projected/389da817-faf8-4ae5-87c7-baa855b6dbfd-kube-api-access-zkd7c\") pod \"nova-metadata-0\" (UID: \"389da817-faf8-4ae5-87c7-baa855b6dbfd\") " pod="openstack/nova-metadata-0"
Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.434628 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/389da817-faf8-4ae5-87c7-baa855b6dbfd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"389da817-faf8-4ae5-87c7-baa855b6dbfd\") " pod="openstack/nova-metadata-0"
Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.536735 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/389da817-faf8-4ae5-87c7-baa855b6dbfd-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"389da817-faf8-4ae5-87c7-baa855b6dbfd\") " pod="openstack/nova-metadata-0"
Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.536823 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/389da817-faf8-4ae5-87c7-baa855b6dbfd-logs\") pod \"nova-metadata-0\" (UID: \"389da817-faf8-4ae5-87c7-baa855b6dbfd\") " pod="openstack/nova-metadata-0"
Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.536881 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/389da817-faf8-4ae5-87c7-baa855b6dbfd-config-data\") pod \"nova-metadata-0\" (UID: \"389da817-faf8-4ae5-87c7-baa855b6dbfd\") " pod="openstack/nova-metadata-0"
Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.537005 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkd7c\" (UniqueName: \"kubernetes.io/projected/389da817-faf8-4ae5-87c7-baa855b6dbfd-kube-api-access-zkd7c\") pod \"nova-metadata-0\" (UID: \"389da817-faf8-4ae5-87c7-baa855b6dbfd\") " pod="openstack/nova-metadata-0"
Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.537086 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/389da817-faf8-4ae5-87c7-baa855b6dbfd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"389da817-faf8-4ae5-87c7-baa855b6dbfd\") " pod="openstack/nova-metadata-0"
Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.537531 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/389da817-faf8-4ae5-87c7-baa855b6dbfd-logs\") pod \"nova-metadata-0\" (UID: \"389da817-faf8-4ae5-87c7-baa855b6dbfd\") " pod="openstack/nova-metadata-0"
Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.541982 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/389da817-faf8-4ae5-87c7-baa855b6dbfd-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"389da817-faf8-4ae5-87c7-baa855b6dbfd\") " pod="openstack/nova-metadata-0"
Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.542851 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/389da817-faf8-4ae5-87c7-baa855b6dbfd-config-data\") pod \"nova-metadata-0\" (UID: \"389da817-faf8-4ae5-87c7-baa855b6dbfd\") " pod="openstack/nova-metadata-0"
Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.548440 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/389da817-faf8-4ae5-87c7-baa855b6dbfd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"389da817-faf8-4ae5-87c7-baa855b6dbfd\") " pod="openstack/nova-metadata-0"
Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.557862 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkd7c\" (UniqueName: \"kubernetes.io/projected/389da817-faf8-4ae5-87c7-baa855b6dbfd-kube-api-access-zkd7c\") pod \"nova-metadata-0\" (UID: \"389da817-faf8-4ae5-87c7-baa855b6dbfd\") " pod="openstack/nova-metadata-0"
Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.619725 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-mzrmf"
Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.667277 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.741717 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68d23b48-e5b2-4154-87a6-1fef70653056-combined-ca-bundle\") pod \"68d23b48-e5b2-4154-87a6-1fef70653056\" (UID: \"68d23b48-e5b2-4154-87a6-1fef70653056\") "
Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.742141 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68d23b48-e5b2-4154-87a6-1fef70653056-config-data\") pod \"68d23b48-e5b2-4154-87a6-1fef70653056\" (UID: \"68d23b48-e5b2-4154-87a6-1fef70653056\") "
Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.742191 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68d23b48-e5b2-4154-87a6-1fef70653056-scripts\") pod \"68d23b48-e5b2-4154-87a6-1fef70653056\" (UID: \"68d23b48-e5b2-4154-87a6-1fef70653056\") "
Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.742349 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7892\" (UniqueName: \"kubernetes.io/projected/68d23b48-e5b2-4154-87a6-1fef70653056-kube-api-access-t7892\") pod \"68d23b48-e5b2-4154-87a6-1fef70653056\" (UID: \"68d23b48-e5b2-4154-87a6-1fef70653056\") "
Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.745635 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68d23b48-e5b2-4154-87a6-1fef70653056-scripts" (OuterVolumeSpecName: "scripts") pod "68d23b48-e5b2-4154-87a6-1fef70653056" (UID: "68d23b48-e5b2-4154-87a6-1fef70653056"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.746922 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68d23b48-e5b2-4154-87a6-1fef70653056-kube-api-access-t7892" (OuterVolumeSpecName: "kube-api-access-t7892") pod "68d23b48-e5b2-4154-87a6-1fef70653056" (UID: "68d23b48-e5b2-4154-87a6-1fef70653056"). InnerVolumeSpecName "kube-api-access-t7892". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.770616 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68d23b48-e5b2-4154-87a6-1fef70653056-config-data" (OuterVolumeSpecName: "config-data") pod "68d23b48-e5b2-4154-87a6-1fef70653056" (UID: "68d23b48-e5b2-4154-87a6-1fef70653056"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.809771 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68d23b48-e5b2-4154-87a6-1fef70653056-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "68d23b48-e5b2-4154-87a6-1fef70653056" (UID: "68d23b48-e5b2-4154-87a6-1fef70653056"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.844909 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7892\" (UniqueName: \"kubernetes.io/projected/68d23b48-e5b2-4154-87a6-1fef70653056-kube-api-access-t7892\") on node \"crc\" DevicePath \"\""
Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.844950 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68d23b48-e5b2-4154-87a6-1fef70653056-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.844964 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68d23b48-e5b2-4154-87a6-1fef70653056-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 07:20:36 crc kubenswrapper[4893]: I0121 07:20:36.844980 4893 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68d23b48-e5b2-4154-87a6-1fef70653056-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 07:20:37 crc kubenswrapper[4893]: I0121 07:20:37.149968 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 21 07:20:37 crc kubenswrapper[4893]: W0121 07:20:37.153985 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod389da817_faf8_4ae5_87c7_baa855b6dbfd.slice/crio-1f82b18367305ca26cc5adc126d7f19bf7fcfef62413bb1760120e683d38facf WatchSource:0}: Error finding container 1f82b18367305ca26cc5adc126d7f19bf7fcfef62413bb1760120e683d38facf: Status 404 returned error can't find the container with id 1f82b18367305ca26cc5adc126d7f19bf7fcfef62413bb1760120e683d38facf
Jan 21 07:20:37 crc kubenswrapper[4893]: I0121 07:20:37.226883 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-mzrmf"
Jan 21 07:20:37 crc kubenswrapper[4893]: I0121 07:20:37.226931 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-mzrmf" event={"ID":"68d23b48-e5b2-4154-87a6-1fef70653056","Type":"ContainerDied","Data":"52df5c038547b6e89112bbd328ee4a98f54a4ca3b87df05fddbcedbcd0246380"}
Jan 21 07:20:37 crc kubenswrapper[4893]: I0121 07:20:37.227436 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52df5c038547b6e89112bbd328ee4a98f54a4ca3b87df05fddbcedbcd0246380"
Jan 21 07:20:37 crc kubenswrapper[4893]: I0121 07:20:37.235636 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"389da817-faf8-4ae5-87c7-baa855b6dbfd","Type":"ContainerStarted","Data":"1f82b18367305ca26cc5adc126d7f19bf7fcfef62413bb1760120e683d38facf"}
Jan 21 07:20:37 crc kubenswrapper[4893]: I0121 07:20:37.290485 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 21 07:20:37 crc kubenswrapper[4893]: E0121 07:20:37.291051 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68d23b48-e5b2-4154-87a6-1fef70653056" containerName="nova-cell1-conductor-db-sync"
Jan 21 07:20:37 crc kubenswrapper[4893]: I0121 07:20:37.291070 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="68d23b48-e5b2-4154-87a6-1fef70653056" containerName="nova-cell1-conductor-db-sync"
Jan 21 07:20:37 crc kubenswrapper[4893]: I0121 07:20:37.291312 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="68d23b48-e5b2-4154-87a6-1fef70653056" containerName="nova-cell1-conductor-db-sync"
Jan 21 07:20:37 crc kubenswrapper[4893]: I0121 07:20:37.292027 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 21 07:20:37 crc kubenswrapper[4893]: I0121 07:20:37.294825 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Jan 21 07:20:37 crc kubenswrapper[4893]: I0121 07:20:37.334519 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 21 07:20:37 crc kubenswrapper[4893]: E0121 07:20:37.401955 4893 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68d23b48_e5b2_4154_87a6_1fef70653056.slice\": RecentStats: unable to find data in memory cache]"
Jan 21 07:20:37 crc kubenswrapper[4893]: I0121 07:20:37.456217 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7722b5d-ba92-4332-93c7-bc3aa9bfdb33-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f7722b5d-ba92-4332-93c7-bc3aa9bfdb33\") " pod="openstack/nova-cell1-conductor-0"
Jan 21 07:20:37 crc kubenswrapper[4893]: I0121 07:20:37.456586 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67q4g\" (UniqueName: \"kubernetes.io/projected/f7722b5d-ba92-4332-93c7-bc3aa9bfdb33-kube-api-access-67q4g\") pod \"nova-cell1-conductor-0\" (UID: \"f7722b5d-ba92-4332-93c7-bc3aa9bfdb33\") " pod="openstack/nova-cell1-conductor-0"
Jan 21 07:20:37 crc kubenswrapper[4893]: I0121 07:20:37.456623 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7722b5d-ba92-4332-93c7-bc3aa9bfdb33-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f7722b5d-ba92-4332-93c7-bc3aa9bfdb33\") " pod="openstack/nova-cell1-conductor-0"
Jan 21 07:20:37 crc kubenswrapper[4893]: I0121 07:20:37.558827 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7722b5d-ba92-4332-93c7-bc3aa9bfdb33-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f7722b5d-ba92-4332-93c7-bc3aa9bfdb33\") " pod="openstack/nova-cell1-conductor-0"
Jan 21 07:20:37 crc kubenswrapper[4893]: I0121 07:20:37.558906 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67q4g\" (UniqueName: \"kubernetes.io/projected/f7722b5d-ba92-4332-93c7-bc3aa9bfdb33-kube-api-access-67q4g\") pod \"nova-cell1-conductor-0\" (UID: \"f7722b5d-ba92-4332-93c7-bc3aa9bfdb33\") " pod="openstack/nova-cell1-conductor-0"
Jan 21 07:20:37 crc kubenswrapper[4893]: I0121 07:20:37.558934 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7722b5d-ba92-4332-93c7-bc3aa9bfdb33-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f7722b5d-ba92-4332-93c7-bc3aa9bfdb33\") " pod="openstack/nova-cell1-conductor-0"
Jan 21 07:20:37 crc kubenswrapper[4893]: I0121 07:20:37.565500 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7722b5d-ba92-4332-93c7-bc3aa9bfdb33-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f7722b5d-ba92-4332-93c7-bc3aa9bfdb33\") " pod="openstack/nova-cell1-conductor-0"
Jan 21 07:20:37 crc kubenswrapper[4893]: I0121 07:20:37.567969 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7722b5d-ba92-4332-93c7-bc3aa9bfdb33-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f7722b5d-ba92-4332-93c7-bc3aa9bfdb33\") " pod="openstack/nova-cell1-conductor-0"
Jan 21 07:20:37 crc kubenswrapper[4893]: I0121 07:20:37.575204 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67q4g\" (UniqueName: \"kubernetes.io/projected/f7722b5d-ba92-4332-93c7-bc3aa9bfdb33-kube-api-access-67q4g\") pod \"nova-cell1-conductor-0\" (UID: \"f7722b5d-ba92-4332-93c7-bc3aa9bfdb33\") " pod="openstack/nova-cell1-conductor-0"
Jan 21 07:20:37 crc kubenswrapper[4893]: I0121 07:20:37.596313 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd59fb38-8064-4c73-907e-649b4e17b5c5" path="/var/lib/kubelet/pods/fd59fb38-8064-4c73-907e-649b4e17b5c5/volumes"
Jan 21 07:20:37 crc kubenswrapper[4893]: I0121 07:20:37.679706 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 21 07:20:37 crc kubenswrapper[4893]: I0121 07:20:37.884235 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 21 07:20:37 crc kubenswrapper[4893]: I0121 07:20:37.969332 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxjh8\" (UniqueName: \"kubernetes.io/projected/7e358cba-9320-4b96-ab77-fe45c0e51a35-kube-api-access-mxjh8\") pod \"7e358cba-9320-4b96-ab77-fe45c0e51a35\" (UID: \"7e358cba-9320-4b96-ab77-fe45c0e51a35\") "
Jan 21 07:20:37 crc kubenswrapper[4893]: I0121 07:20:37.969556 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e358cba-9320-4b96-ab77-fe45c0e51a35-combined-ca-bundle\") pod \"7e358cba-9320-4b96-ab77-fe45c0e51a35\" (UID: \"7e358cba-9320-4b96-ab77-fe45c0e51a35\") "
Jan 21 07:20:37 crc kubenswrapper[4893]: I0121 07:20:37.969703 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e358cba-9320-4b96-ab77-fe45c0e51a35-config-data\") pod \"7e358cba-9320-4b96-ab77-fe45c0e51a35\" (UID: \"7e358cba-9320-4b96-ab77-fe45c0e51a35\") "
Jan 21 07:20:37 crc kubenswrapper[4893]: I0121 07:20:37.974657 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e358cba-9320-4b96-ab77-fe45c0e51a35-kube-api-access-mxjh8" (OuterVolumeSpecName: "kube-api-access-mxjh8") pod "7e358cba-9320-4b96-ab77-fe45c0e51a35" (UID: "7e358cba-9320-4b96-ab77-fe45c0e51a35"). InnerVolumeSpecName "kube-api-access-mxjh8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 07:20:37 crc kubenswrapper[4893]: I0121 07:20:37.997846 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e358cba-9320-4b96-ab77-fe45c0e51a35-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e358cba-9320-4b96-ab77-fe45c0e51a35" (UID: "7e358cba-9320-4b96-ab77-fe45c0e51a35"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 07:20:38 crc kubenswrapper[4893]: I0121 07:20:38.006838 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e358cba-9320-4b96-ab77-fe45c0e51a35-config-data" (OuterVolumeSpecName: "config-data") pod "7e358cba-9320-4b96-ab77-fe45c0e51a35" (UID: "7e358cba-9320-4b96-ab77-fe45c0e51a35"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 07:20:38 crc kubenswrapper[4893]: I0121 07:20:38.071566 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e358cba-9320-4b96-ab77-fe45c0e51a35-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 07:20:38 crc kubenswrapper[4893]: I0121 07:20:38.071601 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxjh8\" (UniqueName: \"kubernetes.io/projected/7e358cba-9320-4b96-ab77-fe45c0e51a35-kube-api-access-mxjh8\") on node \"crc\" DevicePath \"\""
Jan 21 07:20:38 crc kubenswrapper[4893]: I0121 07:20:38.071612 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e358cba-9320-4b96-ab77-fe45c0e51a35-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 07:20:38 crc kubenswrapper[4893]: W0121 07:20:38.195942 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7722b5d_ba92_4332_93c7_bc3aa9bfdb33.slice/crio-0bf95a6fdcc0ae3f81f550cc775fec3fcee4e15a83cc525de4f80754cc16c083 WatchSource:0}: Error finding container 0bf95a6fdcc0ae3f81f550cc775fec3fcee4e15a83cc525de4f80754cc16c083: Status 404 returned error can't find the container with id 0bf95a6fdcc0ae3f81f550cc775fec3fcee4e15a83cc525de4f80754cc16c083
Jan 21 07:20:38 crc kubenswrapper[4893]: I0121 07:20:38.196046 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 21 07:20:38 crc kubenswrapper[4893]: I0121 07:20:38.263660 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"389da817-faf8-4ae5-87c7-baa855b6dbfd","Type":"ContainerStarted","Data":"464e83f2ee6ddc904cb0c2f30a2e9e9ff5e51b54b0dc10182aa679870dcce8ad"}
Jan 21 07:20:38 crc kubenswrapper[4893]: I0121 07:20:38.263740 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"389da817-faf8-4ae5-87c7-baa855b6dbfd","Type":"ContainerStarted","Data":"2c2c02392de0b0a37af88f2c56d0d651f8087c548af9e04ca796d743db6bb733"}
Jan 21 07:20:38 crc kubenswrapper[4893]: I0121 07:20:38.266125 4893 generic.go:334] "Generic (PLEG): container finished" podID="7e358cba-9320-4b96-ab77-fe45c0e51a35" containerID="af97bbdf4980add377c23f0d3be3abac5b91bb6c85d3d5886b802405f9636306" exitCode=0
Jan 21 07:20:38 crc kubenswrapper[4893]: I0121 07:20:38.266214 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7e358cba-9320-4b96-ab77-fe45c0e51a35","Type":"ContainerDied","Data":"af97bbdf4980add377c23f0d3be3abac5b91bb6c85d3d5886b802405f9636306"}
Jan 21 07:20:38 crc kubenswrapper[4893]: I0121 07:20:38.266252 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7e358cba-9320-4b96-ab77-fe45c0e51a35","Type":"ContainerDied","Data":"4da44e30ded47b4c08a12c74eac476faf38bdb5d160e44820c38a41ac4c6f42d"}
Jan 21 07:20:38 crc kubenswrapper[4893]: I0121 07:20:38.266277 4893 scope.go:117] "RemoveContainer" containerID="af97bbdf4980add377c23f0d3be3abac5b91bb6c85d3d5886b802405f9636306"
Jan 21 07:20:38 crc kubenswrapper[4893]: I0121 07:20:38.266467 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 21 07:20:38 crc kubenswrapper[4893]: I0121 07:20:38.278307 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"f7722b5d-ba92-4332-93c7-bc3aa9bfdb33","Type":"ContainerStarted","Data":"0bf95a6fdcc0ae3f81f550cc775fec3fcee4e15a83cc525de4f80754cc16c083"}
Jan 21 07:20:38 crc kubenswrapper[4893]: I0121 07:20:38.308104 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.30805594 podStartE2EDuration="2.30805594s" podCreationTimestamp="2026-01-21 07:20:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:20:38.281827744 +0000 UTC m=+1579.512173646" watchObservedRunningTime="2026-01-21 07:20:38.30805594 +0000 UTC m=+1579.538401842"
Jan 21 07:20:38 crc kubenswrapper[4893]: I0121 07:20:38.312415 4893 scope.go:117] "RemoveContainer" containerID="af97bbdf4980add377c23f0d3be3abac5b91bb6c85d3d5886b802405f9636306"
Jan 21 07:20:38 crc kubenswrapper[4893]: E0121 07:20:38.313080 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af97bbdf4980add377c23f0d3be3abac5b91bb6c85d3d5886b802405f9636306\": container with ID starting with af97bbdf4980add377c23f0d3be3abac5b91bb6c85d3d5886b802405f9636306 not found: ID does not exist" containerID="af97bbdf4980add377c23f0d3be3abac5b91bb6c85d3d5886b802405f9636306"
Jan 21 07:20:38 crc kubenswrapper[4893]: I0121 07:20:38.313270 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af97bbdf4980add377c23f0d3be3abac5b91bb6c85d3d5886b802405f9636306"} err="failed to get container status \"af97bbdf4980add377c23f0d3be3abac5b91bb6c85d3d5886b802405f9636306\": rpc error: code = NotFound desc = could not find container \"af97bbdf4980add377c23f0d3be3abac5b91bb6c85d3d5886b802405f9636306\": container with ID starting with af97bbdf4980add377c23f0d3be3abac5b91bb6c85d3d5886b802405f9636306 not found: ID does not exist"
Jan 21 07:20:38 crc kubenswrapper[4893]: I0121 07:20:38.328736 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 21 07:20:38 crc kubenswrapper[4893]: I0121 07:20:38.342816 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 21 07:20:38 crc kubenswrapper[4893]: I0121 07:20:38.352874 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Jan 21 07:20:38 crc kubenswrapper[4893]: E0121 07:20:38.353430 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e358cba-9320-4b96-ab77-fe45c0e51a35" containerName="nova-scheduler-scheduler"
Jan 21 07:20:38 crc kubenswrapper[4893]: I0121 07:20:38.353447 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e358cba-9320-4b96-ab77-fe45c0e51a35" containerName="nova-scheduler-scheduler"
Jan 21 07:20:38 crc kubenswrapper[4893]: I0121 07:20:38.353772 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e358cba-9320-4b96-ab77-fe45c0e51a35" containerName="nova-scheduler-scheduler"
Jan 21 07:20:38 crc kubenswrapper[4893]: I0121 07:20:38.354579 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 21 07:20:38 crc kubenswrapper[4893]: I0121 07:20:38.361211 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 21 07:20:38 crc kubenswrapper[4893]: I0121 07:20:38.365896 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 21 07:20:38 crc kubenswrapper[4893]: I0121 07:20:38.377864 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/134a93ab-9a65-43e9-bcf2-6fa5bd4105aa-config-data\") pod \"nova-scheduler-0\" (UID: \"134a93ab-9a65-43e9-bcf2-6fa5bd4105aa\") " pod="openstack/nova-scheduler-0"
Jan 21 07:20:38 crc kubenswrapper[4893]: I0121 07:20:38.377956 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsxrp\" (UniqueName: \"kubernetes.io/projected/134a93ab-9a65-43e9-bcf2-6fa5bd4105aa-kube-api-access-wsxrp\") pod \"nova-scheduler-0\" (UID: \"134a93ab-9a65-43e9-bcf2-6fa5bd4105aa\") " pod="openstack/nova-scheduler-0"
Jan 21 07:20:38 crc kubenswrapper[4893]: I0121 07:20:38.378050 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/134a93ab-9a65-43e9-bcf2-6fa5bd4105aa-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"134a93ab-9a65-43e9-bcf2-6fa5bd4105aa\") " pod="openstack/nova-scheduler-0"
Jan 21 07:20:38 crc kubenswrapper[4893]: I0121 07:20:38.480305 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/134a93ab-9a65-43e9-bcf2-6fa5bd4105aa-config-data\") pod \"nova-scheduler-0\" (UID: \"134a93ab-9a65-43e9-bcf2-6fa5bd4105aa\") " pod="openstack/nova-scheduler-0"
Jan 21 07:20:38 crc kubenswrapper[4893]: I0121 07:20:38.480384 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsxrp\" (UniqueName: \"kubernetes.io/projected/134a93ab-9a65-43e9-bcf2-6fa5bd4105aa-kube-api-access-wsxrp\") pod \"nova-scheduler-0\" (UID: \"134a93ab-9a65-43e9-bcf2-6fa5bd4105aa\") " pod="openstack/nova-scheduler-0"
Jan 21 07:20:38 crc kubenswrapper[4893]: I0121 07:20:38.480446 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/134a93ab-9a65-43e9-bcf2-6fa5bd4105aa-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"134a93ab-9a65-43e9-bcf2-6fa5bd4105aa\") " pod="openstack/nova-scheduler-0"
Jan 21 07:20:38 crc kubenswrapper[4893]: I0121 07:20:38.484846 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/134a93ab-9a65-43e9-bcf2-6fa5bd4105aa-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"134a93ab-9a65-43e9-bcf2-6fa5bd4105aa\") " pod="openstack/nova-scheduler-0"
Jan 21 07:20:38 crc kubenswrapper[4893]: I0121 07:20:38.486131 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/134a93ab-9a65-43e9-bcf2-6fa5bd4105aa-config-data\") pod \"nova-scheduler-0\" (UID: \"134a93ab-9a65-43e9-bcf2-6fa5bd4105aa\") " pod="openstack/nova-scheduler-0"
Jan 21 07:20:38 crc kubenswrapper[4893]: I0121 07:20:38.506285 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsxrp\" (UniqueName: \"kubernetes.io/projected/134a93ab-9a65-43e9-bcf2-6fa5bd4105aa-kube-api-access-wsxrp\") pod \"nova-scheduler-0\" (UID: \"134a93ab-9a65-43e9-bcf2-6fa5bd4105aa\") " pod="openstack/nova-scheduler-0"
Jan 21 07:20:38 crc kubenswrapper[4893]: I0121 07:20:38.743884 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 21 07:20:39 crc kubenswrapper[4893]: I0121 07:20:39.159067 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 21 07:20:39 crc kubenswrapper[4893]: I0121 07:20:39.230114 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69509f4c-9c48-4b10-8174-52ebccf0c04e-combined-ca-bundle\") pod \"69509f4c-9c48-4b10-8174-52ebccf0c04e\" (UID: \"69509f4c-9c48-4b10-8174-52ebccf0c04e\") "
Jan 21 07:20:39 crc kubenswrapper[4893]: I0121 07:20:39.230524 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cv9v\" (UniqueName: \"kubernetes.io/projected/69509f4c-9c48-4b10-8174-52ebccf0c04e-kube-api-access-8cv9v\") pod \"69509f4c-9c48-4b10-8174-52ebccf0c04e\" (UID: \"69509f4c-9c48-4b10-8174-52ebccf0c04e\") "
Jan 21 07:20:39 crc kubenswrapper[4893]: I0121 07:20:39.230634 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69509f4c-9c48-4b10-8174-52ebccf0c04e-logs\") pod \"69509f4c-9c48-4b10-8174-52ebccf0c04e\" (UID: \"69509f4c-9c48-4b10-8174-52ebccf0c04e\") "
Jan 21 07:20:39 crc kubenswrapper[4893]: I0121 07:20:39.230723 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69509f4c-9c48-4b10-8174-52ebccf0c04e-config-data\") pod \"69509f4c-9c48-4b10-8174-52ebccf0c04e\" (UID: \"69509f4c-9c48-4b10-8174-52ebccf0c04e\") "
Jan 21 07:20:39 crc kubenswrapper[4893]: I0121 07:20:39.231182 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69509f4c-9c48-4b10-8174-52ebccf0c04e-logs" (OuterVolumeSpecName: "logs") pod "69509f4c-9c48-4b10-8174-52ebccf0c04e" (UID: "69509f4c-9c48-4b10-8174-52ebccf0c04e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 07:20:39 crc kubenswrapper[4893]: I0121 07:20:39.235364 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69509f4c-9c48-4b10-8174-52ebccf0c04e-kube-api-access-8cv9v" (OuterVolumeSpecName: "kube-api-access-8cv9v") pod "69509f4c-9c48-4b10-8174-52ebccf0c04e" (UID: "69509f4c-9c48-4b10-8174-52ebccf0c04e"). InnerVolumeSpecName "kube-api-access-8cv9v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 07:20:39 crc kubenswrapper[4893]: I0121 07:20:39.257198 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69509f4c-9c48-4b10-8174-52ebccf0c04e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "69509f4c-9c48-4b10-8174-52ebccf0c04e" (UID: "69509f4c-9c48-4b10-8174-52ebccf0c04e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 07:20:39 crc kubenswrapper[4893]: I0121 07:20:39.259347 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69509f4c-9c48-4b10-8174-52ebccf0c04e-config-data" (OuterVolumeSpecName: "config-data") pod "69509f4c-9c48-4b10-8174-52ebccf0c04e" (UID: "69509f4c-9c48-4b10-8174-52ebccf0c04e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 07:20:39 crc kubenswrapper[4893]: I0121 07:20:39.290993 4893 generic.go:334] "Generic (PLEG): container finished" podID="69509f4c-9c48-4b10-8174-52ebccf0c04e" containerID="fedf9af09fdb94c48a24c201392690d01262f86176d501108677b5609b0c26c9" exitCode=0
Jan 21 07:20:39 crc kubenswrapper[4893]: I0121 07:20:39.291069 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 21 07:20:39 crc kubenswrapper[4893]: I0121 07:20:39.291090 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"69509f4c-9c48-4b10-8174-52ebccf0c04e","Type":"ContainerDied","Data":"fedf9af09fdb94c48a24c201392690d01262f86176d501108677b5609b0c26c9"}
Jan 21 07:20:39 crc kubenswrapper[4893]: I0121 07:20:39.291132 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"69509f4c-9c48-4b10-8174-52ebccf0c04e","Type":"ContainerDied","Data":"b70d96a03a44a9b96262c36cd3886e588144fb3a29b9e38339f5f3fb954ecdc6"}
Jan 21 07:20:39 crc kubenswrapper[4893]: I0121 07:20:39.291158 4893 scope.go:117] "RemoveContainer" containerID="fedf9af09fdb94c48a24c201392690d01262f86176d501108677b5609b0c26c9"
Jan 21 07:20:39 crc kubenswrapper[4893]: I0121 07:20:39.296197 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"f7722b5d-ba92-4332-93c7-bc3aa9bfdb33","Type":"ContainerStarted","Data":"805ea082486a9771af6cebd7498e3962947faff7e48ac3cc9a7f4ffadd851b1a"}
Jan 21 07:20:39 crc kubenswrapper[4893]: I0121 07:20:39.328517 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.3284883880000002 podStartE2EDuration="2.328488388s" podCreationTimestamp="2026-01-21 07:20:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:20:39.320394604 +0000 UTC m=+1580.550740516" watchObservedRunningTime="2026-01-21 07:20:39.328488388 +0000 UTC m=+1580.558834290"
Jan 21 07:20:39 crc kubenswrapper[4893]: I0121 07:20:39.332920 4893 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69509f4c-9c48-4b10-8174-52ebccf0c04e-logs\") on node \"crc\" DevicePath \"\""
Jan 21 07:20:39 crc kubenswrapper[4893]: I0121 07:20:39.332956 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69509f4c-9c48-4b10-8174-52ebccf0c04e-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 07:20:39 crc kubenswrapper[4893]: I0121 07:20:39.332972 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69509f4c-9c48-4b10-8174-52ebccf0c04e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 07:20:39 crc kubenswrapper[4893]: I0121 07:20:39.332985 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cv9v\" (UniqueName: \"kubernetes.io/projected/69509f4c-9c48-4b10-8174-52ebccf0c04e-kube-api-access-8cv9v\") on node \"crc\" DevicePath \"\""
Jan 21 07:20:39 crc kubenswrapper[4893]: I0121 07:20:39.433350 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 21 07:20:39 crc kubenswrapper[4893]: I0121 07:20:39.450865 4893 scope.go:117] "RemoveContainer" containerID="3bdb0ac8222cef2fdf88855748ce81727fd56facc717ae90660539747d2fe8e8"
Jan 21 07:20:39 crc kubenswrapper[4893]: I0121 07:20:39.577309 4893 scope.go:117] "RemoveContainer" containerID="fedf9af09fdb94c48a24c201392690d01262f86176d501108677b5609b0c26c9"
Jan 21 07:20:39 crc kubenswrapper[4893]: E0121 07:20:39.578863 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fedf9af09fdb94c48a24c201392690d01262f86176d501108677b5609b0c26c9\": container with ID starting with fedf9af09fdb94c48a24c201392690d01262f86176d501108677b5609b0c26c9 not found: ID does not exist" containerID="fedf9af09fdb94c48a24c201392690d01262f86176d501108677b5609b0c26c9"
Jan 21 07:20:39 crc kubenswrapper[4893]: I0121 07:20:39.578898 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fedf9af09fdb94c48a24c201392690d01262f86176d501108677b5609b0c26c9"} err="failed to get container status \"fedf9af09fdb94c48a24c201392690d01262f86176d501108677b5609b0c26c9\": rpc error: code = NotFound desc = could not find container \"fedf9af09fdb94c48a24c201392690d01262f86176d501108677b5609b0c26c9\": container with ID starting with fedf9af09fdb94c48a24c201392690d01262f86176d501108677b5609b0c26c9 not found: ID does not exist"
Jan 21 07:20:39 crc kubenswrapper[4893]: I0121 07:20:39.578927 4893 scope.go:117] "RemoveContainer" containerID="3bdb0ac8222cef2fdf88855748ce81727fd56facc717ae90660539747d2fe8e8"
Jan 21 07:20:39 crc kubenswrapper[4893]: E0121 07:20:39.579166 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bdb0ac8222cef2fdf88855748ce81727fd56facc717ae90660539747d2fe8e8\": container with ID starting with 3bdb0ac8222cef2fdf88855748ce81727fd56facc717ae90660539747d2fe8e8 not found: ID does not exist" containerID="3bdb0ac8222cef2fdf88855748ce81727fd56facc717ae90660539747d2fe8e8"
Jan 21 07:20:39 crc kubenswrapper[4893]: I0121 07:20:39.579192 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bdb0ac8222cef2fdf88855748ce81727fd56facc717ae90660539747d2fe8e8"} err="failed to get container status \"3bdb0ac8222cef2fdf88855748ce81727fd56facc717ae90660539747d2fe8e8\": rpc error: code = NotFound desc = could not find container \"3bdb0ac8222cef2fdf88855748ce81727fd56facc717ae90660539747d2fe8e8\": container with ID starting with 3bdb0ac8222cef2fdf88855748ce81727fd56facc717ae90660539747d2fe8e8 not found: ID does not exist"
Jan 21 07:20:39 crc kubenswrapper[4893]: I0121 07:20:39.613597 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e358cba-9320-4b96-ab77-fe45c0e51a35" path="/var/lib/kubelet/pods/7e358cba-9320-4b96-ab77-fe45c0e51a35/volumes"
Jan 21 07:20:39 crc kubenswrapper[4893]: I0121 07:20:39.614380 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 21 07:20:39 crc kubenswrapper[4893]: I0121 07:20:39.627910 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Jan 21 07:20:39 crc kubenswrapper[4893]: I0121 07:20:39.650770 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 21 07:20:39 crc kubenswrapper[4893]: E0121 07:20:39.651446 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69509f4c-9c48-4b10-8174-52ebccf0c04e" containerName="nova-api-log"
Jan 21 07:20:39 crc kubenswrapper[4893]: I0121 07:20:39.651464 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="69509f4c-9c48-4b10-8174-52ebccf0c04e" containerName="nova-api-log"
Jan 21 07:20:39 crc kubenswrapper[4893]: E0121 07:20:39.651475 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69509f4c-9c48-4b10-8174-52ebccf0c04e" containerName="nova-api-api"
Jan 21 07:20:39 crc kubenswrapper[4893]: I0121 07:20:39.651483 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="69509f4c-9c48-4b10-8174-52ebccf0c04e" containerName="nova-api-api"
Jan 21 07:20:39 crc kubenswrapper[4893]: I0121 07:20:39.651758 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="69509f4c-9c48-4b10-8174-52ebccf0c04e" containerName="nova-api-log"
Jan 21 07:20:39 crc kubenswrapper[4893]: I0121 07:20:39.651776 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="69509f4c-9c48-4b10-8174-52ebccf0c04e" containerName="nova-api-api"
Jan 21 07:20:39 crc kubenswrapper[4893]: I0121 07:20:39.653064 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 21 07:20:39 crc kubenswrapper[4893]: I0121 07:20:39.655056 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 21 07:20:39 crc kubenswrapper[4893]: I0121 07:20:39.677851 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 21 07:20:39 crc kubenswrapper[4893]: I0121 07:20:39.739711 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9dae931-130e-4eb4-b1ce-c7018b1ac72c-logs\") pod \"nova-api-0\" (UID: \"b9dae931-130e-4eb4-b1ce-c7018b1ac72c\") " pod="openstack/nova-api-0"
Jan 21 07:20:39 crc kubenswrapper[4893]: I0121 07:20:39.739818 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9dae931-130e-4eb4-b1ce-c7018b1ac72c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b9dae931-130e-4eb4-b1ce-c7018b1ac72c\") " pod="openstack/nova-api-0"
Jan 21 07:20:39 crc kubenswrapper[4893]: I0121 07:20:39.739905 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9j5k\" (UniqueName: \"kubernetes.io/projected/b9dae931-130e-4eb4-b1ce-c7018b1ac72c-kube-api-access-t9j5k\") pod \"nova-api-0\" (UID: \"b9dae931-130e-4eb4-b1ce-c7018b1ac72c\") " pod="openstack/nova-api-0"
Jan 21 07:20:39 crc kubenswrapper[4893]: I0121 07:20:39.740089 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9dae931-130e-4eb4-b1ce-c7018b1ac72c-config-data\") pod \"nova-api-0\" (UID: \"b9dae931-130e-4eb4-b1ce-c7018b1ac72c\") " pod="openstack/nova-api-0"
Jan 21 07:20:39 crc kubenswrapper[4893]: I0121 07:20:39.842720 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9dae931-130e-4eb4-b1ce-c7018b1ac72c-logs\") pod \"nova-api-0\" (UID: \"b9dae931-130e-4eb4-b1ce-c7018b1ac72c\") " pod="openstack/nova-api-0"
Jan 21 07:20:39 crc kubenswrapper[4893]: I0121 07:20:39.843211 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9dae931-130e-4eb4-b1ce-c7018b1ac72c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b9dae931-130e-4eb4-b1ce-c7018b1ac72c\") " pod="openstack/nova-api-0"
Jan 21 07:20:39 crc kubenswrapper[4893]: I0121 07:20:39.843218 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9dae931-130e-4eb4-b1ce-c7018b1ac72c-logs\") pod \"nova-api-0\" (UID: \"b9dae931-130e-4eb4-b1ce-c7018b1ac72c\") " pod="openstack/nova-api-0"
Jan 21 07:20:39 crc kubenswrapper[4893]: I0121 07:20:39.843550 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9j5k\" (UniqueName: \"kubernetes.io/projected/b9dae931-130e-4eb4-b1ce-c7018b1ac72c-kube-api-access-t9j5k\") pod \"nova-api-0\" (UID: \"b9dae931-130e-4eb4-b1ce-c7018b1ac72c\") " pod="openstack/nova-api-0"
Jan 21 07:20:39 crc kubenswrapper[4893]: I0121 07:20:39.844184 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9dae931-130e-4eb4-b1ce-c7018b1ac72c-config-data\") pod \"nova-api-0\" (UID: \"b9dae931-130e-4eb4-b1ce-c7018b1ac72c\") " pod="openstack/nova-api-0"
Jan 21 07:20:39 crc kubenswrapper[4893]: I0121 07:20:39.848230 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9dae931-130e-4eb4-b1ce-c7018b1ac72c-config-data\") pod \"nova-api-0\" (UID: \"b9dae931-130e-4eb4-b1ce-c7018b1ac72c\") " pod="openstack/nova-api-0"
Jan 21 07:20:39 crc kubenswrapper[4893]: I0121 07:20:39.854519 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9dae931-130e-4eb4-b1ce-c7018b1ac72c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b9dae931-130e-4eb4-b1ce-c7018b1ac72c\") " pod="openstack/nova-api-0"
Jan 21 07:20:39 crc kubenswrapper[4893]: I0121 07:20:39.871577 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9j5k\" (UniqueName: \"kubernetes.io/projected/b9dae931-130e-4eb4-b1ce-c7018b1ac72c-kube-api-access-t9j5k\") pod \"nova-api-0\" (UID: \"b9dae931-130e-4eb4-b1ce-c7018b1ac72c\") " pod="openstack/nova-api-0"
Jan 21 07:20:40 crc kubenswrapper[4893]: I0121 07:20:40.036896 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 21 07:20:40 crc kubenswrapper[4893]: I0121 07:20:40.313275 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"134a93ab-9a65-43e9-bcf2-6fa5bd4105aa","Type":"ContainerStarted","Data":"2f8d601c80609af7c2a8d37becbf5eaa0cb544aa464accd51d9fd804c09840f5"}
Jan 21 07:20:40 crc kubenswrapper[4893]: I0121 07:20:40.313610 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"134a93ab-9a65-43e9-bcf2-6fa5bd4105aa","Type":"ContainerStarted","Data":"afd7b48e1c6038de9e7c2876f63360cd1ccd1083341ef322075ea12cb8fbeb2f"}
Jan 21 07:20:40 crc kubenswrapper[4893]: I0121 07:20:40.316258 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Jan 21 07:20:40 crc kubenswrapper[4893]: I0121 07:20:40.358770 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.3587240769999998 podStartE2EDuration="2.358724077s" podCreationTimestamp="2026-01-21 07:20:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:20:40.348990996 +0000 UTC m=+1581.579336908" watchObservedRunningTime="2026-01-21 07:20:40.358724077 +0000 UTC m=+1581.589069969"
Jan 21 07:20:40 crc kubenswrapper[4893]: I0121 07:20:40.470910 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 21 07:20:41 crc kubenswrapper[4893]: I0121 07:20:41.327535 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b9dae931-130e-4eb4-b1ce-c7018b1ac72c","Type":"ContainerStarted","Data":"866c6d142d2d0df1f3f61a93b1c0e1783f4cdee8ddda60f01a7afadd052e3e39"}
Jan 21 07:20:41 crc kubenswrapper[4893]: I0121 07:20:41.327854 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b9dae931-130e-4eb4-b1ce-c7018b1ac72c","Type":"ContainerStarted","Data":"377eeb9e7569803a8935b2bc8b2562e7e0a1fc482a4170cd249b18b83cf361b7"}
Jan 21 07:20:41 crc kubenswrapper[4893]: I0121 07:20:41.327872 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b9dae931-130e-4eb4-b1ce-c7018b1ac72c","Type":"ContainerStarted","Data":"e4cebc13d3acf53d33d26a2e4ab1f81877c35aefeb6d0104037d3836f40ee9ab"}
Jan 21 07:20:41 crc kubenswrapper[4893]: I0121 07:20:41.353352 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.35333133 podStartE2EDuration="2.35333133s" podCreationTimestamp="2026-01-21 07:20:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:20:41.346443551 +0000 UTC m=+1582.576789463" watchObservedRunningTime="2026-01-21 07:20:41.35333133 +0000 UTC m=+1582.583677232"
Jan 21 07:20:41 crc kubenswrapper[4893]: I0121 07:20:41.593724 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69509f4c-9c48-4b10-8174-52ebccf0c04e" path="/var/lib/kubelet/pods/69509f4c-9c48-4b10-8174-52ebccf0c04e/volumes"
Jan 21 07:20:41 crc kubenswrapper[4893]: I0121 07:20:41.668864 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 21 07:20:41 crc kubenswrapper[4893]: I0121 07:20:41.668971 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 21 07:20:43 crc kubenswrapper[4893]: I0121 07:20:43.745090 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 21 07:20:46 crc kubenswrapper[4893]: I0121 07:20:46.668396 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 21 07:20:46 crc kubenswrapper[4893]: I0121 07:20:46.668699 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 21 07:20:47 crc kubenswrapper[4893]: I0121 07:20:47.686101 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="389da817-faf8-4ae5-87c7-baa855b6dbfd" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 21 07:20:47 crc kubenswrapper[4893]: I0121 07:20:47.686091 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="389da817-faf8-4ae5-87c7-baa855b6dbfd" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 21 07:20:47 crc kubenswrapper[4893]: I0121 07:20:47.722046 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Jan 21 07:20:48 crc kubenswrapper[4893]: I0121 07:20:48.745121 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 21 07:20:48 crc kubenswrapper[4893]: I0121 07:20:48.798833 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 21 07:20:49 crc kubenswrapper[4893]: I0121 07:20:49.374694 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Jan 21 07:20:49 crc kubenswrapper[4893]: I0121 07:20:49.461953 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 21 07:20:50 crc kubenswrapper[4893]: I0121 07:20:50.038258 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 21 07:20:50 crc kubenswrapper[4893]: I0121 07:20:50.038662 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 21 07:20:51 crc kubenswrapper[4893]: I0121 07:20:51.121885 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b9dae931-130e-4eb4-b1ce-c7018b1ac72c" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.196:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 21 07:20:51 crc kubenswrapper[4893]: I0121 07:20:51.122560 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b9dae931-130e-4eb4-b1ce-c7018b1ac72c" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.196:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 21 07:20:53 crc kubenswrapper[4893]: I0121 07:20:53.713379 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 21 07:20:53 crc kubenswrapper[4893]: I0121 07:20:53.714122 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="299c3f15-e0c0-4017-ac39-e3a2f0764928" containerName="kube-state-metrics" containerID="cri-o://6579c89c3316b965374b9cae89561485e46eec1ac63f6fbfc11457548e85a927" gracePeriod=30
Jan 21 07:20:54 crc kubenswrapper[4893]: I0121 07:20:54.192516 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 21 07:20:54 crc kubenswrapper[4893]: I0121 07:20:54.282851 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwhz4\" (UniqueName: \"kubernetes.io/projected/299c3f15-e0c0-4017-ac39-e3a2f0764928-kube-api-access-zwhz4\") pod \"299c3f15-e0c0-4017-ac39-e3a2f0764928\" (UID: \"299c3f15-e0c0-4017-ac39-e3a2f0764928\") "
Jan 21 07:20:54 crc kubenswrapper[4893]: I0121 07:20:54.334969 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/299c3f15-e0c0-4017-ac39-e3a2f0764928-kube-api-access-zwhz4" (OuterVolumeSpecName: "kube-api-access-zwhz4") pod "299c3f15-e0c0-4017-ac39-e3a2f0764928" (UID: "299c3f15-e0c0-4017-ac39-e3a2f0764928"). InnerVolumeSpecName "kube-api-access-zwhz4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 07:20:54 crc kubenswrapper[4893]: I0121 07:20:54.387031 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwhz4\" (UniqueName: \"kubernetes.io/projected/299c3f15-e0c0-4017-ac39-e3a2f0764928-kube-api-access-zwhz4\") on node \"crc\" DevicePath \"\""
Jan 21 07:20:54 crc kubenswrapper[4893]: I0121 07:20:54.477858 4893 generic.go:334] "Generic (PLEG): container finished" podID="299c3f15-e0c0-4017-ac39-e3a2f0764928" containerID="6579c89c3316b965374b9cae89561485e46eec1ac63f6fbfc11457548e85a927" exitCode=2
Jan 21 07:20:54 crc kubenswrapper[4893]: I0121 07:20:54.478025 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"299c3f15-e0c0-4017-ac39-e3a2f0764928","Type":"ContainerDied","Data":"6579c89c3316b965374b9cae89561485e46eec1ac63f6fbfc11457548e85a927"}
Jan 21 07:20:54 crc kubenswrapper[4893]: I0121 07:20:54.478052 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"299c3f15-e0c0-4017-ac39-e3a2f0764928","Type":"ContainerDied","Data":"4ed8cc223d3c20222fee173a5f5162052c554b055b1d4993dc6ae93021ae96bf"}
Jan 21 07:20:54 crc kubenswrapper[4893]: I0121 07:20:54.478070 4893 scope.go:117] "RemoveContainer" containerID="6579c89c3316b965374b9cae89561485e46eec1ac63f6fbfc11457548e85a927"
Jan 21 07:20:54 crc kubenswrapper[4893]: I0121 07:20:54.478212 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 21 07:20:54 crc kubenswrapper[4893]: I0121 07:20:54.514882 4893 scope.go:117] "RemoveContainer" containerID="6579c89c3316b965374b9cae89561485e46eec1ac63f6fbfc11457548e85a927"
Jan 21 07:20:54 crc kubenswrapper[4893]: E0121 07:20:54.515395 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6579c89c3316b965374b9cae89561485e46eec1ac63f6fbfc11457548e85a927\": container with ID starting with 6579c89c3316b965374b9cae89561485e46eec1ac63f6fbfc11457548e85a927 not found: ID does not exist" containerID="6579c89c3316b965374b9cae89561485e46eec1ac63f6fbfc11457548e85a927"
Jan 21 07:20:54 crc kubenswrapper[4893]: I0121 07:20:54.515444 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6579c89c3316b965374b9cae89561485e46eec1ac63f6fbfc11457548e85a927"} err="failed to get container status \"6579c89c3316b965374b9cae89561485e46eec1ac63f6fbfc11457548e85a927\": rpc error: code = NotFound desc = could not find container \"6579c89c3316b965374b9cae89561485e46eec1ac63f6fbfc11457548e85a927\": container with ID starting with 6579c89c3316b965374b9cae89561485e46eec1ac63f6fbfc11457548e85a927 not found: ID does not exist"
Jan 21 07:20:54 crc kubenswrapper[4893]: I0121 07:20:54.541764 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 21 07:20:54 crc kubenswrapper[4893]: I0121 07:20:54.547659 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 21 07:20:54 crc kubenswrapper[4893]: I0121 07:20:54.559701 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 21 07:20:54 crc kubenswrapper[4893]: E0121 07:20:54.560448 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="299c3f15-e0c0-4017-ac39-e3a2f0764928" containerName="kube-state-metrics"
Jan 21 07:20:54 crc kubenswrapper[4893]: I0121 07:20:54.560468 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="299c3f15-e0c0-4017-ac39-e3a2f0764928" containerName="kube-state-metrics"
Jan 21 07:20:54 crc kubenswrapper[4893]: I0121 07:20:54.560793 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="299c3f15-e0c0-4017-ac39-e3a2f0764928" containerName="kube-state-metrics"
Jan 21 07:20:54 crc kubenswrapper[4893]: I0121 07:20:54.561637 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 21 07:20:54 crc kubenswrapper[4893]: I0121 07:20:54.563659 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Jan 21 07:20:54 crc kubenswrapper[4893]: I0121 07:20:54.563980 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Jan 21 07:20:54 crc kubenswrapper[4893]: I0121 07:20:54.584420 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 21 07:20:54 crc kubenswrapper[4893]: I0121 07:20:54.590389 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/1dd69159-4b4b-4b13-aaa2-7b9edf7c468a-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"1dd69159-4b4b-4b13-aaa2-7b9edf7c468a\") " pod="openstack/kube-state-metrics-0"
Jan 21 07:20:54 crc kubenswrapper[4893]: I0121 07:20:54.590441 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnxrt\" (UniqueName: \"kubernetes.io/projected/1dd69159-4b4b-4b13-aaa2-7b9edf7c468a-kube-api-access-rnxrt\") pod \"kube-state-metrics-0\" (UID: \"1dd69159-4b4b-4b13-aaa2-7b9edf7c468a\") " pod="openstack/kube-state-metrics-0"
Jan 21 07:20:54 crc kubenswrapper[4893]: I0121 07:20:54.590468 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dd69159-4b4b-4b13-aaa2-7b9edf7c468a-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"1dd69159-4b4b-4b13-aaa2-7b9edf7c468a\") " pod="openstack/kube-state-metrics-0"
Jan 21 07:20:54 crc kubenswrapper[4893]: I0121 07:20:54.590825 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/1dd69159-4b4b-4b13-aaa2-7b9edf7c468a-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"1dd69159-4b4b-4b13-aaa2-7b9edf7c468a\") " pod="openstack/kube-state-metrics-0"
Jan 21 07:20:54 crc kubenswrapper[4893]: I0121 07:20:54.692433 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnxrt\" (UniqueName: \"kubernetes.io/projected/1dd69159-4b4b-4b13-aaa2-7b9edf7c468a-kube-api-access-rnxrt\") pod \"kube-state-metrics-0\" (UID: \"1dd69159-4b4b-4b13-aaa2-7b9edf7c468a\") " pod="openstack/kube-state-metrics-0"
Jan 21 07:20:54 crc kubenswrapper[4893]: I0121 07:20:54.692505 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dd69159-4b4b-4b13-aaa2-7b9edf7c468a-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"1dd69159-4b4b-4b13-aaa2-7b9edf7c468a\") " pod="openstack/kube-state-metrics-0"
Jan 21 07:20:54 crc kubenswrapper[4893]: I0121 07:20:54.692721 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/1dd69159-4b4b-4b13-aaa2-7b9edf7c468a-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"1dd69159-4b4b-4b13-aaa2-7b9edf7c468a\") " pod="openstack/kube-state-metrics-0"
Jan 21 07:20:54 crc kubenswrapper[4893]: I0121 07:20:54.692921 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\"
(UniqueName: \"kubernetes.io/secret/1dd69159-4b4b-4b13-aaa2-7b9edf7c468a-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"1dd69159-4b4b-4b13-aaa2-7b9edf7c468a\") " pod="openstack/kube-state-metrics-0" Jan 21 07:20:54 crc kubenswrapper[4893]: I0121 07:20:54.697701 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/1dd69159-4b4b-4b13-aaa2-7b9edf7c468a-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"1dd69159-4b4b-4b13-aaa2-7b9edf7c468a\") " pod="openstack/kube-state-metrics-0" Jan 21 07:20:54 crc kubenswrapper[4893]: I0121 07:20:54.697779 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dd69159-4b4b-4b13-aaa2-7b9edf7c468a-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"1dd69159-4b4b-4b13-aaa2-7b9edf7c468a\") " pod="openstack/kube-state-metrics-0" Jan 21 07:20:54 crc kubenswrapper[4893]: I0121 07:20:54.710605 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/1dd69159-4b4b-4b13-aaa2-7b9edf7c468a-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"1dd69159-4b4b-4b13-aaa2-7b9edf7c468a\") " pod="openstack/kube-state-metrics-0" Jan 21 07:20:54 crc kubenswrapper[4893]: I0121 07:20:54.713868 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnxrt\" (UniqueName: \"kubernetes.io/projected/1dd69159-4b4b-4b13-aaa2-7b9edf7c468a-kube-api-access-rnxrt\") pod \"kube-state-metrics-0\" (UID: \"1dd69159-4b4b-4b13-aaa2-7b9edf7c468a\") " pod="openstack/kube-state-metrics-0" Jan 21 07:20:54 crc kubenswrapper[4893]: I0121 07:20:54.883425 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 07:20:55 crc kubenswrapper[4893]: I0121 07:20:55.404290 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 07:20:55 crc kubenswrapper[4893]: I0121 07:20:55.487824 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1dd69159-4b4b-4b13-aaa2-7b9edf7c468a","Type":"ContainerStarted","Data":"463251b62b3d7b213feb1ef0dcb9d0aa66b72528b70d39aac0b56564c010df8f"} Jan 21 07:20:55 crc kubenswrapper[4893]: I0121 07:20:55.594933 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="299c3f15-e0c0-4017-ac39-e3a2f0764928" path="/var/lib/kubelet/pods/299c3f15-e0c0-4017-ac39-e3a2f0764928/volumes" Jan 21 07:20:56 crc kubenswrapper[4893]: I0121 07:20:56.093958 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 07:20:56 crc kubenswrapper[4893]: I0121 07:20:56.095237 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="02b7554d-6f6f-4e6b-92be-d8f58bb89bf5" containerName="ceilometer-notification-agent" containerID="cri-o://c80ceecd5be292c993bfb07ebd8a8048bf4f56ebcc53ec79b738928f846456bd" gracePeriod=30 Jan 21 07:20:56 crc kubenswrapper[4893]: I0121 07:20:56.095216 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="02b7554d-6f6f-4e6b-92be-d8f58bb89bf5" containerName="sg-core" containerID="cri-o://5348ec6f0c6049206f5c6cb9e887be519d0441c83930b968f565f4749c21830e" gracePeriod=30 Jan 21 07:20:56 crc kubenswrapper[4893]: I0121 07:20:56.095312 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="02b7554d-6f6f-4e6b-92be-d8f58bb89bf5" containerName="proxy-httpd" containerID="cri-o://6cbd3d5a6b7547b68b24de293cc26fae93e7e6c7710f6aaa94d09d6a1781f4cb" gracePeriod=30 Jan 21 07:20:56 crc kubenswrapper[4893]: I0121 07:20:56.095213 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="02b7554d-6f6f-4e6b-92be-d8f58bb89bf5" containerName="ceilometer-central-agent" containerID="cri-o://7ca76cd9dcf5b52102bdd9bcce2f12eb180b64fc1aa4e74f8334df4abba8fac0" gracePeriod=30 Jan 21 07:20:56 crc kubenswrapper[4893]: I0121 07:20:56.576165 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1dd69159-4b4b-4b13-aaa2-7b9edf7c468a","Type":"ContainerStarted","Data":"ae932dd21754883ff82584e62cef856bfa6cbc6aee915c47053feb942b516a54"} Jan 21 07:20:56 crc kubenswrapper[4893]: I0121 07:20:56.576314 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 21 07:20:56 crc kubenswrapper[4893]: I0121 07:20:56.578868 4893 generic.go:334] "Generic (PLEG): container finished" podID="02b7554d-6f6f-4e6b-92be-d8f58bb89bf5" containerID="6cbd3d5a6b7547b68b24de293cc26fae93e7e6c7710f6aaa94d09d6a1781f4cb" exitCode=0 Jan 21 07:20:56 crc kubenswrapper[4893]: I0121 07:20:56.578919 4893 generic.go:334] "Generic (PLEG): container finished" podID="02b7554d-6f6f-4e6b-92be-d8f58bb89bf5" containerID="5348ec6f0c6049206f5c6cb9e887be519d0441c83930b968f565f4749c21830e" exitCode=2 Jan 21 07:20:56 crc kubenswrapper[4893]: I0121 07:20:56.578938 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"02b7554d-6f6f-4e6b-92be-d8f58bb89bf5","Type":"ContainerDied","Data":"6cbd3d5a6b7547b68b24de293cc26fae93e7e6c7710f6aaa94d09d6a1781f4cb"} Jan 21 07:20:56 crc kubenswrapper[4893]: I0121 07:20:56.578960 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"02b7554d-6f6f-4e6b-92be-d8f58bb89bf5","Type":"ContainerDied","Data":"5348ec6f0c6049206f5c6cb9e887be519d0441c83930b968f565f4749c21830e"} Jan 21 07:20:56 crc kubenswrapper[4893]: I0121 07:20:56.605669 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.228080649 podStartE2EDuration="2.605648637s" podCreationTimestamp="2026-01-21 07:20:54 +0000 UTC" firstStartedPulling="2026-01-21 07:20:55.444521922 +0000 UTC m=+1596.674867824" lastFinishedPulling="2026-01-21 07:20:55.82208989 +0000 UTC m=+1597.052435812" observedRunningTime="2026-01-21 07:20:56.604490463 +0000 UTC m=+1597.834836355" watchObservedRunningTime="2026-01-21 07:20:56.605648637 +0000 UTC m=+1597.835994539" Jan 21 07:20:56 crc kubenswrapper[4893]: I0121 07:20:56.678518 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 07:20:56 crc kubenswrapper[4893]: I0121 07:20:56.716838 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 07:20:56 crc kubenswrapper[4893]: I0121 07:20:56.719361 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 07:20:57 crc kubenswrapper[4893]: I0121 07:20:57.603504 4893 generic.go:334] "Generic (PLEG): container finished" podID="02b7554d-6f6f-4e6b-92be-d8f58bb89bf5" containerID="7ca76cd9dcf5b52102bdd9bcce2f12eb180b64fc1aa4e74f8334df4abba8fac0" exitCode=0 Jan 21 07:20:57 crc kubenswrapper[4893]: I0121 07:20:57.603584 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"02b7554d-6f6f-4e6b-92be-d8f58bb89bf5","Type":"ContainerDied","Data":"7ca76cd9dcf5b52102bdd9bcce2f12eb180b64fc1aa4e74f8334df4abba8fac0"} Jan 21 07:20:57 crc kubenswrapper[4893]: I0121 07:20:57.616338 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 07:20:58 crc kubenswrapper[4893]: I0121 07:20:58.656987 4893 patch_prober.go:28] interesting pod/machine-config-daemon-hg78p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 07:20:58 crc kubenswrapper[4893]: I0121 07:20:58.657304 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 07:21:00 crc kubenswrapper[4893]: I0121 07:21:00.041636 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 07:21:00 crc kubenswrapper[4893]: I0121 07:21:00.042988 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 07:21:00 crc kubenswrapper[4893]: I0121 07:21:00.045893 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 07:21:00 crc 
kubenswrapper[4893]: I0121 07:21:00.045961 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 07:21:00 crc kubenswrapper[4893]: I0121 07:21:00.640257 4893 generic.go:334] "Generic (PLEG): container finished" podID="c3e5429e-adae-4013-ad4e-a6d64b6fb32a" containerID="2cc9725ab12cd661bd2f547af67d3fedddf8c443202b9a6f9a62d9b0fcde6149" exitCode=137 Jan 21 07:21:00 crc kubenswrapper[4893]: I0121 07:21:00.640364 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c3e5429e-adae-4013-ad4e-a6d64b6fb32a","Type":"ContainerDied","Data":"2cc9725ab12cd661bd2f547af67d3fedddf8c443202b9a6f9a62d9b0fcde6149"} Jan 21 07:21:00 crc kubenswrapper[4893]: I0121 07:21:00.640636 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c3e5429e-adae-4013-ad4e-a6d64b6fb32a","Type":"ContainerDied","Data":"669a9261833b3ebc5ea1d8e2b878c707a022af68641049a731290b060cff231e"} Jan 21 07:21:00 crc kubenswrapper[4893]: I0121 07:21:00.640654 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="669a9261833b3ebc5ea1d8e2b878c707a022af68641049a731290b060cff231e" Jan 21 07:21:00 crc kubenswrapper[4893]: I0121 07:21:00.640862 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 07:21:00 crc kubenswrapper[4893]: I0121 07:21:00.644323 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 07:21:00 crc kubenswrapper[4893]: I0121 07:21:00.651765 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 07:21:00 crc kubenswrapper[4893]: I0121 07:21:00.768093 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mk96h\" (UniqueName: \"kubernetes.io/projected/c3e5429e-adae-4013-ad4e-a6d64b6fb32a-kube-api-access-mk96h\") pod \"c3e5429e-adae-4013-ad4e-a6d64b6fb32a\" (UID: \"c3e5429e-adae-4013-ad4e-a6d64b6fb32a\") " Jan 21 07:21:00 crc kubenswrapper[4893]: I0121 07:21:00.768635 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3e5429e-adae-4013-ad4e-a6d64b6fb32a-config-data\") pod \"c3e5429e-adae-4013-ad4e-a6d64b6fb32a\" (UID: \"c3e5429e-adae-4013-ad4e-a6d64b6fb32a\") " Jan 21 07:21:00 crc kubenswrapper[4893]: I0121 07:21:00.768662 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3e5429e-adae-4013-ad4e-a6d64b6fb32a-combined-ca-bundle\") pod \"c3e5429e-adae-4013-ad4e-a6d64b6fb32a\" (UID: \"c3e5429e-adae-4013-ad4e-a6d64b6fb32a\") " Jan 21 07:21:00 crc kubenswrapper[4893]: I0121 07:21:00.793134 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3e5429e-adae-4013-ad4e-a6d64b6fb32a-kube-api-access-mk96h" (OuterVolumeSpecName: "kube-api-access-mk96h") pod "c3e5429e-adae-4013-ad4e-a6d64b6fb32a" (UID: "c3e5429e-adae-4013-ad4e-a6d64b6fb32a"). InnerVolumeSpecName "kube-api-access-mk96h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:21:00 crc kubenswrapper[4893]: I0121 07:21:00.812880 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3e5429e-adae-4013-ad4e-a6d64b6fb32a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c3e5429e-adae-4013-ad4e-a6d64b6fb32a" (UID: "c3e5429e-adae-4013-ad4e-a6d64b6fb32a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:21:00 crc kubenswrapper[4893]: I0121 07:21:00.832167 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3e5429e-adae-4013-ad4e-a6d64b6fb32a-config-data" (OuterVolumeSpecName: "config-data") pod "c3e5429e-adae-4013-ad4e-a6d64b6fb32a" (UID: "c3e5429e-adae-4013-ad4e-a6d64b6fb32a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:21:00 crc kubenswrapper[4893]: I0121 07:21:00.872156 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3e5429e-adae-4013-ad4e-a6d64b6fb32a-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 07:21:00 crc kubenswrapper[4893]: I0121 07:21:00.872211 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3e5429e-adae-4013-ad4e-a6d64b6fb32a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:21:00 crc kubenswrapper[4893]: I0121 07:21:00.872227 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mk96h\" (UniqueName: \"kubernetes.io/projected/c3e5429e-adae-4013-ad4e-a6d64b6fb32a-kube-api-access-mk96h\") on node \"crc\" DevicePath \"\"" Jan 21 07:21:00 crc kubenswrapper[4893]: I0121 07:21:00.887876 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-ghmq8"] Jan 21 07:21:00 crc kubenswrapper[4893]: E0121 07:21:00.888555 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3e5429e-adae-4013-ad4e-a6d64b6fb32a" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 07:21:00 crc kubenswrapper[4893]: I0121 07:21:00.888579 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3e5429e-adae-4013-ad4e-a6d64b6fb32a" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 07:21:00 crc kubenswrapper[4893]: I0121 07:21:00.888832 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3e5429e-adae-4013-ad4e-a6d64b6fb32a" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 07:21:00 crc kubenswrapper[4893]: I0121 07:21:00.890020 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fcd6f8f8f-ghmq8" Jan 21 07:21:00 crc kubenswrapper[4893]: I0121 07:21:00.909469 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-ghmq8"] Jan 21 07:21:01 crc kubenswrapper[4893]: I0121 07:21:01.075516 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/482b048f-92a3-485c-be9b-cc4d4bea116f-dns-svc\") pod \"dnsmasq-dns-fcd6f8f8f-ghmq8\" (UID: \"482b048f-92a3-485c-be9b-cc4d4bea116f\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-ghmq8" Jan 21 07:21:01 crc kubenswrapper[4893]: I0121 07:21:01.076078 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9pxs\" (UniqueName: \"kubernetes.io/projected/482b048f-92a3-485c-be9b-cc4d4bea116f-kube-api-access-d9pxs\") pod \"dnsmasq-dns-fcd6f8f8f-ghmq8\" (UID: \"482b048f-92a3-485c-be9b-cc4d4bea116f\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-ghmq8" Jan 21 07:21:01 crc kubenswrapper[4893]: I0121 07:21:01.076116 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/482b048f-92a3-485c-be9b-cc4d4bea116f-dns-swift-storage-0\") pod \"dnsmasq-dns-fcd6f8f8f-ghmq8\" (UID: \"482b048f-92a3-485c-be9b-cc4d4bea116f\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-ghmq8" Jan 21 07:21:01 crc kubenswrapper[4893]: I0121 07:21:01.076212 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/482b048f-92a3-485c-be9b-cc4d4bea116f-ovsdbserver-sb\") pod \"dnsmasq-dns-fcd6f8f8f-ghmq8\" (UID: \"482b048f-92a3-485c-be9b-cc4d4bea116f\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-ghmq8" Jan 21 07:21:01 crc kubenswrapper[4893]: I0121 07:21:01.076262 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/482b048f-92a3-485c-be9b-cc4d4bea116f-ovsdbserver-nb\") pod \"dnsmasq-dns-fcd6f8f8f-ghmq8\" (UID: \"482b048f-92a3-485c-be9b-cc4d4bea116f\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-ghmq8" Jan 21 07:21:01 crc kubenswrapper[4893]: I0121 07:21:01.076316 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/482b048f-92a3-485c-be9b-cc4d4bea116f-config\") pod \"dnsmasq-dns-fcd6f8f8f-ghmq8\" (UID: \"482b048f-92a3-485c-be9b-cc4d4bea116f\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-ghmq8" Jan 21 07:21:01 crc kubenswrapper[4893]: I0121 07:21:01.177819 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/482b048f-92a3-485c-be9b-cc4d4bea116f-ovsdbserver-sb\") pod \"dnsmasq-dns-fcd6f8f8f-ghmq8\" (UID: \"482b048f-92a3-485c-be9b-cc4d4bea116f\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-ghmq8" Jan 21 07:21:01 crc kubenswrapper[4893]: I0121 07:21:01.178974 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/482b048f-92a3-485c-be9b-cc4d4bea116f-ovsdbserver-sb\") pod \"dnsmasq-dns-fcd6f8f8f-ghmq8\" (UID: \"482b048f-92a3-485c-be9b-cc4d4bea116f\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-ghmq8" Jan 21 07:21:01 crc kubenswrapper[4893]: I0121 07:21:01.179140 4893 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/482b048f-92a3-485c-be9b-cc4d4bea116f-ovsdbserver-nb\") pod \"dnsmasq-dns-fcd6f8f8f-ghmq8\" (UID: \"482b048f-92a3-485c-be9b-cc4d4bea116f\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-ghmq8" Jan 21 07:21:01 crc kubenswrapper[4893]: I0121 07:21:01.179883 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/482b048f-92a3-485c-be9b-cc4d4bea116f-ovsdbserver-nb\") pod \"dnsmasq-dns-fcd6f8f8f-ghmq8\" (UID: \"482b048f-92a3-485c-be9b-cc4d4bea116f\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-ghmq8" Jan 21 07:21:01 crc kubenswrapper[4893]: I0121 07:21:01.179980 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/482b048f-92a3-485c-be9b-cc4d4bea116f-config\") pod \"dnsmasq-dns-fcd6f8f8f-ghmq8\" (UID: \"482b048f-92a3-485c-be9b-cc4d4bea116f\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-ghmq8" Jan 21 07:21:01 crc kubenswrapper[4893]: I0121 07:21:01.180084 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/482b048f-92a3-485c-be9b-cc4d4bea116f-dns-svc\") pod \"dnsmasq-dns-fcd6f8f8f-ghmq8\" (UID: \"482b048f-92a3-485c-be9b-cc4d4bea116f\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-ghmq8" Jan 21 07:21:01 crc kubenswrapper[4893]: I0121 07:21:01.180293 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9pxs\" (UniqueName: \"kubernetes.io/projected/482b048f-92a3-485c-be9b-cc4d4bea116f-kube-api-access-d9pxs\") pod \"dnsmasq-dns-fcd6f8f8f-ghmq8\" (UID: \"482b048f-92a3-485c-be9b-cc4d4bea116f\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-ghmq8" Jan 21 07:21:01 crc kubenswrapper[4893]: I0121 07:21:01.180338 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/482b048f-92a3-485c-be9b-cc4d4bea116f-dns-swift-storage-0\") pod \"dnsmasq-dns-fcd6f8f8f-ghmq8\" (UID: \"482b048f-92a3-485c-be9b-cc4d4bea116f\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-ghmq8" Jan 21 07:21:01 crc kubenswrapper[4893]: I0121 07:21:01.181094 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/482b048f-92a3-485c-be9b-cc4d4bea116f-dns-swift-storage-0\") pod \"dnsmasq-dns-fcd6f8f8f-ghmq8\" (UID: \"482b048f-92a3-485c-be9b-cc4d4bea116f\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-ghmq8" Jan 21 07:21:01 crc kubenswrapper[4893]: I0121 07:21:01.182284 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/482b048f-92a3-485c-be9b-cc4d4bea116f-dns-svc\") pod \"dnsmasq-dns-fcd6f8f8f-ghmq8\" (UID: \"482b048f-92a3-485c-be9b-cc4d4bea116f\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-ghmq8" Jan 21 07:21:01 crc kubenswrapper[4893]: I0121 07:21:01.188284 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/482b048f-92a3-485c-be9b-cc4d4bea116f-config\") pod \"dnsmasq-dns-fcd6f8f8f-ghmq8\" (UID: \"482b048f-92a3-485c-be9b-cc4d4bea116f\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-ghmq8" Jan 21 07:21:01 crc kubenswrapper[4893]: I0121 07:21:01.211589 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9pxs\" (UniqueName: 
\"kubernetes.io/projected/482b048f-92a3-485c-be9b-cc4d4bea116f-kube-api-access-d9pxs\") pod \"dnsmasq-dns-fcd6f8f8f-ghmq8\" (UID: \"482b048f-92a3-485c-be9b-cc4d4bea116f\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-ghmq8" Jan 21 07:21:01 crc kubenswrapper[4893]: I0121 07:21:01.508711 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcd6f8f8f-ghmq8" Jan 21 07:21:01 crc kubenswrapper[4893]: I0121 07:21:01.748045 4893 generic.go:334] "Generic (PLEG): container finished" podID="02b7554d-6f6f-4e6b-92be-d8f58bb89bf5" containerID="c80ceecd5be292c993bfb07ebd8a8048bf4f56ebcc53ec79b738928f846456bd" exitCode=0 Jan 21 07:21:01 crc kubenswrapper[4893]: I0121 07:21:01.749780 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"02b7554d-6f6f-4e6b-92be-d8f58bb89bf5","Type":"ContainerDied","Data":"c80ceecd5be292c993bfb07ebd8a8048bf4f56ebcc53ec79b738928f846456bd"} Jan 21 07:21:01 crc kubenswrapper[4893]: I0121 07:21:01.749838 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 07:21:01 crc kubenswrapper[4893]: I0121 07:21:01.901256 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 07:21:01 crc kubenswrapper[4893]: I0121 07:21:01.911878 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 07:21:01 crc kubenswrapper[4893]: I0121 07:21:01.930771 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 07:21:01 crc kubenswrapper[4893]: I0121 07:21:01.932578 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 07:21:01 crc kubenswrapper[4893]: I0121 07:21:01.937934 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 21 07:21:01 crc kubenswrapper[4893]: I0121 07:21:01.943357 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 07:21:01 crc kubenswrapper[4893]: I0121 07:21:01.948647 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 21 07:21:01 crc kubenswrapper[4893]: I0121 07:21:01.948839 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.042645 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.043069 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.043299 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.043341 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.043386 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwc8v\" (UniqueName: \"kubernetes.io/projected/f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6-kube-api-access-kwc8v\") pod \"nova-cell1-novncproxy-0\" (UID: \"f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.091660 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.154489 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/02b7554d-6f6f-4e6b-92be-d8f58bb89bf5-log-httpd\") pod \"02b7554d-6f6f-4e6b-92be-d8f58bb89bf5\" (UID: \"02b7554d-6f6f-4e6b-92be-d8f58bb89bf5\") " Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.154551 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njmkz\" (UniqueName: \"kubernetes.io/projected/02b7554d-6f6f-4e6b-92be-d8f58bb89bf5-kube-api-access-njmkz\") pod \"02b7554d-6f6f-4e6b-92be-d8f58bb89bf5\" (UID: \"02b7554d-6f6f-4e6b-92be-d8f58bb89bf5\") " Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.154600 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/02b7554d-6f6f-4e6b-92be-d8f58bb89bf5-run-httpd\") pod \"02b7554d-6f6f-4e6b-92be-d8f58bb89bf5\" (UID: \"02b7554d-6f6f-4e6b-92be-d8f58bb89bf5\") " Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.154719 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/02b7554d-6f6f-4e6b-92be-d8f58bb89bf5-sg-core-conf-yaml\") pod \"02b7554d-6f6f-4e6b-92be-d8f58bb89bf5\" (UID: \"02b7554d-6f6f-4e6b-92be-d8f58bb89bf5\") " Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.154772 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02b7554d-6f6f-4e6b-92be-d8f58bb89bf5-config-data\") pod \"02b7554d-6f6f-4e6b-92be-d8f58bb89bf5\" (UID: \"02b7554d-6f6f-4e6b-92be-d8f58bb89bf5\") " Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.154826 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02b7554d-6f6f-4e6b-92be-d8f58bb89bf5-combined-ca-bundle\") pod \"02b7554d-6f6f-4e6b-92be-d8f58bb89bf5\" (UID: \"02b7554d-6f6f-4e6b-92be-d8f58bb89bf5\") " Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.155347 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02b7554d-6f6f-4e6b-92be-d8f58bb89bf5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod 
"02b7554d-6f6f-4e6b-92be-d8f58bb89bf5" (UID: "02b7554d-6f6f-4e6b-92be-d8f58bb89bf5"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.155430 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02b7554d-6f6f-4e6b-92be-d8f58bb89bf5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "02b7554d-6f6f-4e6b-92be-d8f58bb89bf5" (UID: "02b7554d-6f6f-4e6b-92be-d8f58bb89bf5"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.156462 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02b7554d-6f6f-4e6b-92be-d8f58bb89bf5-scripts\") pod \"02b7554d-6f6f-4e6b-92be-d8f58bb89bf5\" (UID: \"02b7554d-6f6f-4e6b-92be-d8f58bb89bf5\") " Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.157186 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.157228 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.157272 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwc8v\" (UniqueName: \"kubernetes.io/projected/f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6-kube-api-access-kwc8v\") pod \"nova-cell1-novncproxy-0\" (UID: \"f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.157343 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.157366 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.157499 4893 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/02b7554d-6f6f-4e6b-92be-d8f58bb89bf5-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.157511 4893 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/02b7554d-6f6f-4e6b-92be-d8f58bb89bf5-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.173437 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/02b7554d-6f6f-4e6b-92be-d8f58bb89bf5-scripts" (OuterVolumeSpecName: "scripts") pod "02b7554d-6f6f-4e6b-92be-d8f58bb89bf5" (UID: "02b7554d-6f6f-4e6b-92be-d8f58bb89bf5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.173529 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02b7554d-6f6f-4e6b-92be-d8f58bb89bf5-kube-api-access-njmkz" (OuterVolumeSpecName: "kube-api-access-njmkz") pod "02b7554d-6f6f-4e6b-92be-d8f58bb89bf5" (UID: "02b7554d-6f6f-4e6b-92be-d8f58bb89bf5"). InnerVolumeSpecName "kube-api-access-njmkz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.173694 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.174594 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.181212 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.186863 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwc8v\" (UniqueName: \"kubernetes.io/projected/f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6-kube-api-access-kwc8v\") pod \"nova-cell1-novncproxy-0\" (UID: \"f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.188411 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.216124 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02b7554d-6f6f-4e6b-92be-d8f58bb89bf5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "02b7554d-6f6f-4e6b-92be-d8f58bb89bf5" (UID: "02b7554d-6f6f-4e6b-92be-d8f58bb89bf5"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.267338 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njmkz\" (UniqueName: \"kubernetes.io/projected/02b7554d-6f6f-4e6b-92be-d8f58bb89bf5-kube-api-access-njmkz\") on node \"crc\" DevicePath \"\"" Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.271050 4893 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/02b7554d-6f6f-4e6b-92be-d8f58bb89bf5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.271090 4893 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02b7554d-6f6f-4e6b-92be-d8f58bb89bf5-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.282822 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.634822 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02b7554d-6f6f-4e6b-92be-d8f58bb89bf5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "02b7554d-6f6f-4e6b-92be-d8f58bb89bf5" (UID: "02b7554d-6f6f-4e6b-92be-d8f58bb89bf5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.638831 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02b7554d-6f6f-4e6b-92be-d8f58bb89bf5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.672405 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-ghmq8"] Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.765858 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02b7554d-6f6f-4e6b-92be-d8f58bb89bf5-config-data" (OuterVolumeSpecName: "config-data") pod "02b7554d-6f6f-4e6b-92be-d8f58bb89bf5" (UID: "02b7554d-6f6f-4e6b-92be-d8f58bb89bf5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.807193 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"02b7554d-6f6f-4e6b-92be-d8f58bb89bf5","Type":"ContainerDied","Data":"8d3380d14220b4bfd434222ee79f0f8aaf29a7772d01dc02a7254c0a88d1f8c9"} Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.807263 4893 scope.go:117] "RemoveContainer" containerID="6cbd3d5a6b7547b68b24de293cc26fae93e7e6c7710f6aaa94d09d6a1781f4cb" Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.807337 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.812295 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-ghmq8" event={"ID":"482b048f-92a3-485c-be9b-cc4d4bea116f","Type":"ContainerStarted","Data":"a6803afe47d994749b2500b06ed246d8a11ed740344d3c1936e7c0837e5f3975"} Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.846628 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02b7554d-6f6f-4e6b-92be-d8f58bb89bf5-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.915148 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.928930 4893 scope.go:117] "RemoveContainer" containerID="5348ec6f0c6049206f5c6cb9e887be519d0441c83930b968f565f4749c21830e" Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.938133 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.967243 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 07:21:02 crc kubenswrapper[4893]: E0121 07:21:02.967960 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02b7554d-6f6f-4e6b-92be-d8f58bb89bf5" containerName="proxy-httpd" Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.967977 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="02b7554d-6f6f-4e6b-92be-d8f58bb89bf5" containerName="proxy-httpd" Jan 21 07:21:02 crc kubenswrapper[4893]: E0121 07:21:02.967995 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02b7554d-6f6f-4e6b-92be-d8f58bb89bf5" containerName="ceilometer-central-agent" Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.968002 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="02b7554d-6f6f-4e6b-92be-d8f58bb89bf5" containerName="ceilometer-central-agent" Jan 21 07:21:02 crc kubenswrapper[4893]: E0121 07:21:02.968014 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02b7554d-6f6f-4e6b-92be-d8f58bb89bf5" containerName="ceilometer-notification-agent" Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.968020 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="02b7554d-6f6f-4e6b-92be-d8f58bb89bf5" containerName="ceilometer-notification-agent" Jan 21 07:21:02 crc kubenswrapper[4893]: E0121 07:21:02.968038 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02b7554d-6f6f-4e6b-92be-d8f58bb89bf5" containerName="sg-core" Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.968044 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="02b7554d-6f6f-4e6b-92be-d8f58bb89bf5" containerName="sg-core" Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.968267 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="02b7554d-6f6f-4e6b-92be-d8f58bb89bf5" containerName="proxy-httpd" Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.968288 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="02b7554d-6f6f-4e6b-92be-d8f58bb89bf5" containerName="ceilometer-notification-agent" Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.968299 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="02b7554d-6f6f-4e6b-92be-d8f58bb89bf5" containerName="sg-core" Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.968315 4893 
memory_manager.go:354] "RemoveStaleState removing state" podUID="02b7554d-6f6f-4e6b-92be-d8f58bb89bf5" containerName="ceilometer-central-agent" Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.973534 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.978848 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.979349 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.979844 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.983489 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 07:21:02 crc kubenswrapper[4893]: I0121 07:21:02.989091 4893 scope.go:117] "RemoveContainer" containerID="c80ceecd5be292c993bfb07ebd8a8048bf4f56ebcc53ec79b738928f846456bd" Jan 21 07:21:03 crc kubenswrapper[4893]: I0121 07:21:03.050254 4893 scope.go:117] "RemoveContainer" containerID="7ca76cd9dcf5b52102bdd9bcce2f12eb180b64fc1aa4e74f8334df4abba8fac0" Jan 21 07:21:03 crc kubenswrapper[4893]: I0121 07:21:03.058098 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d40a1e3b-d787-49c9-b719-7c204a6e5ec8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d40a1e3b-d787-49c9-b719-7c204a6e5ec8\") " pod="openstack/ceilometer-0" Jan 21 07:21:03 crc kubenswrapper[4893]: I0121 07:21:03.058151 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d40a1e3b-d787-49c9-b719-7c204a6e5ec8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d40a1e3b-d787-49c9-b719-7c204a6e5ec8\") " pod="openstack/ceilometer-0" Jan 21 07:21:03 crc kubenswrapper[4893]: I0121 07:21:03.058190 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d40a1e3b-d787-49c9-b719-7c204a6e5ec8-log-httpd\") pod \"ceilometer-0\" (UID: \"d40a1e3b-d787-49c9-b719-7c204a6e5ec8\") " pod="openstack/ceilometer-0" Jan 21 07:21:03 crc kubenswrapper[4893]: I0121 07:21:03.058286 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d40a1e3b-d787-49c9-b719-7c204a6e5ec8-scripts\") pod \"ceilometer-0\" (UID: \"d40a1e3b-d787-49c9-b719-7c204a6e5ec8\") " pod="openstack/ceilometer-0" Jan 21 07:21:03 crc kubenswrapper[4893]: I0121 07:21:03.058316 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwpf7\" (UniqueName: \"kubernetes.io/projected/d40a1e3b-d787-49c9-b719-7c204a6e5ec8-kube-api-access-kwpf7\") pod \"ceilometer-0\" (UID: \"d40a1e3b-d787-49c9-b719-7c204a6e5ec8\") " pod="openstack/ceilometer-0" Jan 21 07:21:03 crc kubenswrapper[4893]: I0121 07:21:03.058335 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d40a1e3b-d787-49c9-b719-7c204a6e5ec8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d40a1e3b-d787-49c9-b719-7c204a6e5ec8\") " 
pod="openstack/ceilometer-0" Jan 21 07:21:03 crc kubenswrapper[4893]: I0121 07:21:03.058373 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d40a1e3b-d787-49c9-b719-7c204a6e5ec8-config-data\") pod \"ceilometer-0\" (UID: \"d40a1e3b-d787-49c9-b719-7c204a6e5ec8\") " pod="openstack/ceilometer-0" Jan 21 07:21:03 crc kubenswrapper[4893]: I0121 07:21:03.058394 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d40a1e3b-d787-49c9-b719-7c204a6e5ec8-run-httpd\") pod \"ceilometer-0\" (UID: \"d40a1e3b-d787-49c9-b719-7c204a6e5ec8\") " pod="openstack/ceilometer-0" Jan 21 07:21:03 crc kubenswrapper[4893]: I0121 07:21:03.159884 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d40a1e3b-d787-49c9-b719-7c204a6e5ec8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d40a1e3b-d787-49c9-b719-7c204a6e5ec8\") " pod="openstack/ceilometer-0" Jan 21 07:21:03 crc kubenswrapper[4893]: I0121 07:21:03.160008 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d40a1e3b-d787-49c9-b719-7c204a6e5ec8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d40a1e3b-d787-49c9-b719-7c204a6e5ec8\") " pod="openstack/ceilometer-0" Jan 21 07:21:03 crc kubenswrapper[4893]: I0121 07:21:03.160050 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d40a1e3b-d787-49c9-b719-7c204a6e5ec8-log-httpd\") pod \"ceilometer-0\" (UID: \"d40a1e3b-d787-49c9-b719-7c204a6e5ec8\") " pod="openstack/ceilometer-0" Jan 21 07:21:03 crc kubenswrapper[4893]: I0121 07:21:03.160171 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d40a1e3b-d787-49c9-b719-7c204a6e5ec8-scripts\") pod \"ceilometer-0\" (UID: \"d40a1e3b-d787-49c9-b719-7c204a6e5ec8\") " pod="openstack/ceilometer-0" Jan 21 07:21:03 crc kubenswrapper[4893]: I0121 07:21:03.160209 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwpf7\" (UniqueName: \"kubernetes.io/projected/d40a1e3b-d787-49c9-b719-7c204a6e5ec8-kube-api-access-kwpf7\") pod \"ceilometer-0\" (UID: \"d40a1e3b-d787-49c9-b719-7c204a6e5ec8\") " pod="openstack/ceilometer-0" Jan 21 07:21:03 crc kubenswrapper[4893]: I0121 07:21:03.160238 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d40a1e3b-d787-49c9-b719-7c204a6e5ec8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d40a1e3b-d787-49c9-b719-7c204a6e5ec8\") " pod="openstack/ceilometer-0" Jan 21 07:21:03 crc kubenswrapper[4893]: I0121 07:21:03.160279 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d40a1e3b-d787-49c9-b719-7c204a6e5ec8-config-data\") pod \"ceilometer-0\" (UID: \"d40a1e3b-d787-49c9-b719-7c204a6e5ec8\") " pod="openstack/ceilometer-0" Jan 21 07:21:03 crc kubenswrapper[4893]: I0121 07:21:03.160300 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d40a1e3b-d787-49c9-b719-7c204a6e5ec8-run-httpd\") pod \"ceilometer-0\" (UID: 
\"d40a1e3b-d787-49c9-b719-7c204a6e5ec8\") " pod="openstack/ceilometer-0" Jan 21 07:21:03 crc kubenswrapper[4893]: I0121 07:21:03.160820 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d40a1e3b-d787-49c9-b719-7c204a6e5ec8-run-httpd\") pod \"ceilometer-0\" (UID: \"d40a1e3b-d787-49c9-b719-7c204a6e5ec8\") " pod="openstack/ceilometer-0" Jan 21 07:21:03 crc kubenswrapper[4893]: I0121 07:21:03.163454 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d40a1e3b-d787-49c9-b719-7c204a6e5ec8-log-httpd\") pod \"ceilometer-0\" (UID: \"d40a1e3b-d787-49c9-b719-7c204a6e5ec8\") " pod="openstack/ceilometer-0" Jan 21 07:21:03 crc kubenswrapper[4893]: I0121 07:21:03.169026 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d40a1e3b-d787-49c9-b719-7c204a6e5ec8-scripts\") pod \"ceilometer-0\" (UID: \"d40a1e3b-d787-49c9-b719-7c204a6e5ec8\") " pod="openstack/ceilometer-0" Jan 21 07:21:03 crc kubenswrapper[4893]: I0121 07:21:03.169294 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d40a1e3b-d787-49c9-b719-7c204a6e5ec8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d40a1e3b-d787-49c9-b719-7c204a6e5ec8\") " pod="openstack/ceilometer-0" Jan 21 07:21:03 crc kubenswrapper[4893]: I0121 07:21:03.169442 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d40a1e3b-d787-49c9-b719-7c204a6e5ec8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d40a1e3b-d787-49c9-b719-7c204a6e5ec8\") " pod="openstack/ceilometer-0" Jan 21 07:21:03 crc kubenswrapper[4893]: I0121 07:21:03.171496 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d40a1e3b-d787-49c9-b719-7c204a6e5ec8-config-data\") pod \"ceilometer-0\" (UID: \"d40a1e3b-d787-49c9-b719-7c204a6e5ec8\") " pod="openstack/ceilometer-0" Jan 21 07:21:03 crc kubenswrapper[4893]: I0121 07:21:03.175306 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d40a1e3b-d787-49c9-b719-7c204a6e5ec8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d40a1e3b-d787-49c9-b719-7c204a6e5ec8\") " pod="openstack/ceilometer-0" Jan 21 07:21:03 crc kubenswrapper[4893]: I0121 07:21:03.181266 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwpf7\" (UniqueName: \"kubernetes.io/projected/d40a1e3b-d787-49c9-b719-7c204a6e5ec8-kube-api-access-kwpf7\") pod \"ceilometer-0\" (UID: \"d40a1e3b-d787-49c9-b719-7c204a6e5ec8\") " pod="openstack/ceilometer-0" Jan 21 07:21:03 crc kubenswrapper[4893]: W0121 07:21:03.309734 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4a1a3b4_4137_4a6e_b8d3_20518f38a2d6.slice/crio-23b340696cbcd0f8c8081aa9b941420d9031aee23f3e1a7cab95827eb24b881b WatchSource:0}: Error finding container 23b340696cbcd0f8c8081aa9b941420d9031aee23f3e1a7cab95827eb24b881b: Status 404 returned error can't find the container with id 23b340696cbcd0f8c8081aa9b941420d9031aee23f3e1a7cab95827eb24b881b Jan 21 07:21:03 crc kubenswrapper[4893]: I0121 07:21:03.311518 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 
21 07:21:03 crc kubenswrapper[4893]: I0121 07:21:03.349661 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 07:21:03 crc kubenswrapper[4893]: I0121 07:21:03.599541 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02b7554d-6f6f-4e6b-92be-d8f58bb89bf5" path="/var/lib/kubelet/pods/02b7554d-6f6f-4e6b-92be-d8f58bb89bf5/volumes" Jan 21 07:21:03 crc kubenswrapper[4893]: I0121 07:21:03.601482 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3e5429e-adae-4013-ad4e-a6d64b6fb32a" path="/var/lib/kubelet/pods/c3e5429e-adae-4013-ad4e-a6d64b6fb32a/volumes" Jan 21 07:21:03 crc kubenswrapper[4893]: I0121 07:21:03.938153 4893 generic.go:334] "Generic (PLEG): container finished" podID="482b048f-92a3-485c-be9b-cc4d4bea116f" containerID="694ad6afb1b9ec0a80b08162277bfaec0d3f9842ddae8e178a429d664a54ec5c" exitCode=0 Jan 21 07:21:03 crc kubenswrapper[4893]: I0121 07:21:03.938238 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-ghmq8" event={"ID":"482b048f-92a3-485c-be9b-cc4d4bea116f","Type":"ContainerDied","Data":"694ad6afb1b9ec0a80b08162277bfaec0d3f9842ddae8e178a429d664a54ec5c"} Jan 21 07:21:03 crc kubenswrapper[4893]: I0121 07:21:03.953361 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6","Type":"ContainerStarted","Data":"b004bcda62aef9e8aee81239d327f2808f42d03c6caacf5809d4f355361f7480"} Jan 21 07:21:03 crc kubenswrapper[4893]: I0121 07:21:03.953447 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6","Type":"ContainerStarted","Data":"23b340696cbcd0f8c8081aa9b941420d9031aee23f3e1a7cab95827eb24b881b"} Jan 21 07:21:03 crc kubenswrapper[4893]: I0121 07:21:03.983414 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.983391217 podStartE2EDuration="2.983391217s" podCreationTimestamp="2026-01-21 07:21:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:21:03.979928987 +0000 UTC m=+1605.210274899" watchObservedRunningTime="2026-01-21 07:21:03.983391217 +0000 UTC m=+1605.213737119" Jan 21 07:21:04 crc kubenswrapper[4893]: I0121 07:21:04.009964 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 07:21:04 crc kubenswrapper[4893]: I0121 07:21:04.834307 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 07:21:04 crc kubenswrapper[4893]: I0121 07:21:04.835643 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b9dae931-130e-4eb4-b1ce-c7018b1ac72c" containerName="nova-api-api" containerID="cri-o://866c6d142d2d0df1f3f61a93b1c0e1783f4cdee8ddda60f01a7afadd052e3e39" gracePeriod=30 Jan 21 07:21:04 crc kubenswrapper[4893]: I0121 07:21:04.835163 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b9dae931-130e-4eb4-b1ce-c7018b1ac72c" containerName="nova-api-log" containerID="cri-o://377eeb9e7569803a8935b2bc8b2562e7e0a1fc482a4170cd249b18b83cf361b7" gracePeriod=30 Jan 21 07:21:04 crc kubenswrapper[4893]: I0121 07:21:04.893824 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/kube-state-metrics-0" Jan 21 07:21:04 crc kubenswrapper[4893]: I0121 07:21:04.980267 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-ghmq8" event={"ID":"482b048f-92a3-485c-be9b-cc4d4bea116f","Type":"ContainerStarted","Data":"818e24da18a15406b68a06d7381ee70499ab296ff97acccc455e382c7291d203"} Jan 21 07:21:04 crc kubenswrapper[4893]: I0121 07:21:04.981854 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-fcd6f8f8f-ghmq8" Jan 21 07:21:04 crc kubenswrapper[4893]: I0121 07:21:04.998173 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d40a1e3b-d787-49c9-b719-7c204a6e5ec8","Type":"ContainerStarted","Data":"c2c1fa9684ddfad92da8bd751e66f4a4c6411c6620f20ca6ef2053ff1ac68c09"} Jan 21 07:21:04 crc kubenswrapper[4893]: I0121 07:21:04.998219 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d40a1e3b-d787-49c9-b719-7c204a6e5ec8","Type":"ContainerStarted","Data":"f5fb6c816865087c57f8c19081fc615910becfa071ccd0cd83f6a46dd8dc4f38"} Jan 21 07:21:05 crc kubenswrapper[4893]: I0121 07:21:05.022179 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-fcd6f8f8f-ghmq8" podStartSLOduration=5.022152892 podStartE2EDuration="5.022152892s" podCreationTimestamp="2026-01-21 07:21:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:21:05.017956571 +0000 UTC m=+1606.248302493" watchObservedRunningTime="2026-01-21 07:21:05.022152892 +0000 UTC m=+1606.252498794" Jan 21 07:21:05 crc kubenswrapper[4893]: I0121 07:21:05.064471 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 07:21:06 crc kubenswrapper[4893]: I0121 07:21:06.052308 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-njscl"] Jan 21 07:21:06 crc kubenswrapper[4893]: I0121 07:21:06.055030 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-njscl" Jan 21 07:21:06 crc kubenswrapper[4893]: I0121 07:21:06.086489 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-njscl"] Jan 21 07:21:06 crc kubenswrapper[4893]: I0121 07:21:06.094477 4893 generic.go:334] "Generic (PLEG): container finished" podID="b9dae931-130e-4eb4-b1ce-c7018b1ac72c" containerID="377eeb9e7569803a8935b2bc8b2562e7e0a1fc482a4170cd249b18b83cf361b7" exitCode=143 Jan 21 07:21:06 crc kubenswrapper[4893]: I0121 07:21:06.094532 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b9dae931-130e-4eb4-b1ce-c7018b1ac72c","Type":"ContainerDied","Data":"377eeb9e7569803a8935b2bc8b2562e7e0a1fc482a4170cd249b18b83cf361b7"} Jan 21 07:21:06 crc kubenswrapper[4893]: I0121 07:21:06.101935 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d8501b8-ba37-482b-b4b6-3a1190c790e6-utilities\") pod \"redhat-marketplace-njscl\" (UID: \"6d8501b8-ba37-482b-b4b6-3a1190c790e6\") " pod="openshift-marketplace/redhat-marketplace-njscl" Jan 21 07:21:06 crc kubenswrapper[4893]: I0121 07:21:06.102233 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlbhf\" (UniqueName: \"kubernetes.io/projected/6d8501b8-ba37-482b-b4b6-3a1190c790e6-kube-api-access-tlbhf\") pod \"redhat-marketplace-njscl\" (UID: \"6d8501b8-ba37-482b-b4b6-3a1190c790e6\") " pod="openshift-marketplace/redhat-marketplace-njscl" Jan 21 07:21:06 crc kubenswrapper[4893]: I0121 07:21:06.102413 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d8501b8-ba37-482b-b4b6-3a1190c790e6-catalog-content\") pod \"redhat-marketplace-njscl\" (UID: \"6d8501b8-ba37-482b-b4b6-3a1190c790e6\") " pod="openshift-marketplace/redhat-marketplace-njscl" Jan 21 07:21:06 crc kubenswrapper[4893]: I0121 07:21:06.206051 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlbhf\" (UniqueName: \"kubernetes.io/projected/6d8501b8-ba37-482b-b4b6-3a1190c790e6-kube-api-access-tlbhf\") pod \"redhat-marketplace-njscl\" (UID: \"6d8501b8-ba37-482b-b4b6-3a1190c790e6\") " pod="openshift-marketplace/redhat-marketplace-njscl" Jan 21 07:21:06 crc kubenswrapper[4893]: I0121 07:21:06.206410 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d8501b8-ba37-482b-b4b6-3a1190c790e6-catalog-content\") pod \"redhat-marketplace-njscl\" (UID: \"6d8501b8-ba37-482b-b4b6-3a1190c790e6\") " pod="openshift-marketplace/redhat-marketplace-njscl" Jan 21 07:21:06 crc kubenswrapper[4893]: I0121 07:21:06.206479 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d8501b8-ba37-482b-b4b6-3a1190c790e6-utilities\") pod \"redhat-marketplace-njscl\" (UID: \"6d8501b8-ba37-482b-b4b6-3a1190c790e6\") " pod="openshift-marketplace/redhat-marketplace-njscl" Jan 21 07:21:06 crc kubenswrapper[4893]: I0121 07:21:06.207080 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d8501b8-ba37-482b-b4b6-3a1190c790e6-catalog-content\") pod \"redhat-marketplace-njscl\" (UID: 
\"6d8501b8-ba37-482b-b4b6-3a1190c790e6\") " pod="openshift-marketplace/redhat-marketplace-njscl" Jan 21 07:21:06 crc kubenswrapper[4893]: I0121 07:21:06.207090 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d8501b8-ba37-482b-b4b6-3a1190c790e6-utilities\") pod \"redhat-marketplace-njscl\" (UID: \"6d8501b8-ba37-482b-b4b6-3a1190c790e6\") " pod="openshift-marketplace/redhat-marketplace-njscl" Jan 21 07:21:06 crc kubenswrapper[4893]: I0121 07:21:06.226381 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlbhf\" (UniqueName: \"kubernetes.io/projected/6d8501b8-ba37-482b-b4b6-3a1190c790e6-kube-api-access-tlbhf\") pod \"redhat-marketplace-njscl\" (UID: \"6d8501b8-ba37-482b-b4b6-3a1190c790e6\") " pod="openshift-marketplace/redhat-marketplace-njscl" Jan 21 07:21:06 crc kubenswrapper[4893]: I0121 07:21:06.417688 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-njscl" Jan 21 07:21:07 crc kubenswrapper[4893]: I0121 07:21:07.106761 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d40a1e3b-d787-49c9-b719-7c204a6e5ec8","Type":"ContainerStarted","Data":"0382d3cfc2505ae8e1654d6f761d0989b129cbb63ba802e46f892ad0f4c4e827"} Jan 21 07:21:07 crc kubenswrapper[4893]: I0121 07:21:07.215278 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-njscl"] Jan 21 07:21:07 crc kubenswrapper[4893]: I0121 07:21:07.283890 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 21 07:21:08 crc kubenswrapper[4893]: I0121 07:21:08.183631 4893 generic.go:334] "Generic (PLEG): container finished" podID="b9dae931-130e-4eb4-b1ce-c7018b1ac72c" containerID="866c6d142d2d0df1f3f61a93b1c0e1783f4cdee8ddda60f01a7afadd052e3e39" exitCode=0 Jan 21 07:21:08 crc kubenswrapper[4893]: I0121 07:21:08.185196 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b9dae931-130e-4eb4-b1ce-c7018b1ac72c","Type":"ContainerDied","Data":"866c6d142d2d0df1f3f61a93b1c0e1783f4cdee8ddda60f01a7afadd052e3e39"} Jan 21 07:21:08 crc kubenswrapper[4893]: I0121 07:21:08.187199 4893 generic.go:334] "Generic (PLEG): container finished" podID="6d8501b8-ba37-482b-b4b6-3a1190c790e6" containerID="aa60f2af65d9add1148425fc0dffec725fad7887e37355d7311022c73af7531c" exitCode=0 Jan 21 07:21:08 crc kubenswrapper[4893]: I0121 07:21:08.187259 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-njscl" event={"ID":"6d8501b8-ba37-482b-b4b6-3a1190c790e6","Type":"ContainerDied","Data":"aa60f2af65d9add1148425fc0dffec725fad7887e37355d7311022c73af7531c"} Jan 21 07:21:08 crc kubenswrapper[4893]: I0121 07:21:08.187285 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-njscl" event={"ID":"6d8501b8-ba37-482b-b4b6-3a1190c790e6","Type":"ContainerStarted","Data":"6a94d86bba40938d19b3b1a5161995c14e62589577f427b547197bd852b66d94"} Jan 21 07:21:08 crc kubenswrapper[4893]: I0121 07:21:08.216548 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d40a1e3b-d787-49c9-b719-7c204a6e5ec8","Type":"ContainerStarted","Data":"b0569b6f4d2150ce80c1e971ae3cdd3ec83a38df9e1e00573e52007e9f01bc86"} Jan 21 07:21:08 crc kubenswrapper[4893]: I0121 07:21:08.642358 4893 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 07:21:08 crc kubenswrapper[4893]: I0121 07:21:08.669892 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9dae931-130e-4eb4-b1ce-c7018b1ac72c-config-data\") pod \"b9dae931-130e-4eb4-b1ce-c7018b1ac72c\" (UID: \"b9dae931-130e-4eb4-b1ce-c7018b1ac72c\") " Jan 21 07:21:08 crc kubenswrapper[4893]: I0121 07:21:08.669994 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9j5k\" (UniqueName: \"kubernetes.io/projected/b9dae931-130e-4eb4-b1ce-c7018b1ac72c-kube-api-access-t9j5k\") pod \"b9dae931-130e-4eb4-b1ce-c7018b1ac72c\" (UID: \"b9dae931-130e-4eb4-b1ce-c7018b1ac72c\") " Jan 21 07:21:08 crc kubenswrapper[4893]: I0121 07:21:08.670041 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9dae931-130e-4eb4-b1ce-c7018b1ac72c-combined-ca-bundle\") pod \"b9dae931-130e-4eb4-b1ce-c7018b1ac72c\" (UID: \"b9dae931-130e-4eb4-b1ce-c7018b1ac72c\") " Jan 21 07:21:08 crc kubenswrapper[4893]: I0121 07:21:08.670069 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9dae931-130e-4eb4-b1ce-c7018b1ac72c-logs\") pod \"b9dae931-130e-4eb4-b1ce-c7018b1ac72c\" (UID: \"b9dae931-130e-4eb4-b1ce-c7018b1ac72c\") " Jan 21 07:21:08 crc kubenswrapper[4893]: I0121 07:21:08.671269 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9dae931-130e-4eb4-b1ce-c7018b1ac72c-logs" (OuterVolumeSpecName: "logs") pod "b9dae931-130e-4eb4-b1ce-c7018b1ac72c" (UID: "b9dae931-130e-4eb4-b1ce-c7018b1ac72c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:21:08 crc kubenswrapper[4893]: I0121 07:21:08.681144 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9dae931-130e-4eb4-b1ce-c7018b1ac72c-kube-api-access-t9j5k" (OuterVolumeSpecName: "kube-api-access-t9j5k") pod "b9dae931-130e-4eb4-b1ce-c7018b1ac72c" (UID: "b9dae931-130e-4eb4-b1ce-c7018b1ac72c"). InnerVolumeSpecName "kube-api-access-t9j5k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:21:08 crc kubenswrapper[4893]: I0121 07:21:08.738746 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9dae931-130e-4eb4-b1ce-c7018b1ac72c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b9dae931-130e-4eb4-b1ce-c7018b1ac72c" (UID: "b9dae931-130e-4eb4-b1ce-c7018b1ac72c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:21:08 crc kubenswrapper[4893]: I0121 07:21:08.741190 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9dae931-130e-4eb4-b1ce-c7018b1ac72c-config-data" (OuterVolumeSpecName: "config-data") pod "b9dae931-130e-4eb4-b1ce-c7018b1ac72c" (UID: "b9dae931-130e-4eb4-b1ce-c7018b1ac72c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:21:08 crc kubenswrapper[4893]: I0121 07:21:08.885235 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9j5k\" (UniqueName: \"kubernetes.io/projected/b9dae931-130e-4eb4-b1ce-c7018b1ac72c-kube-api-access-t9j5k\") on node \"crc\" DevicePath \"\"" Jan 21 07:21:08 crc kubenswrapper[4893]: I0121 07:21:08.885640 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9dae931-130e-4eb4-b1ce-c7018b1ac72c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:21:08 crc kubenswrapper[4893]: I0121 07:21:08.885734 4893 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9dae931-130e-4eb4-b1ce-c7018b1ac72c-logs\") on node \"crc\" DevicePath \"\"" Jan 21 07:21:08 crc kubenswrapper[4893]: I0121 07:21:08.885806 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9dae931-130e-4eb4-b1ce-c7018b1ac72c-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 07:21:09 crc kubenswrapper[4893]: I0121 07:21:09.229412 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b9dae931-130e-4eb4-b1ce-c7018b1ac72c","Type":"ContainerDied","Data":"e4cebc13d3acf53d33d26a2e4ab1f81877c35aefeb6d0104037d3836f40ee9ab"} Jan 21 07:21:09 crc kubenswrapper[4893]: I0121 07:21:09.229684 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 07:21:09 crc kubenswrapper[4893]: I0121 07:21:09.229700 4893 scope.go:117] "RemoveContainer" containerID="866c6d142d2d0df1f3f61a93b1c0e1783f4cdee8ddda60f01a7afadd052e3e39" Jan 21 07:21:09 crc kubenswrapper[4893]: I0121 07:21:09.267603 4893 scope.go:117] "RemoveContainer" containerID="377eeb9e7569803a8935b2bc8b2562e7e0a1fc482a4170cd249b18b83cf361b7" Jan 21 07:21:09 crc kubenswrapper[4893]: I0121 07:21:09.274598 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 07:21:09 crc kubenswrapper[4893]: I0121 07:21:09.293187 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 21 07:21:09 crc kubenswrapper[4893]: I0121 07:21:09.304623 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 07:21:09 crc kubenswrapper[4893]: E0121 07:21:09.305101 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9dae931-130e-4eb4-b1ce-c7018b1ac72c" containerName="nova-api-log" Jan 21 07:21:09 crc kubenswrapper[4893]: I0121 07:21:09.305120 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9dae931-130e-4eb4-b1ce-c7018b1ac72c" containerName="nova-api-log" Jan 21 07:21:09 crc kubenswrapper[4893]: E0121 07:21:09.305155 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9dae931-130e-4eb4-b1ce-c7018b1ac72c" containerName="nova-api-api" Jan 21 07:21:09 crc kubenswrapper[4893]: I0121 07:21:09.305161 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9dae931-130e-4eb4-b1ce-c7018b1ac72c" containerName="nova-api-api" Jan 21 07:21:09 crc kubenswrapper[4893]: I0121 07:21:09.305370 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9dae931-130e-4eb4-b1ce-c7018b1ac72c" containerName="nova-api-api" Jan 21 07:21:09 crc kubenswrapper[4893]: I0121 07:21:09.305387 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9dae931-130e-4eb4-b1ce-c7018b1ac72c" 
containerName="nova-api-log" Jan 21 07:21:09 crc kubenswrapper[4893]: I0121 07:21:09.306489 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 07:21:09 crc kubenswrapper[4893]: I0121 07:21:09.309522 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 21 07:21:09 crc kubenswrapper[4893]: I0121 07:21:09.309898 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 07:21:09 crc kubenswrapper[4893]: I0121 07:21:09.310199 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 21 07:21:09 crc kubenswrapper[4893]: I0121 07:21:09.318812 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 07:21:09 crc kubenswrapper[4893]: I0121 07:21:09.504064 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0ade42b-b725-47f4-843d-7d71669c77b7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c0ade42b-b725-47f4-843d-7d71669c77b7\") " pod="openstack/nova-api-0" Jan 21 07:21:09 crc kubenswrapper[4893]: I0121 07:21:09.504139 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0ade42b-b725-47f4-843d-7d71669c77b7-config-data\") pod \"nova-api-0\" (UID: \"c0ade42b-b725-47f4-843d-7d71669c77b7\") " pod="openstack/nova-api-0" Jan 21 07:21:09 crc kubenswrapper[4893]: I0121 07:21:09.504166 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0ade42b-b725-47f4-843d-7d71669c77b7-public-tls-certs\") pod \"nova-api-0\" (UID: \"c0ade42b-b725-47f4-843d-7d71669c77b7\") " pod="openstack/nova-api-0" Jan 21 07:21:09 crc kubenswrapper[4893]: I0121 07:21:09.504348 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0ade42b-b725-47f4-843d-7d71669c77b7-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c0ade42b-b725-47f4-843d-7d71669c77b7\") " pod="openstack/nova-api-0" Jan 21 07:21:09 crc kubenswrapper[4893]: I0121 07:21:09.504461 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0ade42b-b725-47f4-843d-7d71669c77b7-logs\") pod \"nova-api-0\" (UID: \"c0ade42b-b725-47f4-843d-7d71669c77b7\") " pod="openstack/nova-api-0" Jan 21 07:21:09 crc kubenswrapper[4893]: I0121 07:21:09.504543 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpcpv\" (UniqueName: \"kubernetes.io/projected/c0ade42b-b725-47f4-843d-7d71669c77b7-kube-api-access-xpcpv\") pod \"nova-api-0\" (UID: \"c0ade42b-b725-47f4-843d-7d71669c77b7\") " pod="openstack/nova-api-0" Jan 21 07:21:09 crc kubenswrapper[4893]: I0121 07:21:09.593338 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9dae931-130e-4eb4-b1ce-c7018b1ac72c" path="/var/lib/kubelet/pods/b9dae931-130e-4eb4-b1ce-c7018b1ac72c/volumes" Jan 21 07:21:09 crc kubenswrapper[4893]: I0121 07:21:09.607722 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c0ade42b-b725-47f4-843d-7d71669c77b7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c0ade42b-b725-47f4-843d-7d71669c77b7\") " pod="openstack/nova-api-0" Jan 21 07:21:09 crc kubenswrapper[4893]: I0121 07:21:09.607786 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0ade42b-b725-47f4-843d-7d71669c77b7-config-data\") pod \"nova-api-0\" (UID: \"c0ade42b-b725-47f4-843d-7d71669c77b7\") " pod="openstack/nova-api-0" Jan 21 07:21:09 crc kubenswrapper[4893]: I0121 07:21:09.607809 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0ade42b-b725-47f4-843d-7d71669c77b7-public-tls-certs\") pod \"nova-api-0\" (UID: \"c0ade42b-b725-47f4-843d-7d71669c77b7\") " pod="openstack/nova-api-0" Jan 21 07:21:09 crc kubenswrapper[4893]: I0121 07:21:09.607845 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0ade42b-b725-47f4-843d-7d71669c77b7-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c0ade42b-b725-47f4-843d-7d71669c77b7\") " pod="openstack/nova-api-0" Jan 21 07:21:09 crc kubenswrapper[4893]: I0121 07:21:09.607879 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0ade42b-b725-47f4-843d-7d71669c77b7-logs\") pod \"nova-api-0\" (UID: \"c0ade42b-b725-47f4-843d-7d71669c77b7\") " pod="openstack/nova-api-0" Jan 21 07:21:09 crc kubenswrapper[4893]: I0121 07:21:09.607908 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpcpv\" (UniqueName: \"kubernetes.io/projected/c0ade42b-b725-47f4-843d-7d71669c77b7-kube-api-access-xpcpv\") pod \"nova-api-0\" (UID: \"c0ade42b-b725-47f4-843d-7d71669c77b7\") " pod="openstack/nova-api-0" Jan 21 07:21:09 crc kubenswrapper[4893]: I0121 07:21:09.609006 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0ade42b-b725-47f4-843d-7d71669c77b7-logs\") pod \"nova-api-0\" (UID: \"c0ade42b-b725-47f4-843d-7d71669c77b7\") " pod="openstack/nova-api-0" Jan 21 07:21:09 crc kubenswrapper[4893]: I0121 07:21:09.613649 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0ade42b-b725-47f4-843d-7d71669c77b7-public-tls-certs\") pod \"nova-api-0\" (UID: \"c0ade42b-b725-47f4-843d-7d71669c77b7\") " pod="openstack/nova-api-0" Jan 21 07:21:09 crc kubenswrapper[4893]: I0121 07:21:09.614216 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0ade42b-b725-47f4-843d-7d71669c77b7-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c0ade42b-b725-47f4-843d-7d71669c77b7\") " pod="openstack/nova-api-0" Jan 21 07:21:09 crc kubenswrapper[4893]: I0121 07:21:09.614446 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0ade42b-b725-47f4-843d-7d71669c77b7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c0ade42b-b725-47f4-843d-7d71669c77b7\") " pod="openstack/nova-api-0" Jan 21 07:21:09 crc kubenswrapper[4893]: I0121 07:21:09.615564 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0ade42b-b725-47f4-843d-7d71669c77b7-config-data\") pod 
\"nova-api-0\" (UID: \"c0ade42b-b725-47f4-843d-7d71669c77b7\") " pod="openstack/nova-api-0" Jan 21 07:21:09 crc kubenswrapper[4893]: I0121 07:21:09.643237 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpcpv\" (UniqueName: \"kubernetes.io/projected/c0ade42b-b725-47f4-843d-7d71669c77b7-kube-api-access-xpcpv\") pod \"nova-api-0\" (UID: \"c0ade42b-b725-47f4-843d-7d71669c77b7\") " pod="openstack/nova-api-0" Jan 21 07:21:09 crc kubenswrapper[4893]: I0121 07:21:09.660225 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 07:21:10 crc kubenswrapper[4893]: I0121 07:21:10.242714 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d40a1e3b-d787-49c9-b719-7c204a6e5ec8","Type":"ContainerStarted","Data":"c4d296baa041afa903d7af72874926391658fc3f00397ac48154c51017c0d9a5"} Jan 21 07:21:10 crc kubenswrapper[4893]: I0121 07:21:10.243095 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 07:21:10 crc kubenswrapper[4893]: I0121 07:21:10.242819 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d40a1e3b-d787-49c9-b719-7c204a6e5ec8" containerName="ceilometer-central-agent" containerID="cri-o://c2c1fa9684ddfad92da8bd751e66f4a4c6411c6620f20ca6ef2053ff1ac68c09" gracePeriod=30 Jan 21 07:21:10 crc kubenswrapper[4893]: I0121 07:21:10.242965 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d40a1e3b-d787-49c9-b719-7c204a6e5ec8" containerName="ceilometer-notification-agent" containerID="cri-o://0382d3cfc2505ae8e1654d6f761d0989b129cbb63ba802e46f892ad0f4c4e827" gracePeriod=30 Jan 21 07:21:10 crc kubenswrapper[4893]: I0121 07:21:10.242977 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d40a1e3b-d787-49c9-b719-7c204a6e5ec8" containerName="sg-core" containerID="cri-o://b0569b6f4d2150ce80c1e971ae3cdd3ec83a38df9e1e00573e52007e9f01bc86" gracePeriod=30 Jan 21 07:21:10 crc kubenswrapper[4893]: I0121 07:21:10.242869 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d40a1e3b-d787-49c9-b719-7c204a6e5ec8" containerName="proxy-httpd" containerID="cri-o://c4d296baa041afa903d7af72874926391658fc3f00397ac48154c51017c0d9a5" gracePeriod=30 Jan 21 07:21:10 crc kubenswrapper[4893]: I0121 07:21:10.259656 4893 generic.go:334] "Generic (PLEG): container finished" podID="6d8501b8-ba37-482b-b4b6-3a1190c790e6" containerID="84ea2aca9bba883fbc155fa43351c7c674b925d1882ef32a530a5fdba699ad52" exitCode=0 Jan 21 07:21:10 crc kubenswrapper[4893]: I0121 07:21:10.259749 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-njscl" event={"ID":"6d8501b8-ba37-482b-b4b6-3a1190c790e6","Type":"ContainerDied","Data":"84ea2aca9bba883fbc155fa43351c7c674b925d1882ef32a530a5fdba699ad52"} Jan 21 07:21:10 crc kubenswrapper[4893]: I0121 07:21:10.278220 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.634647866 podStartE2EDuration="8.278187406s" podCreationTimestamp="2026-01-21 07:21:02 +0000 UTC" firstStartedPulling="2026-01-21 07:21:04.013961028 +0000 UTC m=+1605.244306930" lastFinishedPulling="2026-01-21 07:21:08.657500568 +0000 UTC m=+1609.887846470" observedRunningTime="2026-01-21 07:21:10.265274874 +0000 
UTC m=+1611.495620776" watchObservedRunningTime="2026-01-21 07:21:10.278187406 +0000 UTC m=+1611.508533308" Jan 21 07:21:10 crc kubenswrapper[4893]: I0121 07:21:10.398230 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 07:21:11 crc kubenswrapper[4893]: I0121 07:21:11.574633 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-fcd6f8f8f-ghmq8" Jan 21 07:21:11 crc kubenswrapper[4893]: I0121 07:21:11.582009 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c0ade42b-b725-47f4-843d-7d71669c77b7","Type":"ContainerStarted","Data":"d946e1675bc6f5d187d1983f4b22d85530d52143ae89943298f5bc1e4aeed834"} Jan 21 07:21:11 crc kubenswrapper[4893]: I0121 07:21:11.582057 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c0ade42b-b725-47f4-843d-7d71669c77b7","Type":"ContainerStarted","Data":"8b68dcf0d335b5546ef5cff834318c8f6bbed2ae7bbd6240453d9055a736498b"} Jan 21 07:21:11 crc kubenswrapper[4893]: I0121 07:21:11.582068 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c0ade42b-b725-47f4-843d-7d71669c77b7","Type":"ContainerStarted","Data":"e173be69fa64b7d592cb7b0c11b4a90ecae476883ae27e1d13e267378bf8e2ef"} Jan 21 07:21:11 crc kubenswrapper[4893]: I0121 07:21:11.631108 4893 generic.go:334] "Generic (PLEG): container finished" podID="d40a1e3b-d787-49c9-b719-7c204a6e5ec8" containerID="c4d296baa041afa903d7af72874926391658fc3f00397ac48154c51017c0d9a5" exitCode=0 Jan 21 07:21:11 crc kubenswrapper[4893]: I0121 07:21:11.631175 4893 generic.go:334] "Generic (PLEG): container finished" podID="d40a1e3b-d787-49c9-b719-7c204a6e5ec8" containerID="b0569b6f4d2150ce80c1e971ae3cdd3ec83a38df9e1e00573e52007e9f01bc86" exitCode=2 Jan 21 07:21:11 crc kubenswrapper[4893]: I0121 07:21:11.631186 4893 generic.go:334] "Generic (PLEG): container finished" podID="d40a1e3b-d787-49c9-b719-7c204a6e5ec8" containerID="0382d3cfc2505ae8e1654d6f761d0989b129cbb63ba802e46f892ad0f4c4e827" exitCode=0 Jan 21 07:21:11 crc kubenswrapper[4893]: I0121 07:21:11.654014 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.653984462 podStartE2EDuration="2.653984462s" podCreationTimestamp="2026-01-21 07:21:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:21:11.631553775 +0000 UTC m=+1612.861899677" watchObservedRunningTime="2026-01-21 07:21:11.653984462 +0000 UTC m=+1612.884330364" Jan 21 07:21:11 crc kubenswrapper[4893]: I0121 07:21:11.663599 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d40a1e3b-d787-49c9-b719-7c204a6e5ec8","Type":"ContainerDied","Data":"c4d296baa041afa903d7af72874926391658fc3f00397ac48154c51017c0d9a5"} Jan 21 07:21:11 crc kubenswrapper[4893]: I0121 07:21:11.663659 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d40a1e3b-d787-49c9-b719-7c204a6e5ec8","Type":"ContainerDied","Data":"b0569b6f4d2150ce80c1e971ae3cdd3ec83a38df9e1e00573e52007e9f01bc86"} Jan 21 07:21:11 crc kubenswrapper[4893]: I0121 07:21:11.663691 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d40a1e3b-d787-49c9-b719-7c204a6e5ec8","Type":"ContainerDied","Data":"0382d3cfc2505ae8e1654d6f761d0989b129cbb63ba802e46f892ad0f4c4e827"} Jan 21 07:21:11 crc 
kubenswrapper[4893]: I0121 07:21:11.663702 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-njscl" event={"ID":"6d8501b8-ba37-482b-b4b6-3a1190c790e6","Type":"ContainerStarted","Data":"378d8fee20fdd796010e0a44dd1f21783fda9dad206fe777c99b3a82a1e182f8"} Jan 21 07:21:11 crc kubenswrapper[4893]: I0121 07:21:11.695829 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-58977"] Jan 21 07:21:11 crc kubenswrapper[4893]: I0121 07:21:11.696102 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-647df7b8c5-58977" podUID="a17bf972-087d-4a0b-8ee1-63b4606f243e" containerName="dnsmasq-dns" containerID="cri-o://88cd396a0d3997efb0abddf87b50caa29eee8322d5e3205e57975bb7925543cd" gracePeriod=10 Jan 21 07:21:11 crc kubenswrapper[4893]: I0121 07:21:11.698259 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-njscl" podStartSLOduration=4.124920189 podStartE2EDuration="6.698246538s" podCreationTimestamp="2026-01-21 07:21:05 +0000 UTC" firstStartedPulling="2026-01-21 07:21:08.195223928 +0000 UTC m=+1609.425569840" lastFinishedPulling="2026-01-21 07:21:10.768550287 +0000 UTC m=+1611.998896189" observedRunningTime="2026-01-21 07:21:11.690647169 +0000 UTC m=+1612.920993081" watchObservedRunningTime="2026-01-21 07:21:11.698246538 +0000 UTC m=+1612.928592440" Jan 21 07:21:12 crc kubenswrapper[4893]: I0121 07:21:12.347776 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 21 07:21:12 crc kubenswrapper[4893]: I0121 07:21:12.438734 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 21 07:21:12 crc kubenswrapper[4893]: I0121 07:21:12.668627 4893 generic.go:334] "Generic (PLEG): container finished" podID="a17bf972-087d-4a0b-8ee1-63b4606f243e" containerID="88cd396a0d3997efb0abddf87b50caa29eee8322d5e3205e57975bb7925543cd" exitCode=0 Jan 21 07:21:12 crc kubenswrapper[4893]: I0121 07:21:12.668705 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-58977" event={"ID":"a17bf972-087d-4a0b-8ee1-63b4606f243e","Type":"ContainerDied","Data":"88cd396a0d3997efb0abddf87b50caa29eee8322d5e3205e57975bb7925543cd"} Jan 21 07:21:12 crc kubenswrapper[4893]: I0121 07:21:12.686171 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 21 07:21:13 crc kubenswrapper[4893]: I0121 07:21:13.164974 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-njghj"] Jan 21 07:21:13 crc kubenswrapper[4893]: I0121 07:21:13.179226 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-njghj" Jan 21 07:21:13 crc kubenswrapper[4893]: I0121 07:21:13.183725 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 21 07:21:13 crc kubenswrapper[4893]: I0121 07:21:13.183820 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 21 07:21:13 crc kubenswrapper[4893]: I0121 07:21:13.186352 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-njghj"] Jan 21 07:21:13 crc kubenswrapper[4893]: I0121 07:21:13.281222 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z2kj\" (UniqueName: \"kubernetes.io/projected/fea804eb-300b-451e-aa99-99ff7ed06070-kube-api-access-9z2kj\") pod \"nova-cell1-cell-mapping-njghj\" (UID: \"fea804eb-300b-451e-aa99-99ff7ed06070\") " pod="openstack/nova-cell1-cell-mapping-njghj" Jan 21 07:21:13 crc kubenswrapper[4893]: I0121 07:21:13.281303 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fea804eb-300b-451e-aa99-99ff7ed06070-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-njghj\" (UID: \"fea804eb-300b-451e-aa99-99ff7ed06070\") " pod="openstack/nova-cell1-cell-mapping-njghj" Jan 21 07:21:13 crc kubenswrapper[4893]: I0121 07:21:13.281386 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fea804eb-300b-451e-aa99-99ff7ed06070-config-data\") pod \"nova-cell1-cell-mapping-njghj\" (UID: \"fea804eb-300b-451e-aa99-99ff7ed06070\") " pod="openstack/nova-cell1-cell-mapping-njghj" Jan 21 07:21:13 crc kubenswrapper[4893]: I0121 07:21:13.281500 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fea804eb-300b-451e-aa99-99ff7ed06070-scripts\") pod \"nova-cell1-cell-mapping-njghj\" (UID: \"fea804eb-300b-451e-aa99-99ff7ed06070\") " pod="openstack/nova-cell1-cell-mapping-njghj" Jan 21 07:21:13 crc kubenswrapper[4893]: I0121 07:21:13.367604 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-647df7b8c5-58977" Jan 21 07:21:13 crc kubenswrapper[4893]: I0121 07:21:13.383764 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fea804eb-300b-451e-aa99-99ff7ed06070-config-data\") pod \"nova-cell1-cell-mapping-njghj\" (UID: \"fea804eb-300b-451e-aa99-99ff7ed06070\") " pod="openstack/nova-cell1-cell-mapping-njghj" Jan 21 07:21:13 crc kubenswrapper[4893]: I0121 07:21:13.383899 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fea804eb-300b-451e-aa99-99ff7ed06070-scripts\") pod \"nova-cell1-cell-mapping-njghj\" (UID: \"fea804eb-300b-451e-aa99-99ff7ed06070\") " pod="openstack/nova-cell1-cell-mapping-njghj" Jan 21 07:21:13 crc kubenswrapper[4893]: I0121 07:21:13.383997 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z2kj\" (UniqueName: \"kubernetes.io/projected/fea804eb-300b-451e-aa99-99ff7ed06070-kube-api-access-9z2kj\") pod \"nova-cell1-cell-mapping-njghj\" (UID: \"fea804eb-300b-451e-aa99-99ff7ed06070\") " pod="openstack/nova-cell1-cell-mapping-njghj" Jan 21 07:21:13 crc kubenswrapper[4893]: I0121 07:21:13.384024 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fea804eb-300b-451e-aa99-99ff7ed06070-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-njghj\" (UID: \"fea804eb-300b-451e-aa99-99ff7ed06070\") " pod="openstack/nova-cell1-cell-mapping-njghj" Jan 21 07:21:13 crc kubenswrapper[4893]: I0121 07:21:13.394658 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fea804eb-300b-451e-aa99-99ff7ed06070-config-data\") pod \"nova-cell1-cell-mapping-njghj\" (UID: \"fea804eb-300b-451e-aa99-99ff7ed06070\") " pod="openstack/nova-cell1-cell-mapping-njghj" Jan 21 07:21:13 crc kubenswrapper[4893]: I0121 07:21:13.395163 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fea804eb-300b-451e-aa99-99ff7ed06070-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-njghj\" (UID: \"fea804eb-300b-451e-aa99-99ff7ed06070\") " pod="openstack/nova-cell1-cell-mapping-njghj" Jan 21 07:21:13 crc kubenswrapper[4893]: I0121 07:21:13.407355 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fea804eb-300b-451e-aa99-99ff7ed06070-scripts\") pod \"nova-cell1-cell-mapping-njghj\" (UID: \"fea804eb-300b-451e-aa99-99ff7ed06070\") " pod="openstack/nova-cell1-cell-mapping-njghj" Jan 21 07:21:13 crc kubenswrapper[4893]: I0121 07:21:13.407366 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9z2kj\" (UniqueName: \"kubernetes.io/projected/fea804eb-300b-451e-aa99-99ff7ed06070-kube-api-access-9z2kj\") pod \"nova-cell1-cell-mapping-njghj\" (UID: \"fea804eb-300b-451e-aa99-99ff7ed06070\") " pod="openstack/nova-cell1-cell-mapping-njghj" Jan 21 07:21:13 crc kubenswrapper[4893]: I0121 07:21:13.485383 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a17bf972-087d-4a0b-8ee1-63b4606f243e-ovsdbserver-nb\") pod \"a17bf972-087d-4a0b-8ee1-63b4606f243e\" (UID: \"a17bf972-087d-4a0b-8ee1-63b4606f243e\") " Jan 21 07:21:13 crc kubenswrapper[4893]: I0121 
07:21:13.486824 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a17bf972-087d-4a0b-8ee1-63b4606f243e-config\") pod \"a17bf972-087d-4a0b-8ee1-63b4606f243e\" (UID: \"a17bf972-087d-4a0b-8ee1-63b4606f243e\") " Jan 21 07:21:13 crc kubenswrapper[4893]: I0121 07:21:13.486903 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a17bf972-087d-4a0b-8ee1-63b4606f243e-dns-swift-storage-0\") pod \"a17bf972-087d-4a0b-8ee1-63b4606f243e\" (UID: \"a17bf972-087d-4a0b-8ee1-63b4606f243e\") " Jan 21 07:21:13 crc kubenswrapper[4893]: I0121 07:21:13.486984 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a17bf972-087d-4a0b-8ee1-63b4606f243e-ovsdbserver-sb\") pod \"a17bf972-087d-4a0b-8ee1-63b4606f243e\" (UID: \"a17bf972-087d-4a0b-8ee1-63b4606f243e\") " Jan 21 07:21:13 crc kubenswrapper[4893]: I0121 07:21:13.487398 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wndxg\" (UniqueName: \"kubernetes.io/projected/a17bf972-087d-4a0b-8ee1-63b4606f243e-kube-api-access-wndxg\") pod \"a17bf972-087d-4a0b-8ee1-63b4606f243e\" (UID: \"a17bf972-087d-4a0b-8ee1-63b4606f243e\") " Jan 21 07:21:13 crc kubenswrapper[4893]: I0121 07:21:13.487512 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a17bf972-087d-4a0b-8ee1-63b4606f243e-dns-svc\") pod \"a17bf972-087d-4a0b-8ee1-63b4606f243e\" (UID: \"a17bf972-087d-4a0b-8ee1-63b4606f243e\") " Jan 21 07:21:13 crc kubenswrapper[4893]: I0121 07:21:13.493836 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a17bf972-087d-4a0b-8ee1-63b4606f243e-kube-api-access-wndxg" (OuterVolumeSpecName: "kube-api-access-wndxg") pod "a17bf972-087d-4a0b-8ee1-63b4606f243e" (UID: "a17bf972-087d-4a0b-8ee1-63b4606f243e"). InnerVolumeSpecName "kube-api-access-wndxg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:21:13 crc kubenswrapper[4893]: I0121 07:21:13.527873 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-njghj" Jan 21 07:21:13 crc kubenswrapper[4893]: I0121 07:21:13.579784 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a17bf972-087d-4a0b-8ee1-63b4606f243e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a17bf972-087d-4a0b-8ee1-63b4606f243e" (UID: "a17bf972-087d-4a0b-8ee1-63b4606f243e"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:21:13 crc kubenswrapper[4893]: I0121 07:21:13.592417 4893 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a17bf972-087d-4a0b-8ee1-63b4606f243e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 07:21:13 crc kubenswrapper[4893]: I0121 07:21:13.592453 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wndxg\" (UniqueName: \"kubernetes.io/projected/a17bf972-087d-4a0b-8ee1-63b4606f243e-kube-api-access-wndxg\") on node \"crc\" DevicePath \"\"" Jan 21 07:21:13 crc kubenswrapper[4893]: I0121 07:21:13.626560 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a17bf972-087d-4a0b-8ee1-63b4606f243e-config" (OuterVolumeSpecName: "config") pod "a17bf972-087d-4a0b-8ee1-63b4606f243e" (UID: "a17bf972-087d-4a0b-8ee1-63b4606f243e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:21:13 crc kubenswrapper[4893]: I0121 07:21:13.632364 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a17bf972-087d-4a0b-8ee1-63b4606f243e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a17bf972-087d-4a0b-8ee1-63b4606f243e" (UID: "a17bf972-087d-4a0b-8ee1-63b4606f243e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:21:13 crc kubenswrapper[4893]: I0121 07:21:13.643600 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a17bf972-087d-4a0b-8ee1-63b4606f243e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a17bf972-087d-4a0b-8ee1-63b4606f243e" (UID: "a17bf972-087d-4a0b-8ee1-63b4606f243e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:21:13 crc kubenswrapper[4893]: I0121 07:21:13.649006 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a17bf972-087d-4a0b-8ee1-63b4606f243e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a17bf972-087d-4a0b-8ee1-63b4606f243e" (UID: "a17bf972-087d-4a0b-8ee1-63b4606f243e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:21:13 crc kubenswrapper[4893]: I0121 07:21:13.680453 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-58977" event={"ID":"a17bf972-087d-4a0b-8ee1-63b4606f243e","Type":"ContainerDied","Data":"9bac1963781f1104c5418be86ac85476e980e02770b6221ace3813315c079109"} Jan 21 07:21:13 crc kubenswrapper[4893]: I0121 07:21:13.680527 4893 scope.go:117] "RemoveContainer" containerID="88cd396a0d3997efb0abddf87b50caa29eee8322d5e3205e57975bb7925543cd" Jan 21 07:21:13 crc kubenswrapper[4893]: I0121 07:21:13.680537 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-647df7b8c5-58977" Jan 21 07:21:13 crc kubenswrapper[4893]: I0121 07:21:13.694114 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a17bf972-087d-4a0b-8ee1-63b4606f243e-config\") on node \"crc\" DevicePath \"\"" Jan 21 07:21:13 crc kubenswrapper[4893]: I0121 07:21:13.694730 4893 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a17bf972-087d-4a0b-8ee1-63b4606f243e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 07:21:13 crc kubenswrapper[4893]: I0121 07:21:13.694839 4893 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a17bf972-087d-4a0b-8ee1-63b4606f243e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 07:21:13 crc kubenswrapper[4893]: I0121 07:21:13.694907 4893 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a17bf972-087d-4a0b-8ee1-63b4606f243e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 07:21:13 crc kubenswrapper[4893]: I0121 07:21:13.732298 4893 scope.go:117] "RemoveContainer" containerID="c2a1c8ca455e86c1e1069fb1dd4b0791f0f3e2e244e6a01e05ff0967f21b171d" Jan 21 07:21:13 crc kubenswrapper[4893]: I0121 07:21:13.766684 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-58977"] Jan 21 07:21:13 crc kubenswrapper[4893]: I0121 07:21:13.766762 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-58977"] Jan 21 07:21:14 crc kubenswrapper[4893]: I0121 07:21:14.335999 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-njghj"] Jan 21 07:21:14 crc kubenswrapper[4893]: I0121 07:21:14.777710 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-njghj" event={"ID":"fea804eb-300b-451e-aa99-99ff7ed06070","Type":"ContainerStarted","Data":"ba990b699b2b60af11cc880e2d45fa9ed408c8bcc6faf3f01ecc966b1119859b"} Jan 21 07:21:15 crc kubenswrapper[4893]: I0121 07:21:15.593093 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a17bf972-087d-4a0b-8ee1-63b4606f243e" path="/var/lib/kubelet/pods/a17bf972-087d-4a0b-8ee1-63b4606f243e/volumes" Jan 21 07:21:15 crc kubenswrapper[4893]: I0121 07:21:15.812395 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-njghj" event={"ID":"fea804eb-300b-451e-aa99-99ff7ed06070","Type":"ContainerStarted","Data":"88db4e29765c55f1a72698cb0b6f216ca1cb80f3ab802906082f781183257c89"} Jan 21 07:21:15 crc kubenswrapper[4893]: I0121 07:21:15.839214 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-njghj" podStartSLOduration=2.839191744 podStartE2EDuration="2.839191744s" podCreationTimestamp="2026-01-21 07:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:21:15.830601796 +0000 UTC m=+1617.060947698" watchObservedRunningTime="2026-01-21 07:21:15.839191744 +0000 UTC m=+1617.069537646" Jan 21 07:21:16 crc kubenswrapper[4893]: I0121 07:21:16.418274 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-njscl" Jan 21 07:21:16 crc kubenswrapper[4893]: I0121 07:21:16.418610 4893 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-njscl" Jan 21 07:21:16 crc kubenswrapper[4893]: I0121 07:21:16.470609 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-njscl" Jan 21 07:21:16 crc kubenswrapper[4893]: I0121 07:21:16.841256 4893 generic.go:334] "Generic (PLEG): container finished" podID="d40a1e3b-d787-49c9-b719-7c204a6e5ec8" containerID="c2c1fa9684ddfad92da8bd751e66f4a4c6411c6620f20ca6ef2053ff1ac68c09" exitCode=0 Jan 21 07:21:16 crc kubenswrapper[4893]: I0121 07:21:16.842131 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d40a1e3b-d787-49c9-b719-7c204a6e5ec8","Type":"ContainerDied","Data":"c2c1fa9684ddfad92da8bd751e66f4a4c6411c6620f20ca6ef2053ff1ac68c09"} Jan 21 07:21:16 crc kubenswrapper[4893]: I0121 07:21:16.894550 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-njscl" Jan 21 07:21:17 crc kubenswrapper[4893]: I0121 07:21:17.139059 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-njscl"] Jan 21 07:21:17 crc kubenswrapper[4893]: I0121 07:21:17.359761 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 07:21:17 crc kubenswrapper[4893]: I0121 07:21:17.417583 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d40a1e3b-d787-49c9-b719-7c204a6e5ec8-log-httpd\") pod \"d40a1e3b-d787-49c9-b719-7c204a6e5ec8\" (UID: \"d40a1e3b-d787-49c9-b719-7c204a6e5ec8\") " Jan 21 07:21:17 crc kubenswrapper[4893]: I0121 07:21:17.417697 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d40a1e3b-d787-49c9-b719-7c204a6e5ec8-run-httpd\") pod \"d40a1e3b-d787-49c9-b719-7c204a6e5ec8\" (UID: \"d40a1e3b-d787-49c9-b719-7c204a6e5ec8\") " Jan 21 07:21:17 crc kubenswrapper[4893]: I0121 07:21:17.417723 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d40a1e3b-d787-49c9-b719-7c204a6e5ec8-config-data\") pod \"d40a1e3b-d787-49c9-b719-7c204a6e5ec8\" (UID: \"d40a1e3b-d787-49c9-b719-7c204a6e5ec8\") " Jan 21 07:21:17 crc kubenswrapper[4893]: I0121 07:21:17.417771 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d40a1e3b-d787-49c9-b719-7c204a6e5ec8-ceilometer-tls-certs\") pod \"d40a1e3b-d787-49c9-b719-7c204a6e5ec8\" (UID: \"d40a1e3b-d787-49c9-b719-7c204a6e5ec8\") " Jan 21 07:21:17 crc kubenswrapper[4893]: I0121 07:21:17.417815 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d40a1e3b-d787-49c9-b719-7c204a6e5ec8-scripts\") pod \"d40a1e3b-d787-49c9-b719-7c204a6e5ec8\" (UID: \"d40a1e3b-d787-49c9-b719-7c204a6e5ec8\") " Jan 21 07:21:17 crc kubenswrapper[4893]: I0121 07:21:17.417925 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d40a1e3b-d787-49c9-b719-7c204a6e5ec8-combined-ca-bundle\") pod \"d40a1e3b-d787-49c9-b719-7c204a6e5ec8\" (UID: \"d40a1e3b-d787-49c9-b719-7c204a6e5ec8\") " Jan 21 07:21:17 crc kubenswrapper[4893]: I0121 07:21:17.417984 4893 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d40a1e3b-d787-49c9-b719-7c204a6e5ec8-sg-core-conf-yaml\") pod \"d40a1e3b-d787-49c9-b719-7c204a6e5ec8\" (UID: \"d40a1e3b-d787-49c9-b719-7c204a6e5ec8\") " Jan 21 07:21:17 crc kubenswrapper[4893]: I0121 07:21:17.418030 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwpf7\" (UniqueName: \"kubernetes.io/projected/d40a1e3b-d787-49c9-b719-7c204a6e5ec8-kube-api-access-kwpf7\") pod \"d40a1e3b-d787-49c9-b719-7c204a6e5ec8\" (UID: \"d40a1e3b-d787-49c9-b719-7c204a6e5ec8\") " Jan 21 07:21:17 crc kubenswrapper[4893]: I0121 07:21:17.419158 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d40a1e3b-d787-49c9-b719-7c204a6e5ec8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d40a1e3b-d787-49c9-b719-7c204a6e5ec8" (UID: "d40a1e3b-d787-49c9-b719-7c204a6e5ec8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:21:17 crc kubenswrapper[4893]: I0121 07:21:17.419444 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d40a1e3b-d787-49c9-b719-7c204a6e5ec8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d40a1e3b-d787-49c9-b719-7c204a6e5ec8" (UID: "d40a1e3b-d787-49c9-b719-7c204a6e5ec8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:21:17 crc kubenswrapper[4893]: I0121 07:21:17.425895 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d40a1e3b-d787-49c9-b719-7c204a6e5ec8-kube-api-access-kwpf7" (OuterVolumeSpecName: "kube-api-access-kwpf7") pod "d40a1e3b-d787-49c9-b719-7c204a6e5ec8" (UID: "d40a1e3b-d787-49c9-b719-7c204a6e5ec8"). InnerVolumeSpecName "kube-api-access-kwpf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:21:17 crc kubenswrapper[4893]: I0121 07:21:17.428405 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d40a1e3b-d787-49c9-b719-7c204a6e5ec8-scripts" (OuterVolumeSpecName: "scripts") pod "d40a1e3b-d787-49c9-b719-7c204a6e5ec8" (UID: "d40a1e3b-d787-49c9-b719-7c204a6e5ec8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:21:17 crc kubenswrapper[4893]: I0121 07:21:17.454068 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d40a1e3b-d787-49c9-b719-7c204a6e5ec8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d40a1e3b-d787-49c9-b719-7c204a6e5ec8" (UID: "d40a1e3b-d787-49c9-b719-7c204a6e5ec8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:21:17 crc kubenswrapper[4893]: I0121 07:21:17.484049 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d40a1e3b-d787-49c9-b719-7c204a6e5ec8-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "d40a1e3b-d787-49c9-b719-7c204a6e5ec8" (UID: "d40a1e3b-d787-49c9-b719-7c204a6e5ec8"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:21:17 crc kubenswrapper[4893]: I0121 07:21:17.506440 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d40a1e3b-d787-49c9-b719-7c204a6e5ec8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d40a1e3b-d787-49c9-b719-7c204a6e5ec8" (UID: "d40a1e3b-d787-49c9-b719-7c204a6e5ec8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:21:17 crc kubenswrapper[4893]: I0121 07:21:17.520958 4893 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d40a1e3b-d787-49c9-b719-7c204a6e5ec8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 07:21:17 crc kubenswrapper[4893]: I0121 07:21:17.521012 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwpf7\" (UniqueName: \"kubernetes.io/projected/d40a1e3b-d787-49c9-b719-7c204a6e5ec8-kube-api-access-kwpf7\") on node \"crc\" DevicePath \"\"" Jan 21 07:21:17 crc kubenswrapper[4893]: I0121 07:21:17.521029 4893 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d40a1e3b-d787-49c9-b719-7c204a6e5ec8-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 07:21:17 crc kubenswrapper[4893]: I0121 07:21:17.521042 4893 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d40a1e3b-d787-49c9-b719-7c204a6e5ec8-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 07:21:17 crc kubenswrapper[4893]: I0121 07:21:17.521056 4893 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d40a1e3b-d787-49c9-b719-7c204a6e5ec8-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 07:21:17 crc kubenswrapper[4893]: I0121 07:21:17.521068 4893 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d40a1e3b-d787-49c9-b719-7c204a6e5ec8-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:21:17 crc kubenswrapper[4893]: I0121 07:21:17.521080 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d40a1e3b-d787-49c9-b719-7c204a6e5ec8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:21:17 crc kubenswrapper[4893]: I0121 07:21:17.547090 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d40a1e3b-d787-49c9-b719-7c204a6e5ec8-config-data" (OuterVolumeSpecName: "config-data") pod "d40a1e3b-d787-49c9-b719-7c204a6e5ec8" (UID: "d40a1e3b-d787-49c9-b719-7c204a6e5ec8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:21:17 crc kubenswrapper[4893]: I0121 07:21:17.622215 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d40a1e3b-d787-49c9-b719-7c204a6e5ec8-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 07:21:17 crc kubenswrapper[4893]: I0121 07:21:17.958558 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 07:21:17 crc kubenswrapper[4893]: I0121 07:21:17.958747 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d40a1e3b-d787-49c9-b719-7c204a6e5ec8","Type":"ContainerDied","Data":"f5fb6c816865087c57f8c19081fc615910becfa071ccd0cd83f6a46dd8dc4f38"} Jan 21 07:21:17 crc kubenswrapper[4893]: I0121 07:21:17.958810 4893 scope.go:117] "RemoveContainer" containerID="c4d296baa041afa903d7af72874926391658fc3f00397ac48154c51017c0d9a5" Jan 21 07:21:17 crc kubenswrapper[4893]: I0121 07:21:17.988784 4893 scope.go:117] "RemoveContainer" containerID="b0569b6f4d2150ce80c1e971ae3cdd3ec83a38df9e1e00573e52007e9f01bc86" Jan 21 07:21:17 crc kubenswrapper[4893]: I0121 07:21:17.997941 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 07:21:18 crc kubenswrapper[4893]: I0121 07:21:18.010125 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-647df7b8c5-58977" podUID="a17bf972-087d-4a0b-8ee1-63b4606f243e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.189:5353: i/o timeout" Jan 21 07:21:18 crc kubenswrapper[4893]: I0121 07:21:18.017451 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 07:21:18 crc kubenswrapper[4893]: I0121 07:21:18.041588 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 07:21:18 crc kubenswrapper[4893]: E0121 07:21:18.043653 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a17bf972-087d-4a0b-8ee1-63b4606f243e" containerName="init" Jan 21 07:21:18 crc kubenswrapper[4893]: I0121 07:21:18.043694 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="a17bf972-087d-4a0b-8ee1-63b4606f243e" containerName="init" Jan 21 07:21:18 crc kubenswrapper[4893]: E0121 07:21:18.043725 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d40a1e3b-d787-49c9-b719-7c204a6e5ec8" containerName="sg-core" Jan 21 07:21:18 crc kubenswrapper[4893]: I0121 07:21:18.043734 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="d40a1e3b-d787-49c9-b719-7c204a6e5ec8" containerName="sg-core" Jan 21 07:21:18 crc kubenswrapper[4893]: E0121 07:21:18.043772 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a17bf972-087d-4a0b-8ee1-63b4606f243e" containerName="dnsmasq-dns" Jan 21 07:21:18 crc kubenswrapper[4893]: I0121 07:21:18.043782 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="a17bf972-087d-4a0b-8ee1-63b4606f243e" containerName="dnsmasq-dns" Jan 21 07:21:18 crc kubenswrapper[4893]: E0121 07:21:18.043796 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d40a1e3b-d787-49c9-b719-7c204a6e5ec8" containerName="proxy-httpd" Jan 21 07:21:18 crc kubenswrapper[4893]: I0121 07:21:18.043803 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="d40a1e3b-d787-49c9-b719-7c204a6e5ec8" containerName="proxy-httpd" Jan 21 07:21:18 crc kubenswrapper[4893]: E0121 07:21:18.043830 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d40a1e3b-d787-49c9-b719-7c204a6e5ec8" containerName="ceilometer-notification-agent" Jan 21 07:21:18 crc kubenswrapper[4893]: I0121 07:21:18.043838 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="d40a1e3b-d787-49c9-b719-7c204a6e5ec8" containerName="ceilometer-notification-agent" Jan 21 07:21:18 crc kubenswrapper[4893]: E0121 07:21:18.043861 4893 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d40a1e3b-d787-49c9-b719-7c204a6e5ec8" containerName="ceilometer-central-agent" Jan 21 07:21:18 crc kubenswrapper[4893]: I0121 07:21:18.043871 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="d40a1e3b-d787-49c9-b719-7c204a6e5ec8" containerName="ceilometer-central-agent" Jan 21 07:21:18 crc kubenswrapper[4893]: I0121 07:21:18.043876 4893 scope.go:117] "RemoveContainer" containerID="0382d3cfc2505ae8e1654d6f761d0989b129cbb63ba802e46f892ad0f4c4e827" Jan 21 07:21:18 crc kubenswrapper[4893]: I0121 07:21:18.044313 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="d40a1e3b-d787-49c9-b719-7c204a6e5ec8" containerName="proxy-httpd" Jan 21 07:21:18 crc kubenswrapper[4893]: I0121 07:21:18.044337 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="a17bf972-087d-4a0b-8ee1-63b4606f243e" containerName="dnsmasq-dns" Jan 21 07:21:18 crc kubenswrapper[4893]: I0121 07:21:18.044377 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="d40a1e3b-d787-49c9-b719-7c204a6e5ec8" containerName="ceilometer-central-agent" Jan 21 07:21:18 crc kubenswrapper[4893]: I0121 07:21:18.044395 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="d40a1e3b-d787-49c9-b719-7c204a6e5ec8" containerName="ceilometer-notification-agent" Jan 21 07:21:18 crc kubenswrapper[4893]: I0121 07:21:18.044410 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="d40a1e3b-d787-49c9-b719-7c204a6e5ec8" containerName="sg-core" Jan 21 07:21:18 crc kubenswrapper[4893]: I0121 07:21:18.048252 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 07:21:18 crc kubenswrapper[4893]: I0121 07:21:18.050853 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 07:21:18 crc kubenswrapper[4893]: I0121 07:21:18.054118 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 07:21:18 crc kubenswrapper[4893]: I0121 07:21:18.055737 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 21 07:21:18 crc kubenswrapper[4893]: I0121 07:21:18.059537 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 07:21:18 crc kubenswrapper[4893]: I0121 07:21:18.099189 4893 scope.go:117] "RemoveContainer" containerID="c2c1fa9684ddfad92da8bd751e66f4a4c6411c6620f20ca6ef2053ff1ac68c09" Jan 21 07:21:18 crc kubenswrapper[4893]: I0121 07:21:18.144928 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz5lg\" (UniqueName: \"kubernetes.io/projected/f891af55-ec46-4261-9f5e-01a1c181f194-kube-api-access-nz5lg\") pod \"ceilometer-0\" (UID: \"f891af55-ec46-4261-9f5e-01a1c181f194\") " pod="openstack/ceilometer-0" Jan 21 07:21:18 crc kubenswrapper[4893]: I0121 07:21:18.145121 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f891af55-ec46-4261-9f5e-01a1c181f194-scripts\") pod \"ceilometer-0\" (UID: \"f891af55-ec46-4261-9f5e-01a1c181f194\") " pod="openstack/ceilometer-0" Jan 21 07:21:18 crc kubenswrapper[4893]: I0121 07:21:18.145167 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f891af55-ec46-4261-9f5e-01a1c181f194-config-data\") pod \"ceilometer-0\" (UID: 
\"f891af55-ec46-4261-9f5e-01a1c181f194\") " pod="openstack/ceilometer-0" Jan 21 07:21:18 crc kubenswrapper[4893]: I0121 07:21:18.145231 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f891af55-ec46-4261-9f5e-01a1c181f194-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f891af55-ec46-4261-9f5e-01a1c181f194\") " pod="openstack/ceilometer-0" Jan 21 07:21:18 crc kubenswrapper[4893]: I0121 07:21:18.145256 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f891af55-ec46-4261-9f5e-01a1c181f194-log-httpd\") pod \"ceilometer-0\" (UID: \"f891af55-ec46-4261-9f5e-01a1c181f194\") " pod="openstack/ceilometer-0" Jan 21 07:21:18 crc kubenswrapper[4893]: I0121 07:21:18.147056 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f891af55-ec46-4261-9f5e-01a1c181f194-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f891af55-ec46-4261-9f5e-01a1c181f194\") " pod="openstack/ceilometer-0" Jan 21 07:21:18 crc kubenswrapper[4893]: I0121 07:21:18.147373 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f891af55-ec46-4261-9f5e-01a1c181f194-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f891af55-ec46-4261-9f5e-01a1c181f194\") " pod="openstack/ceilometer-0" Jan 21 07:21:18 crc kubenswrapper[4893]: I0121 07:21:18.147482 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f891af55-ec46-4261-9f5e-01a1c181f194-run-httpd\") pod \"ceilometer-0\" (UID: \"f891af55-ec46-4261-9f5e-01a1c181f194\") " pod="openstack/ceilometer-0" Jan 21 07:21:18 crc kubenswrapper[4893]: I0121 07:21:18.249554 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f891af55-ec46-4261-9f5e-01a1c181f194-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f891af55-ec46-4261-9f5e-01a1c181f194\") " pod="openstack/ceilometer-0" Jan 21 07:21:18 crc kubenswrapper[4893]: I0121 07:21:18.249634 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f891af55-ec46-4261-9f5e-01a1c181f194-run-httpd\") pod \"ceilometer-0\" (UID: \"f891af55-ec46-4261-9f5e-01a1c181f194\") " pod="openstack/ceilometer-0" Jan 21 07:21:18 crc kubenswrapper[4893]: I0121 07:21:18.249702 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nz5lg\" (UniqueName: \"kubernetes.io/projected/f891af55-ec46-4261-9f5e-01a1c181f194-kube-api-access-nz5lg\") pod \"ceilometer-0\" (UID: \"f891af55-ec46-4261-9f5e-01a1c181f194\") " pod="openstack/ceilometer-0" Jan 21 07:21:18 crc kubenswrapper[4893]: I0121 07:21:18.249767 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f891af55-ec46-4261-9f5e-01a1c181f194-scripts\") pod \"ceilometer-0\" (UID: \"f891af55-ec46-4261-9f5e-01a1c181f194\") " pod="openstack/ceilometer-0" Jan 21 07:21:18 crc kubenswrapper[4893]: I0121 07:21:18.249803 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f891af55-ec46-4261-9f5e-01a1c181f194-config-data\") pod \"ceilometer-0\" (UID: \"f891af55-ec46-4261-9f5e-01a1c181f194\") " pod="openstack/ceilometer-0" Jan 21 07:21:18 crc kubenswrapper[4893]: I0121 07:21:18.249870 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f891af55-ec46-4261-9f5e-01a1c181f194-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f891af55-ec46-4261-9f5e-01a1c181f194\") " pod="openstack/ceilometer-0" Jan 21 07:21:18 crc kubenswrapper[4893]: I0121 07:21:18.250002 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f891af55-ec46-4261-9f5e-01a1c181f194-log-httpd\") pod \"ceilometer-0\" (UID: \"f891af55-ec46-4261-9f5e-01a1c181f194\") " pod="openstack/ceilometer-0" Jan 21 07:21:18 crc kubenswrapper[4893]: I0121 07:21:18.250067 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f891af55-ec46-4261-9f5e-01a1c181f194-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f891af55-ec46-4261-9f5e-01a1c181f194\") " pod="openstack/ceilometer-0" Jan 21 07:21:18 crc kubenswrapper[4893]: I0121 07:21:18.251038 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f891af55-ec46-4261-9f5e-01a1c181f194-run-httpd\") pod \"ceilometer-0\" (UID: \"f891af55-ec46-4261-9f5e-01a1c181f194\") " pod="openstack/ceilometer-0" Jan 21 07:21:18 crc kubenswrapper[4893]: I0121 07:21:18.251097 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f891af55-ec46-4261-9f5e-01a1c181f194-log-httpd\") pod \"ceilometer-0\" (UID: \"f891af55-ec46-4261-9f5e-01a1c181f194\") " pod="openstack/ceilometer-0" Jan 21 07:21:18 crc kubenswrapper[4893]: I0121 07:21:18.255576 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f891af55-ec46-4261-9f5e-01a1c181f194-scripts\") pod \"ceilometer-0\" (UID: \"f891af55-ec46-4261-9f5e-01a1c181f194\") " pod="openstack/ceilometer-0" Jan 21 07:21:18 crc kubenswrapper[4893]: I0121 07:21:18.255582 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f891af55-ec46-4261-9f5e-01a1c181f194-config-data\") pod \"ceilometer-0\" (UID: \"f891af55-ec46-4261-9f5e-01a1c181f194\") " pod="openstack/ceilometer-0" Jan 21 07:21:18 crc kubenswrapper[4893]: I0121 07:21:18.257858 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f891af55-ec46-4261-9f5e-01a1c181f194-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f891af55-ec46-4261-9f5e-01a1c181f194\") " pod="openstack/ceilometer-0" Jan 21 07:21:18 crc kubenswrapper[4893]: I0121 07:21:18.267881 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f891af55-ec46-4261-9f5e-01a1c181f194-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f891af55-ec46-4261-9f5e-01a1c181f194\") " pod="openstack/ceilometer-0" Jan 21 07:21:18 crc kubenswrapper[4893]: I0121 07:21:18.268394 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f891af55-ec46-4261-9f5e-01a1c181f194-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f891af55-ec46-4261-9f5e-01a1c181f194\") " pod="openstack/ceilometer-0" Jan 21 07:21:18 crc kubenswrapper[4893]: I0121 07:21:18.269492 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nz5lg\" (UniqueName: \"kubernetes.io/projected/f891af55-ec46-4261-9f5e-01a1c181f194-kube-api-access-nz5lg\") pod \"ceilometer-0\" (UID: \"f891af55-ec46-4261-9f5e-01a1c181f194\") " pod="openstack/ceilometer-0" Jan 21 07:21:18 crc kubenswrapper[4893]: I0121 07:21:18.392858 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 07:21:18 crc kubenswrapper[4893]: I0121 07:21:18.919327 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 07:21:18 crc kubenswrapper[4893]: I0121 07:21:18.979808 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f891af55-ec46-4261-9f5e-01a1c181f194","Type":"ContainerStarted","Data":"857ec3e5a04525e198441dbcc5bc0eacf93e717d133a810829ca49fe04c84bc4"} Jan 21 07:21:18 crc kubenswrapper[4893]: I0121 07:21:18.984244 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-njscl" podUID="6d8501b8-ba37-482b-b4b6-3a1190c790e6" containerName="registry-server" containerID="cri-o://378d8fee20fdd796010e0a44dd1f21783fda9dad206fe777c99b3a82a1e182f8" gracePeriod=2 Jan 21 07:21:19 crc kubenswrapper[4893]: I0121 07:21:19.630236 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d40a1e3b-d787-49c9-b719-7c204a6e5ec8" path="/var/lib/kubelet/pods/d40a1e3b-d787-49c9-b719-7c204a6e5ec8/volumes" Jan 21 07:21:19 crc kubenswrapper[4893]: I0121 07:21:19.661921 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 07:21:19 crc kubenswrapper[4893]: I0121 07:21:19.662613 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 07:21:19 crc kubenswrapper[4893]: I0121 07:21:19.852827 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-njscl" Jan 21 07:21:19 crc kubenswrapper[4893]: I0121 07:21:19.995874 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlbhf\" (UniqueName: \"kubernetes.io/projected/6d8501b8-ba37-482b-b4b6-3a1190c790e6-kube-api-access-tlbhf\") pod \"6d8501b8-ba37-482b-b4b6-3a1190c790e6\" (UID: \"6d8501b8-ba37-482b-b4b6-3a1190c790e6\") " Jan 21 07:21:19 crc kubenswrapper[4893]: I0121 07:21:19.996000 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d8501b8-ba37-482b-b4b6-3a1190c790e6-utilities\") pod \"6d8501b8-ba37-482b-b4b6-3a1190c790e6\" (UID: \"6d8501b8-ba37-482b-b4b6-3a1190c790e6\") " Jan 21 07:21:19 crc kubenswrapper[4893]: I0121 07:21:19.996072 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d8501b8-ba37-482b-b4b6-3a1190c790e6-catalog-content\") pod \"6d8501b8-ba37-482b-b4b6-3a1190c790e6\" (UID: \"6d8501b8-ba37-482b-b4b6-3a1190c790e6\") " Jan 21 07:21:19 crc kubenswrapper[4893]: I0121 07:21:19.997184 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d8501b8-ba37-482b-b4b6-3a1190c790e6-utilities" (OuterVolumeSpecName: "utilities") pod "6d8501b8-ba37-482b-b4b6-3a1190c790e6" (UID: "6d8501b8-ba37-482b-b4b6-3a1190c790e6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:21:19 crc kubenswrapper[4893]: I0121 07:21:19.998665 4893 generic.go:334] "Generic (PLEG): container finished" podID="6d8501b8-ba37-482b-b4b6-3a1190c790e6" containerID="378d8fee20fdd796010e0a44dd1f21783fda9dad206fe777c99b3a82a1e182f8" exitCode=0 Jan 21 07:21:19 crc kubenswrapper[4893]: I0121 07:21:19.998810 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-njscl" Jan 21 07:21:19 crc kubenswrapper[4893]: I0121 07:21:19.998873 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-njscl" event={"ID":"6d8501b8-ba37-482b-b4b6-3a1190c790e6","Type":"ContainerDied","Data":"378d8fee20fdd796010e0a44dd1f21783fda9dad206fe777c99b3a82a1e182f8"} Jan 21 07:21:19 crc kubenswrapper[4893]: I0121 07:21:19.998907 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-njscl" event={"ID":"6d8501b8-ba37-482b-b4b6-3a1190c790e6","Type":"ContainerDied","Data":"6a94d86bba40938d19b3b1a5161995c14e62589577f427b547197bd852b66d94"} Jan 21 07:21:19 crc kubenswrapper[4893]: I0121 07:21:19.998927 4893 scope.go:117] "RemoveContainer" containerID="378d8fee20fdd796010e0a44dd1f21783fda9dad206fe777c99b3a82a1e182f8" Jan 21 07:21:20 crc kubenswrapper[4893]: I0121 07:21:20.004073 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d8501b8-ba37-482b-b4b6-3a1190c790e6-kube-api-access-tlbhf" (OuterVolumeSpecName: "kube-api-access-tlbhf") pod "6d8501b8-ba37-482b-b4b6-3a1190c790e6" (UID: "6d8501b8-ba37-482b-b4b6-3a1190c790e6"). InnerVolumeSpecName "kube-api-access-tlbhf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:21:20 crc kubenswrapper[4893]: I0121 07:21:20.019769 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d8501b8-ba37-482b-b4b6-3a1190c790e6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6d8501b8-ba37-482b-b4b6-3a1190c790e6" (UID: "6d8501b8-ba37-482b-b4b6-3a1190c790e6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:21:20 crc kubenswrapper[4893]: I0121 07:21:20.054586 4893 scope.go:117] "RemoveContainer" containerID="84ea2aca9bba883fbc155fa43351c7c674b925d1882ef32a530a5fdba699ad52" Jan 21 07:21:20 crc kubenswrapper[4893]: I0121 07:21:20.098349 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tlbhf\" (UniqueName: \"kubernetes.io/projected/6d8501b8-ba37-482b-b4b6-3a1190c790e6-kube-api-access-tlbhf\") on node \"crc\" DevicePath \"\"" Jan 21 07:21:20 crc kubenswrapper[4893]: I0121 07:21:20.098381 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d8501b8-ba37-482b-b4b6-3a1190c790e6-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 07:21:20 crc kubenswrapper[4893]: I0121 07:21:20.098390 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d8501b8-ba37-482b-b4b6-3a1190c790e6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 07:21:20 crc kubenswrapper[4893]: I0121 07:21:20.156565 4893 scope.go:117] "RemoveContainer" containerID="aa60f2af65d9add1148425fc0dffec725fad7887e37355d7311022c73af7531c" Jan 21 07:21:20 crc kubenswrapper[4893]: I0121 07:21:20.236427 4893 scope.go:117] "RemoveContainer" containerID="378d8fee20fdd796010e0a44dd1f21783fda9dad206fe777c99b3a82a1e182f8" Jan 21 07:21:20 crc kubenswrapper[4893]: E0121 07:21:20.237030 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"378d8fee20fdd796010e0a44dd1f21783fda9dad206fe777c99b3a82a1e182f8\": container with ID starting with 378d8fee20fdd796010e0a44dd1f21783fda9dad206fe777c99b3a82a1e182f8 not found: ID does not exist" containerID="378d8fee20fdd796010e0a44dd1f21783fda9dad206fe777c99b3a82a1e182f8" Jan 21 07:21:20 crc kubenswrapper[4893]: I0121 07:21:20.237092 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"378d8fee20fdd796010e0a44dd1f21783fda9dad206fe777c99b3a82a1e182f8"} err="failed to get container status \"378d8fee20fdd796010e0a44dd1f21783fda9dad206fe777c99b3a82a1e182f8\": rpc error: code = NotFound desc = could not find container \"378d8fee20fdd796010e0a44dd1f21783fda9dad206fe777c99b3a82a1e182f8\": container with ID starting with 378d8fee20fdd796010e0a44dd1f21783fda9dad206fe777c99b3a82a1e182f8 not found: ID does not exist" Jan 21 07:21:20 crc kubenswrapper[4893]: I0121 07:21:20.237133 4893 scope.go:117] "RemoveContainer" containerID="84ea2aca9bba883fbc155fa43351c7c674b925d1882ef32a530a5fdba699ad52" Jan 21 07:21:20 crc kubenswrapper[4893]: E0121 07:21:20.238193 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84ea2aca9bba883fbc155fa43351c7c674b925d1882ef32a530a5fdba699ad52\": container with ID starting with 84ea2aca9bba883fbc155fa43351c7c674b925d1882ef32a530a5fdba699ad52 not found: ID does not exist" containerID="84ea2aca9bba883fbc155fa43351c7c674b925d1882ef32a530a5fdba699ad52" Jan 
21 07:21:20 crc kubenswrapper[4893]: I0121 07:21:20.238242 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84ea2aca9bba883fbc155fa43351c7c674b925d1882ef32a530a5fdba699ad52"} err="failed to get container status \"84ea2aca9bba883fbc155fa43351c7c674b925d1882ef32a530a5fdba699ad52\": rpc error: code = NotFound desc = could not find container \"84ea2aca9bba883fbc155fa43351c7c674b925d1882ef32a530a5fdba699ad52\": container with ID starting with 84ea2aca9bba883fbc155fa43351c7c674b925d1882ef32a530a5fdba699ad52 not found: ID does not exist" Jan 21 07:21:20 crc kubenswrapper[4893]: I0121 07:21:20.238302 4893 scope.go:117] "RemoveContainer" containerID="aa60f2af65d9add1148425fc0dffec725fad7887e37355d7311022c73af7531c" Jan 21 07:21:20 crc kubenswrapper[4893]: E0121 07:21:20.238731 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa60f2af65d9add1148425fc0dffec725fad7887e37355d7311022c73af7531c\": container with ID starting with aa60f2af65d9add1148425fc0dffec725fad7887e37355d7311022c73af7531c not found: ID does not exist" containerID="aa60f2af65d9add1148425fc0dffec725fad7887e37355d7311022c73af7531c" Jan 21 07:21:20 crc kubenswrapper[4893]: I0121 07:21:20.238778 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa60f2af65d9add1148425fc0dffec725fad7887e37355d7311022c73af7531c"} err="failed to get container status \"aa60f2af65d9add1148425fc0dffec725fad7887e37355d7311022c73af7531c\": rpc error: code = NotFound desc = could not find container \"aa60f2af65d9add1148425fc0dffec725fad7887e37355d7311022c73af7531c\": container with ID starting with aa60f2af65d9add1148425fc0dffec725fad7887e37355d7311022c73af7531c not found: ID does not exist" Jan 21 07:21:20 crc kubenswrapper[4893]: I0121 07:21:20.375439 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-njscl"] Jan 21 07:21:20 crc kubenswrapper[4893]: I0121 07:21:20.393373 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-njscl"] Jan 21 07:21:20 crc kubenswrapper[4893]: I0121 07:21:20.679949 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c0ade42b-b725-47f4-843d-7d71669c77b7" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.202:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 07:21:20 crc kubenswrapper[4893]: I0121 07:21:20.680009 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c0ade42b-b725-47f4-843d-7d71669c77b7" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.202:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 07:21:21 crc kubenswrapper[4893]: I0121 07:21:21.025190 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f891af55-ec46-4261-9f5e-01a1c181f194","Type":"ContainerStarted","Data":"3bb1f3a7d2d6b35737c02944ddbf53eb946eb7cc400a59439dbd01bed9d2650a"} Jan 21 07:21:21 crc kubenswrapper[4893]: I0121 07:21:21.025540 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f891af55-ec46-4261-9f5e-01a1c181f194","Type":"ContainerStarted","Data":"ad34c1fe2a091616f40d2e25e67f46102e054ffaf965ca71fc5193bf96e1733d"} Jan 21 07:21:21 crc kubenswrapper[4893]: I0121 
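
Note: the three E0121 "ContainerStatus from runtime service failed" / "DeleteContainer returned error" pairs above are a benign race: cri-o had already removed those containers, so the status query answers gRPC NotFound and the kubelet just logs it and moves on. Treating NotFound as success is what makes repeated removal idempotent; a sketch of that pattern (assumed, not kubelet source):

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// ignoreNotFound makes container removal idempotent: if the runtime reports
// NotFound, the container is already gone and the delete has nothing to do.
func ignoreNotFound(err error) error {
	if status.Code(err) == codes.NotFound {
		return nil
	}
	return err
}

func main() {
	err := status.Error(codes.NotFound, "could not find container")
	fmt.Println(ignoreNotFound(err)) // <nil>
}
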
Jan 21 07:21:23 crc kubenswrapper[4893]: I0121 07:21:23.048430 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f891af55-ec46-4261-9f5e-01a1c181f194","Type":"ContainerStarted","Data":"dd3970844ff87242006efef7f85a07b1307bc7fcf1c2b53f8a03f6f42dcb3a60"}
Jan 21 07:21:23 crc kubenswrapper[4893]: I0121 07:21:23.050444 4893 generic.go:334] "Generic (PLEG): container finished" podID="fea804eb-300b-451e-aa99-99ff7ed06070" containerID="88db4e29765c55f1a72698cb0b6f216ca1cb80f3ab802906082f781183257c89" exitCode=0
Jan 21 07:21:23 crc kubenswrapper[4893]: I0121 07:21:23.050491 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-njghj" event={"ID":"fea804eb-300b-451e-aa99-99ff7ed06070","Type":"ContainerDied","Data":"88db4e29765c55f1a72698cb0b6f216ca1cb80f3ab802906082f781183257c89"}
Jan 21 07:21:24 crc kubenswrapper[4893]: I0121 07:21:24.063217 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f891af55-ec46-4261-9f5e-01a1c181f194","Type":"ContainerStarted","Data":"3a24623a75e32ef570ca14893ccdb6089f419296939cd0c5276caec748921d6e"}
Jan 21 07:21:24 crc kubenswrapper[4893]: I0121 07:21:24.111769 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.371262573 podStartE2EDuration="7.111740038s" podCreationTimestamp="2026-01-21 07:21:17 +0000 UTC" firstStartedPulling="2026-01-21 07:21:18.929837393 +0000 UTC m=+1620.160183295" lastFinishedPulling="2026-01-21 07:21:23.670314858 +0000 UTC m=+1624.900660760" observedRunningTime="2026-01-21 07:21:24.102879573 +0000 UTC m=+1625.333225475" watchObservedRunningTime="2026-01-21 07:21:24.111740038 +0000 UTC m=+1625.342085940"
Jan 21 07:21:24 crc kubenswrapper[4893]: I0121 07:21:24.463890 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-njghj"
Jan 21 07:21:24 crc kubenswrapper[4893]: I0121 07:21:24.583064 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fea804eb-300b-451e-aa99-99ff7ed06070-config-data\") pod \"fea804eb-300b-451e-aa99-99ff7ed06070\" (UID: \"fea804eb-300b-451e-aa99-99ff7ed06070\") "
Jan 21 07:21:24 crc kubenswrapper[4893]: I0121 07:21:24.583280 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fea804eb-300b-451e-aa99-99ff7ed06070-combined-ca-bundle\") pod \"fea804eb-300b-451e-aa99-99ff7ed06070\" (UID: \"fea804eb-300b-451e-aa99-99ff7ed06070\") "
Jan 21 07:21:24 crc kubenswrapper[4893]: I0121 07:21:24.583395 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z2kj\" (UniqueName: \"kubernetes.io/projected/fea804eb-300b-451e-aa99-99ff7ed06070-kube-api-access-9z2kj\") pod \"fea804eb-300b-451e-aa99-99ff7ed06070\" (UID: \"fea804eb-300b-451e-aa99-99ff7ed06070\") "
Jan 21 07:21:24 crc kubenswrapper[4893]: I0121 07:21:24.583468 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fea804eb-300b-451e-aa99-99ff7ed06070-scripts\") pod \"fea804eb-300b-451e-aa99-99ff7ed06070\" (UID: \"fea804eb-300b-451e-aa99-99ff7ed06070\") "
Jan 21 07:21:24 crc kubenswrapper[4893]: I0121 07:21:24.590386 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fea804eb-300b-451e-aa99-99ff7ed06070-kube-api-access-9z2kj" (OuterVolumeSpecName: "kube-api-access-9z2kj") pod "fea804eb-300b-451e-aa99-99ff7ed06070" (UID: "fea804eb-300b-451e-aa99-99ff7ed06070"). InnerVolumeSpecName "kube-api-access-9z2kj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 07:21:24 crc kubenswrapper[4893]: I0121 07:21:24.591289 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fea804eb-300b-451e-aa99-99ff7ed06070-scripts" (OuterVolumeSpecName: "scripts") pod "fea804eb-300b-451e-aa99-99ff7ed06070" (UID: "fea804eb-300b-451e-aa99-99ff7ed06070"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 07:21:24 crc kubenswrapper[4893]: I0121 07:21:24.618695 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fea804eb-300b-451e-aa99-99ff7ed06070-config-data" (OuterVolumeSpecName: "config-data") pod "fea804eb-300b-451e-aa99-99ff7ed06070" (UID: "fea804eb-300b-451e-aa99-99ff7ed06070"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 07:21:24 crc kubenswrapper[4893]: I0121 07:21:24.627748 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fea804eb-300b-451e-aa99-99ff7ed06070-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fea804eb-300b-451e-aa99-99ff7ed06070" (UID: "fea804eb-300b-451e-aa99-99ff7ed06070"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 07:21:24 crc kubenswrapper[4893]: I0121 07:21:24.687690 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fea804eb-300b-451e-aa99-99ff7ed06070-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 07:21:24 crc kubenswrapper[4893]: I0121 07:21:24.687952 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fea804eb-300b-451e-aa99-99ff7ed06070-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 07:21:24 crc kubenswrapper[4893]: I0121 07:21:24.688023 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9z2kj\" (UniqueName: \"kubernetes.io/projected/fea804eb-300b-451e-aa99-99ff7ed06070-kube-api-access-9z2kj\") on node \"crc\" DevicePath \"\""
Jan 21 07:21:24 crc kubenswrapper[4893]: I0121 07:21:24.688083 4893 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fea804eb-300b-451e-aa99-99ff7ed06070-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 07:21:25 crc kubenswrapper[4893]: I0121 07:21:25.079695 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-njghj" event={"ID":"fea804eb-300b-451e-aa99-99ff7ed06070","Type":"ContainerDied","Data":"ba990b699b2b60af11cc880e2d45fa9ed408c8bcc6faf3f01ecc966b1119859b"}
Jan 21 07:21:25 crc kubenswrapper[4893]: I0121 07:21:25.079789 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba990b699b2b60af11cc880e2d45fa9ed408c8bcc6faf3f01ecc966b1119859b"
Jan 21 07:21:25 crc kubenswrapper[4893]: I0121 07:21:25.079719 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-njghj"
Jan 21 07:21:25 crc kubenswrapper[4893]: I0121 07:21:25.083180 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 21 07:21:25 crc kubenswrapper[4893]: I0121 07:21:25.313247 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 21 07:21:25 crc kubenswrapper[4893]: I0121 07:21:25.313585 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="134a93ab-9a65-43e9-bcf2-6fa5bd4105aa" containerName="nova-scheduler-scheduler" containerID="cri-o://2f8d601c80609af7c2a8d37becbf5eaa0cb544aa464accd51d9fd804c09840f5" gracePeriod=30
Jan 21 07:21:25 crc kubenswrapper[4893]: I0121 07:21:25.332885 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 21 07:21:25 crc kubenswrapper[4893]: I0121 07:21:25.333196 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c0ade42b-b725-47f4-843d-7d71669c77b7" containerName="nova-api-log" containerID="cri-o://8b68dcf0d335b5546ef5cff834318c8f6bbed2ae7bbd6240453d9055a736498b" gracePeriod=30
Jan 21 07:21:25 crc kubenswrapper[4893]: I0121 07:21:25.333386 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c0ade42b-b725-47f4-843d-7d71669c77b7" containerName="nova-api-api" containerID="cri-o://d946e1675bc6f5d187d1983f4b22d85530d52143ae89943298f5bc1e4aeed834" gracePeriod=30
Jan 21 07:21:25 crc kubenswrapper[4893]: I0121 07:21:25.359119 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 21 07:21:25 crc kubenswrapper[4893]: I0121 07:21:25.359398 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="389da817-faf8-4ae5-87c7-baa855b6dbfd" containerName="nova-metadata-log" containerID="cri-o://2c2c02392de0b0a37af88f2c56d0d651f8087c548af9e04ca796d743db6bb733" gracePeriod=30
Jan 21 07:21:25 crc kubenswrapper[4893]: I0121 07:21:25.359595 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="389da817-faf8-4ae5-87c7-baa855b6dbfd" containerName="nova-metadata-metadata" containerID="cri-o://464e83f2ee6ddc904cb0c2f30a2e9e9ff5e51b54b0dc10182aa679870dcce8ad" gracePeriod=30
Jan 21 07:21:26 crc kubenswrapper[4893]: I0121 07:21:26.092033 4893 generic.go:334] "Generic (PLEG): container finished" podID="389da817-faf8-4ae5-87c7-baa855b6dbfd" containerID="2c2c02392de0b0a37af88f2c56d0d651f8087c548af9e04ca796d743db6bb733" exitCode=143
Jan 21 07:21:26 crc kubenswrapper[4893]: I0121 07:21:26.092351 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"389da817-faf8-4ae5-87c7-baa855b6dbfd","Type":"ContainerDied","Data":"2c2c02392de0b0a37af88f2c56d0d651f8087c548af9e04ca796d743db6bb733"}
Jan 21 07:21:26 crc kubenswrapper[4893]: I0121 07:21:26.094280 4893 generic.go:334] "Generic (PLEG): container finished" podID="c0ade42b-b725-47f4-843d-7d71669c77b7" containerID="8b68dcf0d335b5546ef5cff834318c8f6bbed2ae7bbd6240453d9055a736498b" exitCode=143
Jan 21 07:21:26 crc kubenswrapper[4893]: I0121 07:21:26.095350 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c0ade42b-b725-47f4-843d-7d71669c77b7","Type":"ContainerDied","Data":"8b68dcf0d335b5546ef5cff834318c8f6bbed2ae7bbd6240453d9055a736498b"}
Jan 21 07:21:28 crc kubenswrapper[4893]: I0121 07:21:28.128302 4893 generic.go:334] "Generic (PLEG): container finished" podID="134a93ab-9a65-43e9-bcf2-6fa5bd4105aa" containerID="2f8d601c80609af7c2a8d37becbf5eaa0cb544aa464accd51d9fd804c09840f5" exitCode=0
Jan 21 07:21:28 crc kubenswrapper[4893]: I0121 07:21:28.128874 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"134a93ab-9a65-43e9-bcf2-6fa5bd4105aa","Type":"ContainerDied","Data":"2f8d601c80609af7c2a8d37becbf5eaa0cb544aa464accd51d9fd804c09840f5"}
Jan 21 07:21:28 crc kubenswrapper[4893]: I0121 07:21:28.432393 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 21 07:21:28 crc kubenswrapper[4893]: I0121 07:21:28.545854 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsxrp\" (UniqueName: \"kubernetes.io/projected/134a93ab-9a65-43e9-bcf2-6fa5bd4105aa-kube-api-access-wsxrp\") pod \"134a93ab-9a65-43e9-bcf2-6fa5bd4105aa\" (UID: \"134a93ab-9a65-43e9-bcf2-6fa5bd4105aa\") "
Jan 21 07:21:28 crc kubenswrapper[4893]: I0121 07:21:28.545961 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/134a93ab-9a65-43e9-bcf2-6fa5bd4105aa-config-data\") pod \"134a93ab-9a65-43e9-bcf2-6fa5bd4105aa\" (UID: \"134a93ab-9a65-43e9-bcf2-6fa5bd4105aa\") "
Jan 21 07:21:28 crc kubenswrapper[4893]: I0121 07:21:28.546031 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/134a93ab-9a65-43e9-bcf2-6fa5bd4105aa-combined-ca-bundle\") pod \"134a93ab-9a65-43e9-bcf2-6fa5bd4105aa\" (UID: \"134a93ab-9a65-43e9-bcf2-6fa5bd4105aa\") "
Jan 21 07:21:28 crc kubenswrapper[4893]: I0121 07:21:28.552883 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/134a93ab-9a65-43e9-bcf2-6fa5bd4105aa-kube-api-access-wsxrp" (OuterVolumeSpecName: "kube-api-access-wsxrp") pod "134a93ab-9a65-43e9-bcf2-6fa5bd4105aa" (UID: "134a93ab-9a65-43e9-bcf2-6fa5bd4105aa"). InnerVolumeSpecName "kube-api-access-wsxrp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 07:21:28 crc kubenswrapper[4893]: I0121 07:21:28.584576 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/134a93ab-9a65-43e9-bcf2-6fa5bd4105aa-config-data" (OuterVolumeSpecName: "config-data") pod "134a93ab-9a65-43e9-bcf2-6fa5bd4105aa" (UID: "134a93ab-9a65-43e9-bcf2-6fa5bd4105aa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 07:21:28 crc kubenswrapper[4893]: I0121 07:21:28.590103 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/134a93ab-9a65-43e9-bcf2-6fa5bd4105aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "134a93ab-9a65-43e9-bcf2-6fa5bd4105aa" (UID: "134a93ab-9a65-43e9-bcf2-6fa5bd4105aa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 07:21:28 crc kubenswrapper[4893]: I0121 07:21:28.648049 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/134a93ab-9a65-43e9-bcf2-6fa5bd4105aa-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 07:21:28 crc kubenswrapper[4893]: I0121 07:21:28.648096 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/134a93ab-9a65-43e9-bcf2-6fa5bd4105aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 07:21:28 crc kubenswrapper[4893]: I0121 07:21:28.648112 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsxrp\" (UniqueName: \"kubernetes.io/projected/134a93ab-9a65-43e9-bcf2-6fa5bd4105aa-kube-api-access-wsxrp\") on node \"crc\" DevicePath \"\""
Jan 21 07:21:28 crc kubenswrapper[4893]: I0121 07:21:28.650110 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="389da817-faf8-4ae5-87c7-baa855b6dbfd" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": read tcp 10.217.0.2:46996->10.217.0.193:8775: read: connection reset by peer"
Jan 21 07:21:28 crc kubenswrapper[4893]: I0121 07:21:28.650122 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="389da817-faf8-4ae5-87c7-baa855b6dbfd" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": read tcp 10.217.0.2:46982->10.217.0.193:8775: read: connection reset by peer"
Jan 21 07:21:28 crc kubenswrapper[4893]: I0121 07:21:28.656387 4893 patch_prober.go:28] interesting pod/machine-config-daemon-hg78p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 07:21:28 crc kubenswrapper[4893]: I0121 07:21:28.656437 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.161943 4893 generic.go:334] "Generic (PLEG): container finished" podID="c0ade42b-b725-47f4-843d-7d71669c77b7" containerID="d946e1675bc6f5d187d1983f4b22d85530d52143ae89943298f5bc1e4aeed834" exitCode=0
Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.162224 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c0ade42b-b725-47f4-843d-7d71669c77b7","Type":"ContainerDied","Data":"d946e1675bc6f5d187d1983f4b22d85530d52143ae89943298f5bc1e4aeed834"}
Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.164117 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.164130 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"134a93ab-9a65-43e9-bcf2-6fa5bd4105aa","Type":"ContainerDied","Data":"afd7b48e1c6038de9e7c2876f63360cd1ccd1083341ef322075ea12cb8fbeb2f"} Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.164200 4893 scope.go:117] "RemoveContainer" containerID="2f8d601c80609af7c2a8d37becbf5eaa0cb544aa464accd51d9fd804c09840f5" Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.167366 4893 generic.go:334] "Generic (PLEG): container finished" podID="389da817-faf8-4ae5-87c7-baa855b6dbfd" containerID="464e83f2ee6ddc904cb0c2f30a2e9e9ff5e51b54b0dc10182aa679870dcce8ad" exitCode=0 Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.167395 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"389da817-faf8-4ae5-87c7-baa855b6dbfd","Type":"ContainerDied","Data":"464e83f2ee6ddc904cb0c2f30a2e9e9ff5e51b54b0dc10182aa679870dcce8ad"} Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.167414 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"389da817-faf8-4ae5-87c7-baa855b6dbfd","Type":"ContainerDied","Data":"1f82b18367305ca26cc5adc126d7f19bf7fcfef62413bb1760120e683d38facf"} Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.167425 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f82b18367305ca26cc5adc126d7f19bf7fcfef62413bb1760120e683d38facf" Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.185559 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.267926 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.270629 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/389da817-faf8-4ae5-87c7-baa855b6dbfd-logs\") pod \"389da817-faf8-4ae5-87c7-baa855b6dbfd\" (UID: \"389da817-faf8-4ae5-87c7-baa855b6dbfd\") " Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.270818 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/389da817-faf8-4ae5-87c7-baa855b6dbfd-nova-metadata-tls-certs\") pod \"389da817-faf8-4ae5-87c7-baa855b6dbfd\" (UID: \"389da817-faf8-4ae5-87c7-baa855b6dbfd\") " Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.270921 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkd7c\" (UniqueName: \"kubernetes.io/projected/389da817-faf8-4ae5-87c7-baa855b6dbfd-kube-api-access-zkd7c\") pod \"389da817-faf8-4ae5-87c7-baa855b6dbfd\" (UID: \"389da817-faf8-4ae5-87c7-baa855b6dbfd\") " Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.270953 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/389da817-faf8-4ae5-87c7-baa855b6dbfd-combined-ca-bundle\") pod \"389da817-faf8-4ae5-87c7-baa855b6dbfd\" (UID: \"389da817-faf8-4ae5-87c7-baa855b6dbfd\") " Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.270998 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/389da817-faf8-4ae5-87c7-baa855b6dbfd-config-data\") pod \"389da817-faf8-4ae5-87c7-baa855b6dbfd\" (UID: \"389da817-faf8-4ae5-87c7-baa855b6dbfd\") " Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.281376 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/389da817-faf8-4ae5-87c7-baa855b6dbfd-kube-api-access-zkd7c" (OuterVolumeSpecName: "kube-api-access-zkd7c") pod "389da817-faf8-4ae5-87c7-baa855b6dbfd" (UID: "389da817-faf8-4ae5-87c7-baa855b6dbfd"). InnerVolumeSpecName "kube-api-access-zkd7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.281846 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/389da817-faf8-4ae5-87c7-baa855b6dbfd-logs" (OuterVolumeSpecName: "logs") pod "389da817-faf8-4ae5-87c7-baa855b6dbfd" (UID: "389da817-faf8-4ae5-87c7-baa855b6dbfd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.294135 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.306939 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 07:21:29 crc kubenswrapper[4893]: E0121 07:21:29.307656 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d8501b8-ba37-482b-b4b6-3a1190c790e6" containerName="extract-content" Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.307696 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d8501b8-ba37-482b-b4b6-3a1190c790e6" containerName="extract-content" Jan 21 07:21:29 crc kubenswrapper[4893]: E0121 07:21:29.307725 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d8501b8-ba37-482b-b4b6-3a1190c790e6" containerName="registry-server" Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.307732 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d8501b8-ba37-482b-b4b6-3a1190c790e6" containerName="registry-server" Jan 21 07:21:29 crc kubenswrapper[4893]: E0121 07:21:29.307748 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="134a93ab-9a65-43e9-bcf2-6fa5bd4105aa" containerName="nova-scheduler-scheduler" Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.307755 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="134a93ab-9a65-43e9-bcf2-6fa5bd4105aa" containerName="nova-scheduler-scheduler" Jan 21 07:21:29 crc kubenswrapper[4893]: E0121 07:21:29.307777 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="389da817-faf8-4ae5-87c7-baa855b6dbfd" containerName="nova-metadata-log" Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.307783 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="389da817-faf8-4ae5-87c7-baa855b6dbfd" containerName="nova-metadata-log" Jan 21 07:21:29 crc kubenswrapper[4893]: E0121 07:21:29.307812 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d8501b8-ba37-482b-b4b6-3a1190c790e6" containerName="extract-utilities" Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.307819 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d8501b8-ba37-482b-b4b6-3a1190c790e6" containerName="extract-utilities" Jan 21 07:21:29 crc kubenswrapper[4893]: E0121 07:21:29.307837 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fea804eb-300b-451e-aa99-99ff7ed06070" 
containerName="nova-manage" Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.307845 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="fea804eb-300b-451e-aa99-99ff7ed06070" containerName="nova-manage" Jan 21 07:21:29 crc kubenswrapper[4893]: E0121 07:21:29.307864 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="389da817-faf8-4ae5-87c7-baa855b6dbfd" containerName="nova-metadata-metadata" Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.307872 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="389da817-faf8-4ae5-87c7-baa855b6dbfd" containerName="nova-metadata-metadata" Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.308289 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="389da817-faf8-4ae5-87c7-baa855b6dbfd" containerName="nova-metadata-metadata" Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.308323 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d8501b8-ba37-482b-b4b6-3a1190c790e6" containerName="registry-server" Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.308336 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="389da817-faf8-4ae5-87c7-baa855b6dbfd" containerName="nova-metadata-log" Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.308369 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="134a93ab-9a65-43e9-bcf2-6fa5bd4105aa" containerName="nova-scheduler-scheduler" Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.308381 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="fea804eb-300b-451e-aa99-99ff7ed06070" containerName="nova-manage" Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.309390 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.313733 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.316419 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/389da817-faf8-4ae5-87c7-baa855b6dbfd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "389da817-faf8-4ae5-87c7-baa855b6dbfd" (UID: "389da817-faf8-4ae5-87c7-baa855b6dbfd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.328546 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/389da817-faf8-4ae5-87c7-baa855b6dbfd-config-data" (OuterVolumeSpecName: "config-data") pod "389da817-faf8-4ae5-87c7-baa855b6dbfd" (UID: "389da817-faf8-4ae5-87c7-baa855b6dbfd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.347047 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.373775 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkd7c\" (UniqueName: \"kubernetes.io/projected/389da817-faf8-4ae5-87c7-baa855b6dbfd-kube-api-access-zkd7c\") on node \"crc\" DevicePath \"\"" Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.373806 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/389da817-faf8-4ae5-87c7-baa855b6dbfd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.373819 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/389da817-faf8-4ae5-87c7-baa855b6dbfd-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.373830 4893 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/389da817-faf8-4ae5-87c7-baa855b6dbfd-logs\") on node \"crc\" DevicePath \"\"" Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.400189 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.405342 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/389da817-faf8-4ae5-87c7-baa855b6dbfd-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "389da817-faf8-4ae5-87c7-baa855b6dbfd" (UID: "389da817-faf8-4ae5-87c7-baa855b6dbfd"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:21:29 crc kubenswrapper[4893]: E0121 07:21:29.435628 4893 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod134a93ab_9a65_43e9_bcf2_6fa5bd4105aa.slice\": RecentStats: unable to find data in memory cache]" Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.475040 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0ade42b-b725-47f4-843d-7d71669c77b7-config-data\") pod \"c0ade42b-b725-47f4-843d-7d71669c77b7\" (UID: \"c0ade42b-b725-47f4-843d-7d71669c77b7\") " Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.475397 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0ade42b-b725-47f4-843d-7d71669c77b7-combined-ca-bundle\") pod \"c0ade42b-b725-47f4-843d-7d71669c77b7\" (UID: \"c0ade42b-b725-47f4-843d-7d71669c77b7\") " Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.475447 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpcpv\" (UniqueName: \"kubernetes.io/projected/c0ade42b-b725-47f4-843d-7d71669c77b7-kube-api-access-xpcpv\") pod \"c0ade42b-b725-47f4-843d-7d71669c77b7\" (UID: \"c0ade42b-b725-47f4-843d-7d71669c77b7\") " Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.475550 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0ade42b-b725-47f4-843d-7d71669c77b7-logs\") pod \"c0ade42b-b725-47f4-843d-7d71669c77b7\" (UID: \"c0ade42b-b725-47f4-843d-7d71669c77b7\") " Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.475573 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0ade42b-b725-47f4-843d-7d71669c77b7-public-tls-certs\") pod \"c0ade42b-b725-47f4-843d-7d71669c77b7\" (UID: \"c0ade42b-b725-47f4-843d-7d71669c77b7\") " Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.475686 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0ade42b-b725-47f4-843d-7d71669c77b7-internal-tls-certs\") pod \"c0ade42b-b725-47f4-843d-7d71669c77b7\" (UID: \"c0ade42b-b725-47f4-843d-7d71669c77b7\") " Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.475915 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c8d3670-41c0-4649-8a2f-38b090638cac-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9c8d3670-41c0-4649-8a2f-38b090638cac\") " pod="openstack/nova-scheduler-0" Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.475947 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8m2v\" (UniqueName: \"kubernetes.io/projected/9c8d3670-41c0-4649-8a2f-38b090638cac-kube-api-access-r8m2v\") pod \"nova-scheduler-0\" (UID: \"9c8d3670-41c0-4649-8a2f-38b090638cac\") " pod="openstack/nova-scheduler-0" Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.475981 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9c8d3670-41c0-4649-8a2f-38b090638cac-config-data\") pod \"nova-scheduler-0\" (UID: \"9c8d3670-41c0-4649-8a2f-38b090638cac\") " pod="openstack/nova-scheduler-0" Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.476154 4893 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/389da817-faf8-4ae5-87c7-baa855b6dbfd-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.476566 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0ade42b-b725-47f4-843d-7d71669c77b7-logs" (OuterVolumeSpecName: "logs") pod "c0ade42b-b725-47f4-843d-7d71669c77b7" (UID: "c0ade42b-b725-47f4-843d-7d71669c77b7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.479334 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0ade42b-b725-47f4-843d-7d71669c77b7-kube-api-access-xpcpv" (OuterVolumeSpecName: "kube-api-access-xpcpv") pod "c0ade42b-b725-47f4-843d-7d71669c77b7" (UID: "c0ade42b-b725-47f4-843d-7d71669c77b7"). InnerVolumeSpecName "kube-api-access-xpcpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.499377 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0ade42b-b725-47f4-843d-7d71669c77b7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c0ade42b-b725-47f4-843d-7d71669c77b7" (UID: "c0ade42b-b725-47f4-843d-7d71669c77b7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.504984 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0ade42b-b725-47f4-843d-7d71669c77b7-config-data" (OuterVolumeSpecName: "config-data") pod "c0ade42b-b725-47f4-843d-7d71669c77b7" (UID: "c0ade42b-b725-47f4-843d-7d71669c77b7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.524812 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0ade42b-b725-47f4-843d-7d71669c77b7-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c0ade42b-b725-47f4-843d-7d71669c77b7" (UID: "c0ade42b-b725-47f4-843d-7d71669c77b7"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.528344 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0ade42b-b725-47f4-843d-7d71669c77b7-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c0ade42b-b725-47f4-843d-7d71669c77b7" (UID: "c0ade42b-b725-47f4-843d-7d71669c77b7"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.578360 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c8d3670-41c0-4649-8a2f-38b090638cac-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9c8d3670-41c0-4649-8a2f-38b090638cac\") " pod="openstack/nova-scheduler-0" Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.578415 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8m2v\" (UniqueName: \"kubernetes.io/projected/9c8d3670-41c0-4649-8a2f-38b090638cac-kube-api-access-r8m2v\") pod \"nova-scheduler-0\" (UID: \"9c8d3670-41c0-4649-8a2f-38b090638cac\") " pod="openstack/nova-scheduler-0" Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.578449 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c8d3670-41c0-4649-8a2f-38b090638cac-config-data\") pod \"nova-scheduler-0\" (UID: \"9c8d3670-41c0-4649-8a2f-38b090638cac\") " pod="openstack/nova-scheduler-0" Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.578626 4893 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0ade42b-b725-47f4-843d-7d71669c77b7-logs\") on node \"crc\" DevicePath \"\"" Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.578639 4893 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0ade42b-b725-47f4-843d-7d71669c77b7-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.578653 4893 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0ade42b-b725-47f4-843d-7d71669c77b7-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.578665 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0ade42b-b725-47f4-843d-7d71669c77b7-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.578691 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0ade42b-b725-47f4-843d-7d71669c77b7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.578699 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpcpv\" (UniqueName: \"kubernetes.io/projected/c0ade42b-b725-47f4-843d-7d71669c77b7-kube-api-access-xpcpv\") on node \"crc\" DevicePath \"\"" Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.581765 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c8d3670-41c0-4649-8a2f-38b090638cac-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9c8d3670-41c0-4649-8a2f-38b090638cac\") " pod="openstack/nova-scheduler-0" Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.582796 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c8d3670-41c0-4649-8a2f-38b090638cac-config-data\") pod \"nova-scheduler-0\" (UID: \"9c8d3670-41c0-4649-8a2f-38b090638cac\") " pod="openstack/nova-scheduler-0" Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.594154 4893 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="134a93ab-9a65-43e9-bcf2-6fa5bd4105aa" path="/var/lib/kubelet/pods/134a93ab-9a65-43e9-bcf2-6fa5bd4105aa/volumes" Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.601277 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8m2v\" (UniqueName: \"kubernetes.io/projected/9c8d3670-41c0-4649-8a2f-38b090638cac-kube-api-access-r8m2v\") pod \"nova-scheduler-0\" (UID: \"9c8d3670-41c0-4649-8a2f-38b090638cac\") " pod="openstack/nova-scheduler-0" Jan 21 07:21:29 crc kubenswrapper[4893]: I0121 07:21:29.696836 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.178874 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c0ade42b-b725-47f4-843d-7d71669c77b7","Type":"ContainerDied","Data":"e173be69fa64b7d592cb7b0c11b4a90ecae476883ae27e1d13e267378bf8e2ef"} Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.179351 4893 scope.go:117] "RemoveContainer" containerID="d946e1675bc6f5d187d1983f4b22d85530d52143ae89943298f5bc1e4aeed834" Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.178946 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.180879 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.217017 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.225003 4893 scope.go:117] "RemoveContainer" containerID="8b68dcf0d335b5546ef5cff834318c8f6bbed2ae7bbd6240453d9055a736498b" Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.236837 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.253625 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.289303 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.298336 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 07:21:30 crc kubenswrapper[4893]: E0121 07:21:30.298954 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0ade42b-b725-47f4-843d-7d71669c77b7" containerName="nova-api-api" Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.298976 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0ade42b-b725-47f4-843d-7d71669c77b7" containerName="nova-api-api" Jan 21 07:21:30 crc kubenswrapper[4893]: E0121 07:21:30.298992 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0ade42b-b725-47f4-843d-7d71669c77b7" containerName="nova-api-log" Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.298999 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0ade42b-b725-47f4-843d-7d71669c77b7" containerName="nova-api-log" Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.299228 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0ade42b-b725-47f4-843d-7d71669c77b7" containerName="nova-api-log" Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.299270 4893 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="c0ade42b-b725-47f4-843d-7d71669c77b7" containerName="nova-api-api" Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.302049 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.305072 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.306205 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.327458 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 07:21:30 crc kubenswrapper[4893]: W0121 07:21:30.330203 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c8d3670_41c0_4649_8a2f_38b090638cac.slice/crio-26d1d605009c77e7c371db6aede1dd07fa093450374c37ac11b315edc0ce5473 WatchSource:0}: Error finding container 26d1d605009c77e7c371db6aede1dd07fa093450374c37ac11b315edc0ce5473: Status 404 returned error can't find the container with id 26d1d605009c77e7c371db6aede1dd07fa093450374c37ac11b315edc0ce5473 Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.337886 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.339778 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.342587 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.342839 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.342988 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.346431 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.361956 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.404849 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/740cac4e-ecd7-4752-9d29-4adb1a14577b-logs\") pod \"nova-api-0\" (UID: \"740cac4e-ecd7-4752-9d29-4adb1a14577b\") " pod="openstack/nova-api-0" Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.405166 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/740cac4e-ecd7-4752-9d29-4adb1a14577b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"740cac4e-ecd7-4752-9d29-4adb1a14577b\") " pod="openstack/nova-api-0" Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.405193 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6-logs\") pod \"nova-metadata-0\" (UID: \"d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6\") " pod="openstack/nova-metadata-0" Jan 21 07:21:30 crc kubenswrapper[4893]: 
I0121 07:21:30.405230 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/740cac4e-ecd7-4752-9d29-4adb1a14577b-public-tls-certs\") pod \"nova-api-0\" (UID: \"740cac4e-ecd7-4752-9d29-4adb1a14577b\") " pod="openstack/nova-api-0" Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.405332 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6\") " pod="openstack/nova-metadata-0" Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.405363 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/740cac4e-ecd7-4752-9d29-4adb1a14577b-config-data\") pod \"nova-api-0\" (UID: \"740cac4e-ecd7-4752-9d29-4adb1a14577b\") " pod="openstack/nova-api-0" Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.405438 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74wds\" (UniqueName: \"kubernetes.io/projected/d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6-kube-api-access-74wds\") pod \"nova-metadata-0\" (UID: \"d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6\") " pod="openstack/nova-metadata-0" Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.405515 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6vhr\" (UniqueName: \"kubernetes.io/projected/740cac4e-ecd7-4752-9d29-4adb1a14577b-kube-api-access-k6vhr\") pod \"nova-api-0\" (UID: \"740cac4e-ecd7-4752-9d29-4adb1a14577b\") " pod="openstack/nova-api-0" Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.405539 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6\") " pod="openstack/nova-metadata-0" Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.405584 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/740cac4e-ecd7-4752-9d29-4adb1a14577b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"740cac4e-ecd7-4752-9d29-4adb1a14577b\") " pod="openstack/nova-api-0" Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.405614 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6-config-data\") pod \"nova-metadata-0\" (UID: \"d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6\") " pod="openstack/nova-metadata-0" Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.507050 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/740cac4e-ecd7-4752-9d29-4adb1a14577b-config-data\") pod \"nova-api-0\" (UID: \"740cac4e-ecd7-4752-9d29-4adb1a14577b\") " pod="openstack/nova-api-0" Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.507097 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6\") " pod="openstack/nova-metadata-0" Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.507174 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74wds\" (UniqueName: \"kubernetes.io/projected/d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6-kube-api-access-74wds\") pod \"nova-metadata-0\" (UID: \"d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6\") " pod="openstack/nova-metadata-0" Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.507257 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6vhr\" (UniqueName: \"kubernetes.io/projected/740cac4e-ecd7-4752-9d29-4adb1a14577b-kube-api-access-k6vhr\") pod \"nova-api-0\" (UID: \"740cac4e-ecd7-4752-9d29-4adb1a14577b\") " pod="openstack/nova-api-0" Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.507284 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6\") " pod="openstack/nova-metadata-0" Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.507319 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/740cac4e-ecd7-4752-9d29-4adb1a14577b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"740cac4e-ecd7-4752-9d29-4adb1a14577b\") " pod="openstack/nova-api-0" Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.507349 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6-config-data\") pod \"nova-metadata-0\" (UID: \"d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6\") " pod="openstack/nova-metadata-0" Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.507402 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/740cac4e-ecd7-4752-9d29-4adb1a14577b-logs\") pod \"nova-api-0\" (UID: \"740cac4e-ecd7-4752-9d29-4adb1a14577b\") " pod="openstack/nova-api-0" Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.507424 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/740cac4e-ecd7-4752-9d29-4adb1a14577b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"740cac4e-ecd7-4752-9d29-4adb1a14577b\") " pod="openstack/nova-api-0" Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.507446 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6-logs\") pod \"nova-metadata-0\" (UID: \"d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6\") " pod="openstack/nova-metadata-0" Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.507493 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/740cac4e-ecd7-4752-9d29-4adb1a14577b-public-tls-certs\") pod \"nova-api-0\" (UID: \"740cac4e-ecd7-4752-9d29-4adb1a14577b\") " pod="openstack/nova-api-0" Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.508830 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/740cac4e-ecd7-4752-9d29-4adb1a14577b-logs\") pod \"nova-api-0\" (UID: \"740cac4e-ecd7-4752-9d29-4adb1a14577b\") " pod="openstack/nova-api-0" Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.509127 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6-logs\") pod \"nova-metadata-0\" (UID: \"d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6\") " pod="openstack/nova-metadata-0" Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.512237 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/740cac4e-ecd7-4752-9d29-4adb1a14577b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"740cac4e-ecd7-4752-9d29-4adb1a14577b\") " pod="openstack/nova-api-0" Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.512257 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6\") " pod="openstack/nova-metadata-0" Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.512561 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/740cac4e-ecd7-4752-9d29-4adb1a14577b-public-tls-certs\") pod \"nova-api-0\" (UID: \"740cac4e-ecd7-4752-9d29-4adb1a14577b\") " pod="openstack/nova-api-0" Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.513211 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/740cac4e-ecd7-4752-9d29-4adb1a14577b-config-data\") pod \"nova-api-0\" (UID: \"740cac4e-ecd7-4752-9d29-4adb1a14577b\") " pod="openstack/nova-api-0" Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.513691 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/740cac4e-ecd7-4752-9d29-4adb1a14577b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"740cac4e-ecd7-4752-9d29-4adb1a14577b\") " pod="openstack/nova-api-0" Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.520290 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6\") " pod="openstack/nova-metadata-0" Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.523234 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6-config-data\") pod \"nova-metadata-0\" (UID: \"d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6\") " pod="openstack/nova-metadata-0" Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.529949 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74wds\" (UniqueName: \"kubernetes.io/projected/d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6-kube-api-access-74wds\") pod \"nova-metadata-0\" (UID: \"d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6\") " pod="openstack/nova-metadata-0" Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.530579 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6vhr\" (UniqueName: 
\"kubernetes.io/projected/740cac4e-ecd7-4752-9d29-4adb1a14577b-kube-api-access-k6vhr\") pod \"nova-api-0\" (UID: \"740cac4e-ecd7-4752-9d29-4adb1a14577b\") " pod="openstack/nova-api-0" Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.622643 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 07:21:30 crc kubenswrapper[4893]: I0121 07:21:30.802475 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 07:21:31 crc kubenswrapper[4893]: I0121 07:21:31.136476 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 07:21:31 crc kubenswrapper[4893]: I0121 07:21:31.191187 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6","Type":"ContainerStarted","Data":"c270105d5138cf1d34989ae06d55c397a17492ce79c47ca41c8b4386880d4996"} Jan 21 07:21:31 crc kubenswrapper[4893]: I0121 07:21:31.193441 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9c8d3670-41c0-4649-8a2f-38b090638cac","Type":"ContainerStarted","Data":"2d1941830623140962da37a4e0f34493c6b7e19795a6d3b41ff47906cfda9a51"} Jan 21 07:21:31 crc kubenswrapper[4893]: I0121 07:21:31.193470 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9c8d3670-41c0-4649-8a2f-38b090638cac","Type":"ContainerStarted","Data":"26d1d605009c77e7c371db6aede1dd07fa093450374c37ac11b315edc0ce5473"} Jan 21 07:21:31 crc kubenswrapper[4893]: I0121 07:21:31.216593 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.216539497 podStartE2EDuration="2.216539497s" podCreationTimestamp="2026-01-21 07:21:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:21:31.210448381 +0000 UTC m=+1632.440794293" watchObservedRunningTime="2026-01-21 07:21:31.216539497 +0000 UTC m=+1632.446885409" Jan 21 07:21:31 crc kubenswrapper[4893]: I0121 07:21:31.298886 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 07:21:31 crc kubenswrapper[4893]: I0121 07:21:31.591091 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="389da817-faf8-4ae5-87c7-baa855b6dbfd" path="/var/lib/kubelet/pods/389da817-faf8-4ae5-87c7-baa855b6dbfd/volumes" Jan 21 07:21:31 crc kubenswrapper[4893]: I0121 07:21:31.592191 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0ade42b-b725-47f4-843d-7d71669c77b7" path="/var/lib/kubelet/pods/c0ade42b-b725-47f4-843d-7d71669c77b7/volumes" Jan 21 07:21:32 crc kubenswrapper[4893]: I0121 07:21:32.203462 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"740cac4e-ecd7-4752-9d29-4adb1a14577b","Type":"ContainerStarted","Data":"31c581f13004a2f7815d44365eb034baed7c66ac483f7fa7c22317077d696c9a"} Jan 21 07:21:32 crc kubenswrapper[4893]: I0121 07:21:32.203807 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"740cac4e-ecd7-4752-9d29-4adb1a14577b","Type":"ContainerStarted","Data":"049e37b8b5a580dac1053dd24aa63d0528098d200b566e60ab78bd88f14de585"} Jan 21 07:21:32 crc kubenswrapper[4893]: I0121 07:21:32.203825 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"740cac4e-ecd7-4752-9d29-4adb1a14577b","Type":"ContainerStarted","Data":"b611d728318d141b8d4e5de7bcb7f46174303f0ac1795abd1a6c6a1a4d220908"} Jan 21 07:21:32 crc kubenswrapper[4893]: I0121 07:21:32.208784 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6","Type":"ContainerStarted","Data":"357413c2169766654a3f84ffb51b7dca2610fa69e2e67bc3239b3491d881ff66"} Jan 21 07:21:32 crc kubenswrapper[4893]: I0121 07:21:32.208872 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6","Type":"ContainerStarted","Data":"e5a2c7e9e8afbc7e00e8fde7ad874e7b56174cc0c7a9869b437318952fda7126"} Jan 21 07:21:32 crc kubenswrapper[4893]: I0121 07:21:32.238260 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.23823579 podStartE2EDuration="2.23823579s" podCreationTimestamp="2026-01-21 07:21:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:21:32.22712291 +0000 UTC m=+1633.457468822" watchObservedRunningTime="2026-01-21 07:21:32.23823579 +0000 UTC m=+1633.468581692" Jan 21 07:21:32 crc kubenswrapper[4893]: I0121 07:21:32.259446 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.259426172 podStartE2EDuration="2.259426172s" podCreationTimestamp="2026-01-21 07:21:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:21:32.248696382 +0000 UTC m=+1633.479042304" watchObservedRunningTime="2026-01-21 07:21:32.259426172 +0000 UTC m=+1633.489772064" Jan 21 07:21:34 crc kubenswrapper[4893]: I0121 07:21:34.697401 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 21 07:21:35 crc kubenswrapper[4893]: I0121 07:21:35.623293 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 07:21:35 crc kubenswrapper[4893]: I0121 07:21:35.625811 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 07:21:39 crc kubenswrapper[4893]: I0121 07:21:39.698090 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 21 07:21:39 crc kubenswrapper[4893]: I0121 07:21:39.737455 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 21 07:21:40 crc kubenswrapper[4893]: I0121 07:21:40.498248 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 21 07:21:40 crc kubenswrapper[4893]: I0121 07:21:40.623359 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 07:21:40 crc kubenswrapper[4893]: I0121 07:21:40.623439 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 07:21:40 crc kubenswrapper[4893]: I0121 07:21:40.804134 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 07:21:40 crc kubenswrapper[4893]: I0121 07:21:40.805277 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 
07:21:41 crc kubenswrapper[4893]: I0121 07:21:41.636041 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 07:21:41 crc kubenswrapper[4893]: I0121 07:21:41.636049 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 07:21:41 crc kubenswrapper[4893]: I0121 07:21:41.818858 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="740cac4e-ecd7-4752-9d29-4adb1a14577b" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 07:21:41 crc kubenswrapper[4893]: I0121 07:21:41.818867 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="740cac4e-ecd7-4752-9d29-4adb1a14577b" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 07:21:48 crc kubenswrapper[4893]: I0121 07:21:48.477149 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 21 07:21:50 crc kubenswrapper[4893]: I0121 07:21:50.630601 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 07:21:50 crc kubenswrapper[4893]: I0121 07:21:50.632960 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 07:21:50 crc kubenswrapper[4893]: I0121 07:21:50.637416 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 07:21:50 crc kubenswrapper[4893]: I0121 07:21:50.811336 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 07:21:50 crc kubenswrapper[4893]: I0121 07:21:50.812563 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 07:21:50 crc kubenswrapper[4893]: I0121 07:21:50.813889 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 07:21:50 crc kubenswrapper[4893]: I0121 07:21:50.821113 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 07:21:51 crc kubenswrapper[4893]: I0121 07:21:51.904645 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 07:21:51 crc kubenswrapper[4893]: I0121 07:21:51.910464 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 07:21:51 crc kubenswrapper[4893]: I0121 07:21:51.910524 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 07:21:58 crc kubenswrapper[4893]: I0121 07:21:58.657584 4893 patch_prober.go:28] interesting pod/machine-config-daemon-hg78p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 07:21:58 crc kubenswrapper[4893]: I0121 07:21:58.658622 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 07:21:58 crc kubenswrapper[4893]: I0121 07:21:58.658794 4893 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" Jan 21 07:21:58 crc kubenswrapper[4893]: I0121 07:21:58.660426 4893 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"325a0207fb4c2ec1fa7e041a8980c7916a769a35b943f6d62d67be9f953dbe2f"} pod="openshift-machine-config-operator/machine-config-daemon-hg78p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 07:21:58 crc kubenswrapper[4893]: I0121 07:21:58.660589 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" containerID="cri-o://325a0207fb4c2ec1fa7e041a8980c7916a769a35b943f6d62d67be9f953dbe2f" gracePeriod=600 Jan 21 07:21:58 crc kubenswrapper[4893]: E0121 07:21:58.935990 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 07:21:58 crc kubenswrapper[4893]: I0121 07:21:58.990926 4893 generic.go:334] "Generic (PLEG): container finished" podID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerID="325a0207fb4c2ec1fa7e041a8980c7916a769a35b943f6d62d67be9f953dbe2f" exitCode=0 Jan 21 07:21:58 crc kubenswrapper[4893]: I0121 07:21:58.991000 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" event={"ID":"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a","Type":"ContainerDied","Data":"325a0207fb4c2ec1fa7e041a8980c7916a769a35b943f6d62d67be9f953dbe2f"} Jan 21 07:21:58 crc kubenswrapper[4893]: I0121 07:21:58.991231 4893 scope.go:117] "RemoveContainer" containerID="65e775d6c7fb2e1ccc5654cabb2b28ac1217a7b4dff2b28de89fd7fcc1b71b03" Jan 21 07:21:58 crc kubenswrapper[4893]: I0121 07:21:58.993582 4893 scope.go:117] "RemoveContainer" containerID="325a0207fb4c2ec1fa7e041a8980c7916a769a35b943f6d62d67be9f953dbe2f" Jan 21 07:21:58 crc kubenswrapper[4893]: E0121 07:21:58.994898 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 07:22:09 crc kubenswrapper[4893]: I0121 07:22:09.603633 4893 scope.go:117] 
"RemoveContainer" containerID="325a0207fb4c2ec1fa7e041a8980c7916a769a35b943f6d62d67be9f953dbe2f" Jan 21 07:22:09 crc kubenswrapper[4893]: E0121 07:22:09.606468 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 07:22:10 crc kubenswrapper[4893]: I0121 07:22:10.967143 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2hvzv"] Jan 21 07:22:10 crc kubenswrapper[4893]: I0121 07:22:10.970216 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2hvzv" Jan 21 07:22:10 crc kubenswrapper[4893]: I0121 07:22:10.984768 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2hvzv"] Jan 21 07:22:11 crc kubenswrapper[4893]: I0121 07:22:11.158650 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggbqk\" (UniqueName: \"kubernetes.io/projected/e544fa30-133c-4728-a8c5-99084bcb4367-kube-api-access-ggbqk\") pod \"certified-operators-2hvzv\" (UID: \"e544fa30-133c-4728-a8c5-99084bcb4367\") " pod="openshift-marketplace/certified-operators-2hvzv" Jan 21 07:22:11 crc kubenswrapper[4893]: I0121 07:22:11.158935 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e544fa30-133c-4728-a8c5-99084bcb4367-catalog-content\") pod \"certified-operators-2hvzv\" (UID: \"e544fa30-133c-4728-a8c5-99084bcb4367\") " pod="openshift-marketplace/certified-operators-2hvzv" Jan 21 07:22:11 crc kubenswrapper[4893]: I0121 07:22:11.159021 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e544fa30-133c-4728-a8c5-99084bcb4367-utilities\") pod \"certified-operators-2hvzv\" (UID: \"e544fa30-133c-4728-a8c5-99084bcb4367\") " pod="openshift-marketplace/certified-operators-2hvzv" Jan 21 07:22:11 crc kubenswrapper[4893]: I0121 07:22:11.260609 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggbqk\" (UniqueName: \"kubernetes.io/projected/e544fa30-133c-4728-a8c5-99084bcb4367-kube-api-access-ggbqk\") pod \"certified-operators-2hvzv\" (UID: \"e544fa30-133c-4728-a8c5-99084bcb4367\") " pod="openshift-marketplace/certified-operators-2hvzv" Jan 21 07:22:11 crc kubenswrapper[4893]: I0121 07:22:11.260800 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e544fa30-133c-4728-a8c5-99084bcb4367-catalog-content\") pod \"certified-operators-2hvzv\" (UID: \"e544fa30-133c-4728-a8c5-99084bcb4367\") " pod="openshift-marketplace/certified-operators-2hvzv" Jan 21 07:22:11 crc kubenswrapper[4893]: I0121 07:22:11.260846 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e544fa30-133c-4728-a8c5-99084bcb4367-utilities\") pod \"certified-operators-2hvzv\" (UID: \"e544fa30-133c-4728-a8c5-99084bcb4367\") " 
pod="openshift-marketplace/certified-operators-2hvzv" Jan 21 07:22:11 crc kubenswrapper[4893]: I0121 07:22:11.261352 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e544fa30-133c-4728-a8c5-99084bcb4367-utilities\") pod \"certified-operators-2hvzv\" (UID: \"e544fa30-133c-4728-a8c5-99084bcb4367\") " pod="openshift-marketplace/certified-operators-2hvzv" Jan 21 07:22:11 crc kubenswrapper[4893]: I0121 07:22:11.261498 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e544fa30-133c-4728-a8c5-99084bcb4367-catalog-content\") pod \"certified-operators-2hvzv\" (UID: \"e544fa30-133c-4728-a8c5-99084bcb4367\") " pod="openshift-marketplace/certified-operators-2hvzv" Jan 21 07:22:11 crc kubenswrapper[4893]: I0121 07:22:11.288912 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggbqk\" (UniqueName: \"kubernetes.io/projected/e544fa30-133c-4728-a8c5-99084bcb4367-kube-api-access-ggbqk\") pod \"certified-operators-2hvzv\" (UID: \"e544fa30-133c-4728-a8c5-99084bcb4367\") " pod="openshift-marketplace/certified-operators-2hvzv" Jan 21 07:22:11 crc kubenswrapper[4893]: I0121 07:22:11.296888 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2hvzv" Jan 21 07:22:11 crc kubenswrapper[4893]: I0121 07:22:11.812041 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2hvzv"] Jan 21 07:22:12 crc kubenswrapper[4893]: I0121 07:22:12.161185 4893 generic.go:334] "Generic (PLEG): container finished" podID="e544fa30-133c-4728-a8c5-99084bcb4367" containerID="54c59f8b4fc31fd57e19b4cca37234c3a621a90067cbc6698ab49c99600e59f8" exitCode=0 Jan 21 07:22:12 crc kubenswrapper[4893]: I0121 07:22:12.161310 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2hvzv" event={"ID":"e544fa30-133c-4728-a8c5-99084bcb4367","Type":"ContainerDied","Data":"54c59f8b4fc31fd57e19b4cca37234c3a621a90067cbc6698ab49c99600e59f8"} Jan 21 07:22:12 crc kubenswrapper[4893]: I0121 07:22:12.161556 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2hvzv" event={"ID":"e544fa30-133c-4728-a8c5-99084bcb4367","Type":"ContainerStarted","Data":"d4aba9439ec65e3f7bb6752d2af7b9ddeef8ba0ff178ef3932bb7a0d47cb3aa8"} Jan 21 07:22:13 crc kubenswrapper[4893]: I0121 07:22:13.181063 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2hvzv" event={"ID":"e544fa30-133c-4728-a8c5-99084bcb4367","Type":"ContainerStarted","Data":"b9f1972dc1bbeb85518ca96f4065df10dd15a0205179796fa3033162eba4306f"} Jan 21 07:22:14 crc kubenswrapper[4893]: I0121 07:22:14.193847 4893 generic.go:334] "Generic (PLEG): container finished" podID="e544fa30-133c-4728-a8c5-99084bcb4367" containerID="b9f1972dc1bbeb85518ca96f4065df10dd15a0205179796fa3033162eba4306f" exitCode=0 Jan 21 07:22:14 crc kubenswrapper[4893]: I0121 07:22:14.194014 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2hvzv" event={"ID":"e544fa30-133c-4728-a8c5-99084bcb4367","Type":"ContainerDied","Data":"b9f1972dc1bbeb85518ca96f4065df10dd15a0205179796fa3033162eba4306f"} Jan 21 07:22:15 crc kubenswrapper[4893]: I0121 07:22:15.207529 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-2hvzv" event={"ID":"e544fa30-133c-4728-a8c5-99084bcb4367","Type":"ContainerStarted","Data":"8a4fce4f9b725e2628eecfe0130d0d88ca8af020d3290640c0a98afd2910bb5d"} Jan 21 07:22:15 crc kubenswrapper[4893]: I0121 07:22:15.246581 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2hvzv" podStartSLOduration=2.799441247 podStartE2EDuration="5.246555667s" podCreationTimestamp="2026-01-21 07:22:10 +0000 UTC" firstStartedPulling="2026-01-21 07:22:12.16288432 +0000 UTC m=+1673.393230232" lastFinishedPulling="2026-01-21 07:22:14.60999872 +0000 UTC m=+1675.840344652" observedRunningTime="2026-01-21 07:22:15.236561129 +0000 UTC m=+1676.466907041" watchObservedRunningTime="2026-01-21 07:22:15.246555667 +0000 UTC m=+1676.476901579" Jan 21 07:22:15 crc kubenswrapper[4893]: I0121 07:22:15.786217 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-j8ttn"] Jan 21 07:22:15 crc kubenswrapper[4893]: I0121 07:22:15.951754 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-j8ttn" Jan 21 07:22:15 crc kubenswrapper[4893]: I0121 07:22:15.970517 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 21 07:22:16 crc kubenswrapper[4893]: I0121 07:22:16.023147 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-j8ttn"] Jan 21 07:22:16 crc kubenswrapper[4893]: I0121 07:22:16.074994 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-5653-account-create-update-ktlq8"] Jan 21 07:22:16 crc kubenswrapper[4893]: I0121 07:22:16.083272 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-5653-account-create-update-ktlq8" Jan 21 07:22:16 crc kubenswrapper[4893]: I0121 07:22:16.085324 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 21 07:22:16 crc kubenswrapper[4893]: I0121 07:22:16.088964 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-5653-account-create-update-ktlq8"] Jan 21 07:22:16 crc kubenswrapper[4893]: I0121 07:22:16.152409 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa419127-b439-47a0-9b9c-535529d4f7d9-operator-scripts\") pod \"barbican-5653-account-create-update-ktlq8\" (UID: \"fa419127-b439-47a0-9b9c-535529d4f7d9\") " pod="openstack/barbican-5653-account-create-update-ktlq8" Jan 21 07:22:16 crc kubenswrapper[4893]: I0121 07:22:16.152468 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl6df\" (UniqueName: \"kubernetes.io/projected/fa419127-b439-47a0-9b9c-535529d4f7d9-kube-api-access-jl6df\") pod \"barbican-5653-account-create-update-ktlq8\" (UID: \"fa419127-b439-47a0-9b9c-535529d4f7d9\") " pod="openstack/barbican-5653-account-create-update-ktlq8" Jan 21 07:22:16 crc kubenswrapper[4893]: I0121 07:22:16.152519 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wqv8\" (UniqueName: \"kubernetes.io/projected/bcd1e197-57ed-4f7c-8be7-b59d0d3e08dc-kube-api-access-2wqv8\") pod \"root-account-create-update-j8ttn\" (UID: \"bcd1e197-57ed-4f7c-8be7-b59d0d3e08dc\") " pod="openstack/root-account-create-update-j8ttn" Jan 21 07:22:16 crc kubenswrapper[4893]: I0121 07:22:16.152560 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcd1e197-57ed-4f7c-8be7-b59d0d3e08dc-operator-scripts\") pod \"root-account-create-update-j8ttn\" (UID: \"bcd1e197-57ed-4f7c-8be7-b59d0d3e08dc\") " pod="openstack/root-account-create-update-j8ttn" Jan 21 07:22:16 crc kubenswrapper[4893]: I0121 07:22:16.169783 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-x5d5s"] Jan 21 07:22:16 crc kubenswrapper[4893]: I0121 07:22:16.189567 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-x5d5s"] Jan 21 07:22:16 crc kubenswrapper[4893]: I0121 07:22:16.214806 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-d3cd-account-create-update-7l8fv"] Jan 21 07:22:16 crc kubenswrapper[4893]: I0121 07:22:16.216149 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-d3cd-account-create-update-7l8fv" Jan 21 07:22:16 crc kubenswrapper[4893]: I0121 07:22:16.228257 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 21 07:22:16 crc kubenswrapper[4893]: I0121 07:22:16.249746 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d3cd-account-create-update-7l8fv"] Jan 21 07:22:16 crc kubenswrapper[4893]: I0121 07:22:16.256132 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa419127-b439-47a0-9b9c-535529d4f7d9-operator-scripts\") pod \"barbican-5653-account-create-update-ktlq8\" (UID: \"fa419127-b439-47a0-9b9c-535529d4f7d9\") " pod="openstack/barbican-5653-account-create-update-ktlq8" Jan 21 07:22:16 crc kubenswrapper[4893]: I0121 07:22:16.256209 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jl6df\" (UniqueName: \"kubernetes.io/projected/fa419127-b439-47a0-9b9c-535529d4f7d9-kube-api-access-jl6df\") pod \"barbican-5653-account-create-update-ktlq8\" (UID: \"fa419127-b439-47a0-9b9c-535529d4f7d9\") " pod="openstack/barbican-5653-account-create-update-ktlq8" Jan 21 07:22:16 crc kubenswrapper[4893]: I0121 07:22:16.256266 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wqv8\" (UniqueName: \"kubernetes.io/projected/bcd1e197-57ed-4f7c-8be7-b59d0d3e08dc-kube-api-access-2wqv8\") pod \"root-account-create-update-j8ttn\" (UID: \"bcd1e197-57ed-4f7c-8be7-b59d0d3e08dc\") " pod="openstack/root-account-create-update-j8ttn" Jan 21 07:22:16 crc kubenswrapper[4893]: I0121 07:22:16.256315 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcd1e197-57ed-4f7c-8be7-b59d0d3e08dc-operator-scripts\") pod \"root-account-create-update-j8ttn\" (UID: \"bcd1e197-57ed-4f7c-8be7-b59d0d3e08dc\") " pod="openstack/root-account-create-update-j8ttn" Jan 21 07:22:16 crc kubenswrapper[4893]: I0121 07:22:16.256340 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wljg\" (UniqueName: \"kubernetes.io/projected/cac277e8-b27c-4412-b25e-0c988c2e5555-kube-api-access-8wljg\") pod \"placement-d3cd-account-create-update-7l8fv\" (UID: \"cac277e8-b27c-4412-b25e-0c988c2e5555\") " pod="openstack/placement-d3cd-account-create-update-7l8fv" Jan 21 07:22:16 crc kubenswrapper[4893]: I0121 07:22:16.256428 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cac277e8-b27c-4412-b25e-0c988c2e5555-operator-scripts\") pod \"placement-d3cd-account-create-update-7l8fv\" (UID: \"cac277e8-b27c-4412-b25e-0c988c2e5555\") " pod="openstack/placement-d3cd-account-create-update-7l8fv" Jan 21 07:22:16 crc kubenswrapper[4893]: I0121 07:22:16.257413 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa419127-b439-47a0-9b9c-535529d4f7d9-operator-scripts\") pod \"barbican-5653-account-create-update-ktlq8\" (UID: \"fa419127-b439-47a0-9b9c-535529d4f7d9\") " pod="openstack/barbican-5653-account-create-update-ktlq8" Jan 21 07:22:16 crc kubenswrapper[4893]: I0121 07:22:16.258637 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/bcd1e197-57ed-4f7c-8be7-b59d0d3e08dc-operator-scripts\") pod \"root-account-create-update-j8ttn\" (UID: \"bcd1e197-57ed-4f7c-8be7-b59d0d3e08dc\") " pod="openstack/root-account-create-update-j8ttn" Jan 21 07:22:16 crc kubenswrapper[4893]: I0121 07:22:16.269549 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-5653-account-create-update-7rtr9"] Jan 21 07:22:16 crc kubenswrapper[4893]: I0121 07:22:16.335352 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-5653-account-create-update-7rtr9"] Jan 21 07:22:16 crc kubenswrapper[4893]: I0121 07:22:16.364142 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wljg\" (UniqueName: \"kubernetes.io/projected/cac277e8-b27c-4412-b25e-0c988c2e5555-kube-api-access-8wljg\") pod \"placement-d3cd-account-create-update-7l8fv\" (UID: \"cac277e8-b27c-4412-b25e-0c988c2e5555\") " pod="openstack/placement-d3cd-account-create-update-7l8fv" Jan 21 07:22:16 crc kubenswrapper[4893]: I0121 07:22:16.364573 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cac277e8-b27c-4412-b25e-0c988c2e5555-operator-scripts\") pod \"placement-d3cd-account-create-update-7l8fv\" (UID: \"cac277e8-b27c-4412-b25e-0c988c2e5555\") " pod="openstack/placement-d3cd-account-create-update-7l8fv" Jan 21 07:22:16 crc kubenswrapper[4893]: I0121 07:22:16.365493 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cac277e8-b27c-4412-b25e-0c988c2e5555-operator-scripts\") pod \"placement-d3cd-account-create-update-7l8fv\" (UID: \"cac277e8-b27c-4412-b25e-0c988c2e5555\") " pod="openstack/placement-d3cd-account-create-update-7l8fv" Jan 21 07:22:16 crc kubenswrapper[4893]: I0121 07:22:16.392485 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jl6df\" (UniqueName: \"kubernetes.io/projected/fa419127-b439-47a0-9b9c-535529d4f7d9-kube-api-access-jl6df\") pod \"barbican-5653-account-create-update-ktlq8\" (UID: \"fa419127-b439-47a0-9b9c-535529d4f7d9\") " pod="openstack/barbican-5653-account-create-update-ktlq8" Jan 21 07:22:16 crc kubenswrapper[4893]: I0121 07:22:16.417558 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5653-account-create-update-ktlq8" Jan 21 07:22:16 crc kubenswrapper[4893]: I0121 07:22:16.427455 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wqv8\" (UniqueName: \"kubernetes.io/projected/bcd1e197-57ed-4f7c-8be7-b59d0d3e08dc-kube-api-access-2wqv8\") pod \"root-account-create-update-j8ttn\" (UID: \"bcd1e197-57ed-4f7c-8be7-b59d0d3e08dc\") " pod="openstack/root-account-create-update-j8ttn" Jan 21 07:22:16 crc kubenswrapper[4893]: I0121 07:22:16.490331 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wljg\" (UniqueName: \"kubernetes.io/projected/cac277e8-b27c-4412-b25e-0c988c2e5555-kube-api-access-8wljg\") pod \"placement-d3cd-account-create-update-7l8fv\" (UID: \"cac277e8-b27c-4412-b25e-0c988c2e5555\") " pod="openstack/placement-d3cd-account-create-update-7l8fv" Jan 21 07:22:16 crc kubenswrapper[4893]: I0121 07:22:16.674579 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-j8ttn" Jan 21 07:22:16 crc kubenswrapper[4893]: I0121 07:22:16.675339 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d3cd-account-create-update-7l8fv" Jan 21 07:22:16 crc kubenswrapper[4893]: I0121 07:22:16.734057 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 07:22:16 crc kubenswrapper[4893]: I0121 07:22:16.734355 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="f4ecaeda-4211-4680-b408-cf7e4717d723" containerName="cinder-scheduler" containerID="cri-o://f8a33c05f22bbd2bc37308e46ea47eb6b47322784149129d4e7b15436d0fd3cc" gracePeriod=30 Jan 21 07:22:16 crc kubenswrapper[4893]: I0121 07:22:16.734899 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="f4ecaeda-4211-4680-b408-cf7e4717d723" containerName="probe" containerID="cri-o://79abdf873765c2bac4b6be09b3097a159efa4814c1dd0b60e7c529c776c0bbbe" gracePeriod=30 Jan 21 07:22:16 crc kubenswrapper[4893]: I0121 07:22:16.759741 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 21 07:22:16 crc kubenswrapper[4893]: I0121 07:22:16.760016 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="93e402d6-b354-4755-83c3-68e43e53c19b" containerName="openstackclient" containerID="cri-o://8efe8fb8b75568eba645314bce31b548eb596cda1bd127a11deb8d7d4c539845" gracePeriod=2 Jan 21 07:22:16 crc kubenswrapper[4893]: I0121 07:22:16.812983 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-eb0d-account-create-update-skmxs"] Jan 21 07:22:16 crc kubenswrapper[4893]: I0121 07:22:16.814414 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-eb0d-account-create-update-skmxs" Jan 21 07:22:16 crc kubenswrapper[4893]: I0121 07:22:16.821689 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 21 07:22:16 crc kubenswrapper[4893]: I0121 07:22:16.899931 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 21 07:22:16 crc kubenswrapper[4893]: I0121 07:22:16.928918 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-eb0d-account-create-update-skmxs"] Jan 21 07:22:16 crc kubenswrapper[4893]: I0121 07:22:16.984260 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-22df-account-create-update-mpgzf"] Jan 21 07:22:16 crc kubenswrapper[4893]: E0121 07:22:16.984961 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93e402d6-b354-4755-83c3-68e43e53c19b" containerName="openstackclient" Jan 21 07:22:16 crc kubenswrapper[4893]: I0121 07:22:16.984987 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="93e402d6-b354-4755-83c3-68e43e53c19b" containerName="openstackclient" Jan 21 07:22:16 crc kubenswrapper[4893]: I0121 07:22:16.985244 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="93e402d6-b354-4755-83c3-68e43e53c19b" containerName="openstackclient" Jan 21 07:22:16 crc kubenswrapper[4893]: I0121 07:22:16.985991 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-22df-account-create-update-mpgzf" Jan 21 07:22:16 crc kubenswrapper[4893]: I0121 07:22:16.989875 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8ccfdb6-50d2-4718-a4d4-20366e02f93f-operator-scripts\") pod \"glance-eb0d-account-create-update-skmxs\" (UID: \"d8ccfdb6-50d2-4718-a4d4-20366e02f93f\") " pod="openstack/glance-eb0d-account-create-update-skmxs" Jan 21 07:22:16 crc kubenswrapper[4893]: I0121 07:22:16.989995 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8pmv\" (UniqueName: \"kubernetes.io/projected/d8ccfdb6-50d2-4718-a4d4-20366e02f93f-kube-api-access-v8pmv\") pod \"glance-eb0d-account-create-update-skmxs\" (UID: \"d8ccfdb6-50d2-4718-a4d4-20366e02f93f\") " pod="openstack/glance-eb0d-account-create-update-skmxs" Jan 21 07:22:17 crc kubenswrapper[4893]: I0121 07:22:17.000228 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 21 07:22:17 crc kubenswrapper[4893]: I0121 07:22:17.092799 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0622bcb2-e8ab-4e4b-a33f-64e48320b232-operator-scripts\") pod \"neutron-22df-account-create-update-mpgzf\" (UID: \"0622bcb2-e8ab-4e4b-a33f-64e48320b232\") " pod="openstack/neutron-22df-account-create-update-mpgzf" Jan 21 07:22:17 crc kubenswrapper[4893]: I0121 07:22:17.092913 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8ccfdb6-50d2-4718-a4d4-20366e02f93f-operator-scripts\") pod \"glance-eb0d-account-create-update-skmxs\" (UID: \"d8ccfdb6-50d2-4718-a4d4-20366e02f93f\") " pod="openstack/glance-eb0d-account-create-update-skmxs" Jan 21 07:22:17 crc kubenswrapper[4893]: I0121 07:22:17.093033 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tstqw\" (UniqueName: \"kubernetes.io/projected/0622bcb2-e8ab-4e4b-a33f-64e48320b232-kube-api-access-tstqw\") pod \"neutron-22df-account-create-update-mpgzf\" (UID: \"0622bcb2-e8ab-4e4b-a33f-64e48320b232\") " pod="openstack/neutron-22df-account-create-update-mpgzf" Jan 21 07:22:17 crc kubenswrapper[4893]: I0121 07:22:17.093061 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8pmv\" (UniqueName: \"kubernetes.io/projected/d8ccfdb6-50d2-4718-a4d4-20366e02f93f-kube-api-access-v8pmv\") pod \"glance-eb0d-account-create-update-skmxs\" (UID: \"d8ccfdb6-50d2-4718-a4d4-20366e02f93f\") " pod="openstack/glance-eb0d-account-create-update-skmxs" Jan 21 07:22:17 crc kubenswrapper[4893]: I0121 07:22:17.103968 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-2981-account-create-update-v76jn"] Jan 21 07:22:17 crc kubenswrapper[4893]: I0121 07:22:17.105662 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-2981-account-create-update-v76jn" Jan 21 07:22:17 crc kubenswrapper[4893]: I0121 07:22:17.109926 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8ccfdb6-50d2-4718-a4d4-20366e02f93f-operator-scripts\") pod \"glance-eb0d-account-create-update-skmxs\" (UID: \"d8ccfdb6-50d2-4718-a4d4-20366e02f93f\") " pod="openstack/glance-eb0d-account-create-update-skmxs" Jan 21 07:22:17 crc kubenswrapper[4893]: I0121 07:22:17.114205 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:17.195637 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tstqw\" (UniqueName: \"kubernetes.io/projected/0622bcb2-e8ab-4e4b-a33f-64e48320b232-kube-api-access-tstqw\") pod \"neutron-22df-account-create-update-mpgzf\" (UID: \"0622bcb2-e8ab-4e4b-a33f-64e48320b232\") " pod="openstack/neutron-22df-account-create-update-mpgzf" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:17.196166 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0622bcb2-e8ab-4e4b-a33f-64e48320b232-operator-scripts\") pod \"neutron-22df-account-create-update-mpgzf\" (UID: \"0622bcb2-e8ab-4e4b-a33f-64e48320b232\") " pod="openstack/neutron-22df-account-create-update-mpgzf" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:17.197516 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8pmv\" (UniqueName: \"kubernetes.io/projected/d8ccfdb6-50d2-4718-a4d4-20366e02f93f-kube-api-access-v8pmv\") pod \"glance-eb0d-account-create-update-skmxs\" (UID: \"d8ccfdb6-50d2-4718-a4d4-20366e02f93f\") " pod="openstack/glance-eb0d-account-create-update-skmxs" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:17.204321 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-22df-account-create-update-mpgzf"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:17.204358 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0622bcb2-e8ab-4e4b-a33f-64e48320b232-operator-scripts\") pod \"neutron-22df-account-create-update-mpgzf\" (UID: \"0622bcb2-e8ab-4e4b-a33f-64e48320b232\") " pod="openstack/neutron-22df-account-create-update-mpgzf" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:17.218753 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-2981-account-create-update-v76jn"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:17.245698 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-d3cd-account-create-update-xzcsc"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:17.381062 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:17.382431 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tstqw\" (UniqueName: \"kubernetes.io/projected/0622bcb2-e8ab-4e4b-a33f-64e48320b232-kube-api-access-tstqw\") pod \"neutron-22df-account-create-update-mpgzf\" (UID: \"0622bcb2-e8ab-4e4b-a33f-64e48320b232\") " pod="openstack/neutron-22df-account-create-update-mpgzf" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:17.383356 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-22df-account-create-update-mpgzf" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:17.384096 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nthqg\" (UniqueName: \"kubernetes.io/projected/1236d7dc-6a98-4d59-8a88-f3101bd017ef-kube-api-access-nthqg\") pod \"cinder-2981-account-create-update-v76jn\" (UID: \"1236d7dc-6a98-4d59-8a88-f3101bd017ef\") " pod="openstack/cinder-2981-account-create-update-v76jn" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:17.384140 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1236d7dc-6a98-4d59-8a88-f3101bd017ef-operator-scripts\") pod \"cinder-2981-account-create-update-v76jn\" (UID: \"1236d7dc-6a98-4d59-8a88-f3101bd017ef\") " pod="openstack/cinder-2981-account-create-update-v76jn" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:17.445401 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-d3cd-account-create-update-xzcsc"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:17.476164 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-eb0d-account-create-update-skmxs" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:17.483816 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:17.484106 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="41fc2d9b-17e4-42b0-bcee-065a237b513c" containerName="cinder-api-log" containerID="cri-o://b233f3f10881d6ab9bfb3f123d866143df46653ea77405b1477d41577b5b9d37" gracePeriod=30 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:17.484621 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="41fc2d9b-17e4-42b0-bcee-065a237b513c" containerName="cinder-api" containerID="cri-o://fb8af694018c30b6b38db1c567cc9a482101811cee291371c4cbd5248400b963" gracePeriod=30 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:17.486031 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nthqg\" (UniqueName: \"kubernetes.io/projected/1236d7dc-6a98-4d59-8a88-f3101bd017ef-kube-api-access-nthqg\") pod \"cinder-2981-account-create-update-v76jn\" (UID: \"1236d7dc-6a98-4d59-8a88-f3101bd017ef\") " pod="openstack/cinder-2981-account-create-update-v76jn" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:17.486067 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1236d7dc-6a98-4d59-8a88-f3101bd017ef-operator-scripts\") pod \"cinder-2981-account-create-update-v76jn\" (UID: \"1236d7dc-6a98-4d59-8a88-f3101bd017ef\") " pod="openstack/cinder-2981-account-create-update-v76jn" Jan 21 07:22:19 crc kubenswrapper[4893]: E0121 07:22:17.493373 4893 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 21 07:22:19 crc kubenswrapper[4893]: E0121 07:22:17.493441 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/89f70f50-3d66-4917-bfe2-1084a55e4eb9-config-data podName:89f70f50-3d66-4917-bfe2-1084a55e4eb9 nodeName:}" failed. 
No retries permitted until 2026-01-21 07:22:17.993423052 +0000 UTC m=+1679.223768954 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/89f70f50-3d66-4917-bfe2-1084a55e4eb9-config-data") pod "rabbitmq-server-0" (UID: "89f70f50-3d66-4917-bfe2-1084a55e4eb9") : configmap "rabbitmq-config-data" not found Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:17.495231 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1236d7dc-6a98-4d59-8a88-f3101bd017ef-operator-scripts\") pod \"cinder-2981-account-create-update-v76jn\" (UID: \"1236d7dc-6a98-4d59-8a88-f3101bd017ef\") " pod="openstack/cinder-2981-account-create-update-v76jn" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:17.504965 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-eb0d-account-create-update-twv2l"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:17.544870 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nthqg\" (UniqueName: \"kubernetes.io/projected/1236d7dc-6a98-4d59-8a88-f3101bd017ef-kube-api-access-nthqg\") pod \"cinder-2981-account-create-update-v76jn\" (UID: \"1236d7dc-6a98-4d59-8a88-f3101bd017ef\") " pod="openstack/cinder-2981-account-create-update-v76jn" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:17.553738 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-eb0d-account-create-update-twv2l"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:17.675516 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17944241-8ed8-4c71-a537-969b68cd694c" path="/var/lib/kubelet/pods/17944241-8ed8-4c71-a537-969b68cd694c/volumes" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:17.676412 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ef43a6a-4efc-4bcd-820b-1eade6c9b094" path="/var/lib/kubelet/pods/1ef43a6a-4efc-4bcd-820b-1eade6c9b094/volumes" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:17.681907 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6be46ad8-4b10-4fdf-8c34-e9003a28acbd" path="/var/lib/kubelet/pods/6be46ad8-4b10-4fdf-8c34-e9003a28acbd/volumes" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:17.682550 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a19944b7-16bb-416c-8d31-c1bdc47a65b3" path="/var/lib/kubelet/pods/a19944b7-16bb-416c-8d31-c1bdc47a65b3/volumes" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:17.683185 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-22df-account-create-update-qdjbd"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:17.683210 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-22df-account-create-update-qdjbd"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:17.700875 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-2981-account-create-update-bkdjc"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:17.758765 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:17.759540 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="3a81ba3d-1493-421c-b0f8-40a16ed8cec8" containerName="openstack-network-exporter" 
containerID="cri-o://891656a23e552f4191c271c0656f4e5f186283f5a0a5cbf39b3a5d9a84777610" gracePeriod=300 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:17.779555 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2981-account-create-update-v76jn" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:17.793797 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-2981-account-create-update-bkdjc"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:17.863361 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:17.863632 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="ac0b6d79-4e8e-499d-afef-53b42511af46" containerName="ovn-northd" containerID="cri-o://d39ecdf969ee720a73564c221653cc20ea5438c4130dd242b39b2895ba9d5477" gracePeriod=30 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:17.864143 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="ac0b6d79-4e8e-499d-afef-53b42511af46" containerName="openstack-network-exporter" containerID="cri-o://85db962aefff38c722556849ce7c8f650d56593c154442c19394da5686adb8c1" gracePeriod=30 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:17.893055 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-bwx97"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:17.907103 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-ktqbh"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:17.924436 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-bwx97"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:17.951762 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-ktqbh"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:17.962763 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-ghmq8"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:17.963091 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-fcd6f8f8f-ghmq8" podUID="482b048f-92a3-485c-be9b-cc4d4bea116f" containerName="dnsmasq-dns" containerID="cri-o://818e24da18a15406b68a06d7381ee70499ab296ff97acccc455e382c7291d203" gracePeriod=10 Jan 21 07:22:19 crc kubenswrapper[4893]: E0121 07:22:18.037243 4893 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 21 07:22:19 crc kubenswrapper[4893]: E0121 07:22:18.037335 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/89f70f50-3d66-4917-bfe2-1084a55e4eb9-config-data podName:89f70f50-3d66-4917-bfe2-1084a55e4eb9 nodeName:}" failed. No retries permitted until 2026-01-21 07:22:19.037306747 +0000 UTC m=+1680.267652649 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/89f70f50-3d66-4917-bfe2-1084a55e4eb9-config-data") pod "rabbitmq-server-0" (UID: "89f70f50-3d66-4917-bfe2-1084a55e4eb9") : configmap "rabbitmq-config-data" not found Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.079290 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-qffvz"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.169109 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="3a81ba3d-1493-421c-b0f8-40a16ed8cec8" containerName="ovsdbserver-sb" containerID="cri-o://be6daeff1c75c5fb82a0f9b3bd2408b907b68938d3de911977a48fe1748fdb71" gracePeriod=300 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.189416 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-zvt96"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.221716 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-d4ab-account-create-update-m77l4"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.245534 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-d4ab-account-create-update-m77l4"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.267776 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-qffvz"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.320027 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-dfvzw"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.385737 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-m2n6w"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.433797 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-m2n6w"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.464903 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-7s4fm"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.465101 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-7s4fm" podUID="12c05c26-e0c2-4516-9fa6-8dc8779d1430" containerName="openstack-network-exporter" containerID="cri-o://ec859e2f4b54ea1bd111ee0644b59235a785cef3079afdc21c159f2f62d9e0d3" gracePeriod=30 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.477833 4893 generic.go:334] "Generic (PLEG): container finished" podID="ac0b6d79-4e8e-499d-afef-53b42511af46" containerID="85db962aefff38c722556849ce7c8f650d56593c154442c19394da5686adb8c1" exitCode=2 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.477899 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"ac0b6d79-4e8e-499d-afef-53b42511af46","Type":"ContainerDied","Data":"85db962aefff38c722556849ce7c8f650d56593c154442c19394da5686adb8c1"} Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.487516 4893 generic.go:334] "Generic (PLEG): container finished" podID="41fc2d9b-17e4-42b0-bcee-065a237b513c" containerID="b233f3f10881d6ab9bfb3f123d866143df46653ea77405b1477d41577b5b9d37" exitCode=143 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.487573 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"41fc2d9b-17e4-42b0-bcee-065a237b513c","Type":"ContainerDied","Data":"b233f3f10881d6ab9bfb3f123d866143df46653ea77405b1477d41577b5b9d37"} Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.489745 4893 generic.go:334] "Generic (PLEG): container finished" podID="482b048f-92a3-485c-be9b-cc4d4bea116f" containerID="818e24da18a15406b68a06d7381ee70499ab296ff97acccc455e382c7291d203" exitCode=0 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.489790 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-ghmq8" event={"ID":"482b048f-92a3-485c-be9b-cc4d4bea116f","Type":"ContainerDied","Data":"818e24da18a15406b68a06d7381ee70499ab296ff97acccc455e382c7291d203"} Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.494366 4893 generic.go:334] "Generic (PLEG): container finished" podID="3a81ba3d-1493-421c-b0f8-40a16ed8cec8" containerID="891656a23e552f4191c271c0656f4e5f186283f5a0a5cbf39b3a5d9a84777610" exitCode=2 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.494391 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"3a81ba3d-1493-421c-b0f8-40a16ed8cec8","Type":"ContainerDied","Data":"891656a23e552f4191c271c0656f4e5f186283f5a0a5cbf39b3a5d9a84777610"} Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.497742 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-wrnx6"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.513759 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-wrnx6"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.529831 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.530710 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="68b3d1f1-4c78-4a98-afcb-a2db1753d676" containerName="openstack-network-exporter" containerID="cri-o://48c749ca430629f3f11cef033f3e9982760ac3bbfd06d3297b7dfe8227939b80" gracePeriod=300 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.548847 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-daab-account-create-update-c75q5"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.581310 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-54745b6874-xnbrr"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.581627 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-54745b6874-xnbrr" podUID="d547505a-34d0-4645-9690-74df58728a46" containerName="placement-log" containerID="cri-o://b6818068f64d6e4a2339a2f96e8276b4f2df53df750ab887db8f0527fe791e58" gracePeriod=30 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.582162 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-54745b6874-xnbrr" podUID="d547505a-34d0-4645-9690-74df58728a46" containerName="placement-api" containerID="cri-o://e229a469d2a8cf4280c6427a369a9b0a149d127bff46b75596626d17591050a6" gracePeriod=30 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.584104 4893 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="openstack/swift-proxy-5795cc4cb5-6bsp7" secret="" err="secret \"swift-swift-dockercfg-mthsx\" not found" Jan 21 07:22:19 crc kubenswrapper[4893]: E0121 07:22:18.594906 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of be6daeff1c75c5fb82a0f9b3bd2408b907b68938d3de911977a48fe1748fdb71 is running failed: container process not found" containerID="be6daeff1c75c5fb82a0f9b3bd2408b907b68938d3de911977a48fe1748fdb71" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 21 07:22:19 crc kubenswrapper[4893]: E0121 07:22:18.595972 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of be6daeff1c75c5fb82a0f9b3bd2408b907b68938d3de911977a48fe1748fdb71 is running failed: container process not found" containerID="be6daeff1c75c5fb82a0f9b3bd2408b907b68938d3de911977a48fe1748fdb71" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 21 07:22:19 crc kubenswrapper[4893]: E0121 07:22:18.597261 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of be6daeff1c75c5fb82a0f9b3bd2408b907b68938d3de911977a48fe1748fdb71 is running failed: container process not found" containerID="be6daeff1c75c5fb82a0f9b3bd2408b907b68938d3de911977a48fe1748fdb71" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 21 07:22:19 crc kubenswrapper[4893]: E0121 07:22:18.597298 4893 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of be6daeff1c75c5fb82a0f9b3bd2408b907b68938d3de911977a48fe1748fdb71 is running failed: container process not found" probeType="Readiness" pod="openstack/ovsdbserver-sb-0" podUID="3a81ba3d-1493-421c-b0f8-40a16ed8cec8" containerName="ovsdbserver-sb" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.603235 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-daab-account-create-update-c75q5"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.613793 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.614096 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="63916786-c676-4695-84a1-3d3be685de16" containerName="glance-log" containerID="cri-o://cf025bb163e48ab531bc02302eeaab4063f97ae75eabc9949d6dec3d92a30857" gracePeriod=30 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.614701 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="63916786-c676-4695-84a1-3d3be685de16" containerName="glance-httpd" containerID="cri-o://2ed56fea6ed96fd765f43737ab0141951ab632e2d98acd1cb85189751d716818" gracePeriod=30 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.639842 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.640353 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="account-server" containerID="cri-o://de3d17ba39098b400e960c4859abe64ec8453a5b7438c807895a08f576cb1c61" gracePeriod=30 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.640917 4893 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/swift-storage-0" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="swift-recon-cron" containerID="cri-o://6a87281c1caeb6e4039eec07e768f22f9b309659361f188928eda3e3a1dbb21a" gracePeriod=30 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.640972 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="rsync" containerID="cri-o://7b5731c8b577d290be2d86e362e9bb9f2c16bff9031dd2e710aba07ac2ce04ed" gracePeriod=30 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.641010 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="object-expirer" containerID="cri-o://463d18ccd25d3b9dfd2bc47bf68e566d842db8f27cec0e30693b206ff7b49443" gracePeriod=30 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.641043 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="object-updater" containerID="cri-o://99f8c2ddbd19e36b260905f52c96953335f374caada62eaa5e2f0f5d967d416d" gracePeriod=30 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.641095 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="object-auditor" containerID="cri-o://8d08cf04d81866f12ef2bd434ff7bf4f3ff11d56786d98d0d1fac803ddd360ca" gracePeriod=30 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.641139 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="object-replicator" containerID="cri-o://f63f20cab196ddeafd343f3658285555d44901b431007768efc93ae2a8129f02" gracePeriod=30 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.641189 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="object-server" containerID="cri-o://bce00b11d38795c86e6d149154dd8aac8c079d3c7fa177fce0f83ff6166a6875" gracePeriod=30 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.641223 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="container-updater" containerID="cri-o://547e0cdd8689c56343dabedff5738e3f1d04a8d69d96acd746b136ec28002be6" gracePeriod=30 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.641264 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="container-auditor" containerID="cri-o://28f0c95878d18811a355a7d69ad8da527d18fc77022addfeaf2830eb6d3f6a58" gracePeriod=30 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.641308 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="container-replicator" containerID="cri-o://6c4ea7e3f7722a19ae4b7b9d432e39556b9257e17680f7182fa23f27573643bf" gracePeriod=30 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.641352 4893 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/swift-storage-0" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="container-server" containerID="cri-o://651e96881878d275dfe2a4a1c62471fd4cf86d8d8127d90f3d7087add5021953" gracePeriod=30 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.641392 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="account-reaper" containerID="cri-o://10d753ac1428ba120d45a7811e9fca56f7ef4a1826bf444055d4ad6a929e369e" gracePeriod=30 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.641426 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="account-auditor" containerID="cri-o://6b3241ca824451ec282b6865606bf40f2795c9f27b6217a6c0357120a18a6e9b" gracePeriod=30 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.641453 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="account-replicator" containerID="cri-o://f9ccf20497fe8d76385ede790b0446d55927de6fa28eb3b5854f288b82fc7991" gracePeriod=30 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.655768 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-md8wl"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.667560 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="68b3d1f1-4c78-4a98-afcb-a2db1753d676" containerName="ovsdbserver-nb" containerID="cri-o://af5bb0f25a6013996cf95b397b7fa8ce33547b30c013d3efe237da97c44f553d" gracePeriod=300 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.669660 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-25ctr"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.689194 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-d4ab-account-create-update-6nbg8"] Jan 21 07:22:19 crc kubenswrapper[4893]: E0121 07:22:18.695202 4893 secret.go:188] Couldn't get secret openstack/swift-proxy-config-data: secret "swift-proxy-config-data" not found Jan 21 07:22:19 crc kubenswrapper[4893]: E0121 07:22:18.695260 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2caca0fd-0f3f-4725-a196-04463abed671-config-data podName:2caca0fd-0f3f-4725-a196-04463abed671 nodeName:}" failed. No retries permitted until 2026-01-21 07:22:19.19524418 +0000 UTC m=+1680.425590082 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/2caca0fd-0f3f-4725-a196-04463abed671-config-data") pod "swift-proxy-5795cc4cb5-6bsp7" (UID: "2caca0fd-0f3f-4725-a196-04463abed671") : secret "swift-proxy-config-data" not found Jan 21 07:22:19 crc kubenswrapper[4893]: E0121 07:22:18.695541 4893 projected.go:263] Couldn't get secret openstack/swift-proxy-config-data: secret "swift-proxy-config-data" not found Jan 21 07:22:19 crc kubenswrapper[4893]: E0121 07:22:18.695555 4893 projected.go:263] Couldn't get secret openstack/swift-conf: secret "swift-conf" not found Jan 21 07:22:19 crc kubenswrapper[4893]: E0121 07:22:18.695566 4893 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 07:22:19 crc kubenswrapper[4893]: E0121 07:22:18.695578 4893 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-5795cc4cb5-6bsp7: [secret "swift-proxy-config-data" not found, secret "swift-conf" not found, configmap "swift-ring-files" not found] Jan 21 07:22:19 crc kubenswrapper[4893]: E0121 07:22:18.695617 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2caca0fd-0f3f-4725-a196-04463abed671-etc-swift podName:2caca0fd-0f3f-4725-a196-04463abed671 nodeName:}" failed. No retries permitted until 2026-01-21 07:22:19.195609571 +0000 UTC m=+1680.425955473 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/2caca0fd-0f3f-4725-a196-04463abed671-etc-swift") pod "swift-proxy-5795cc4cb5-6bsp7" (UID: "2caca0fd-0f3f-4725-a196-04463abed671") : [secret "swift-proxy-config-data" not found, secret "swift-conf" not found, configmap "swift-ring-files" not found] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.699514 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-d4ab-account-create-update-6nbg8" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.722747 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.898554 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-md8wl"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.901287 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-25ctr"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.919289 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-d3cd-account-create-update-7l8fv"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.928698 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-d4ab-account-create-update-6nbg8"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.935823 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-f862-account-create-update-wdbxf"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.944341 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-24ht6"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.953246 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-f862-account-create-update-wdbxf"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.960187 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-24ht6"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.971743 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.972045 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="45545422-414a-433a-9de9-fbfb6e03add3" containerName="glance-log" containerID="cri-o://98a381bf3587dbe6a6decea70f6e5a06994af8d254a33bc9496fa0afb1283c8d" gracePeriod=30 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.972632 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="45545422-414a-433a-9de9-fbfb6e03add3" containerName="glance-httpd" containerID="cri-o://9af6af2cf0b6fc56ff8fff6040414d4c6371bd930a27e4d908e26718f4910e2e" gracePeriod=30 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.975514 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-577cb64ffc-m6fkr"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.975736 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-577cb64ffc-m6fkr" podUID="133bbed0-7073-43ad-881b-893cf8529bb2" containerName="neutron-api" containerID="cri-o://2c2e4963838a51923436692bec77d73dc438b926f3c5c0edc268cb6c72480f66" gracePeriod=30 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.975831 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-577cb64ffc-m6fkr" podUID="133bbed0-7073-43ad-881b-893cf8529bb2" containerName="neutron-httpd" containerID="cri-o://07919e653d69657ea7b011e6891aec998b0e961f74741efc99381bb2776ca73d" gracePeriod=30 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.989410 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/187a5c9a-e642-4826-8543-f53fd0789757-operator-scripts\") pod \"nova-api-d4ab-account-create-update-6nbg8\" (UID: \"187a5c9a-e642-4826-8543-f53fd0789757\") " pod="openstack/nova-api-d4ab-account-create-update-6nbg8" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.989482 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg9h9\" (UniqueName: \"kubernetes.io/projected/187a5c9a-e642-4826-8543-f53fd0789757-kube-api-access-sg9h9\") pod \"nova-api-d4ab-account-create-update-6nbg8\" (UID: \"187a5c9a-e642-4826-8543-f53fd0789757\") " pod="openstack/nova-api-d4ab-account-create-update-6nbg8" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.989821 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-zsq5x"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:18.996331 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-zsq5x"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.012665 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-eb0d-account-create-update-skmxs"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.023949 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-njghj"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.052556 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-5795cc4cb5-6bsp7"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.079857 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7fc4c6bb88-6pfmp"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.080728 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7fc4c6bb88-6pfmp" podUID="4b445f12-f3bf-41d9-91f9-56def2b2694b" containerName="barbican-api-log" containerID="cri-o://af2cbd2416ff8e2a96ecf8094812868e567e247c82f334bac61e2985c9c7061b" gracePeriod=30 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.080827 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7fc4c6bb88-6pfmp" podUID="4b445f12-f3bf-41d9-91f9-56def2b2694b" containerName="barbican-api" containerID="cri-o://7f7d2aeb9b4cbaf2e08372f0fc88c8fdf81814a1c30309f7310a68b860cbf2b7" gracePeriod=30 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.096957 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/187a5c9a-e642-4826-8543-f53fd0789757-operator-scripts\") pod \"nova-api-d4ab-account-create-update-6nbg8\" (UID: \"187a5c9a-e642-4826-8543-f53fd0789757\") " pod="openstack/nova-api-d4ab-account-create-update-6nbg8" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.097116 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg9h9\" (UniqueName: \"kubernetes.io/projected/187a5c9a-e642-4826-8543-f53fd0789757-kube-api-access-sg9h9\") pod \"nova-api-d4ab-account-create-update-6nbg8\" (UID: \"187a5c9a-e642-4826-8543-f53fd0789757\") " pod="openstack/nova-api-d4ab-account-create-update-6nbg8" Jan 21 07:22:19 crc kubenswrapper[4893]: E0121 07:22:19.097864 4893 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 21 07:22:19 crc kubenswrapper[4893]: E0121 07:22:19.097950 4893 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/configmap/89f70f50-3d66-4917-bfe2-1084a55e4eb9-config-data podName:89f70f50-3d66-4917-bfe2-1084a55e4eb9 nodeName:}" failed. No retries permitted until 2026-01-21 07:22:21.097917243 +0000 UTC m=+1682.328263145 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/89f70f50-3d66-4917-bfe2-1084a55e4eb9-config-data") pod "rabbitmq-server-0" (UID: "89f70f50-3d66-4917-bfe2-1084a55e4eb9") : configmap "rabbitmq-config-data" not found Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.103050 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/187a5c9a-e642-4826-8543-f53fd0789757-operator-scripts\") pod \"nova-api-d4ab-account-create-update-6nbg8\" (UID: \"187a5c9a-e642-4826-8543-f53fd0789757\") " pod="openstack/nova-api-d4ab-account-create-update-6nbg8" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.172720 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sg9h9\" (UniqueName: \"kubernetes.io/projected/187a5c9a-e642-4826-8543-f53fd0789757-kube-api-access-sg9h9\") pod \"nova-api-d4ab-account-create-update-6nbg8\" (UID: \"187a5c9a-e642-4826-8543-f53fd0789757\") " pod="openstack/nova-api-d4ab-account-create-update-6nbg8" Jan 21 07:22:19 crc kubenswrapper[4893]: E0121 07:22:19.173062 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d39ecdf969ee720a73564c221653cc20ea5438c4130dd242b39b2895ba9d5477" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.173212 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-njghj"] Jan 21 07:22:19 crc kubenswrapper[4893]: E0121 07:22:19.196577 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d39ecdf969ee720a73564c221653cc20ea5438c4130dd242b39b2895ba9d5477" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 21 07:22:19 crc kubenswrapper[4893]: E0121 07:22:19.202063 4893 projected.go:263] Couldn't get secret openstack/swift-proxy-config-data: secret "swift-proxy-config-data" not found Jan 21 07:22:19 crc kubenswrapper[4893]: E0121 07:22:19.202090 4893 projected.go:263] Couldn't get secret openstack/swift-conf: secret "swift-conf" not found Jan 21 07:22:19 crc kubenswrapper[4893]: E0121 07:22:19.202099 4893 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 07:22:19 crc kubenswrapper[4893]: E0121 07:22:19.202111 4893 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-5795cc4cb5-6bsp7: [secret "swift-proxy-config-data" not found, secret "swift-conf" not found, configmap "swift-ring-files" not found] Jan 21 07:22:19 crc kubenswrapper[4893]: E0121 07:22:19.202157 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2caca0fd-0f3f-4725-a196-04463abed671-etc-swift podName:2caca0fd-0f3f-4725-a196-04463abed671 nodeName:}" failed. No retries permitted until 2026-01-21 07:22:20.202140788 +0000 UTC m=+1681.432486690 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/2caca0fd-0f3f-4725-a196-04463abed671-etc-swift") pod "swift-proxy-5795cc4cb5-6bsp7" (UID: "2caca0fd-0f3f-4725-a196-04463abed671") : [secret "swift-proxy-config-data" not found, secret "swift-conf" not found, configmap "swift-ring-files" not found] Jan 21 07:22:19 crc kubenswrapper[4893]: E0121 07:22:19.202539 4893 secret.go:188] Couldn't get secret openstack/swift-proxy-config-data: secret "swift-proxy-config-data" not found Jan 21 07:22:19 crc kubenswrapper[4893]: E0121 07:22:19.202564 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2caca0fd-0f3f-4725-a196-04463abed671-config-data podName:2caca0fd-0f3f-4725-a196-04463abed671 nodeName:}" failed. No retries permitted until 2026-01-21 07:22:20.2025565 +0000 UTC m=+1681.432902402 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/2caca0fd-0f3f-4725-a196-04463abed671-config-data") pod "swift-proxy-5795cc4cb5-6bsp7" (UID: "2caca0fd-0f3f-4725-a196-04463abed671") : secret "swift-proxy-config-data" not found Jan 21 07:22:19 crc kubenswrapper[4893]: E0121 07:22:19.211657 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d39ecdf969ee720a73564c221653cc20ea5438c4130dd242b39b2895ba9d5477" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 21 07:22:19 crc kubenswrapper[4893]: E0121 07:22:19.211744 4893 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="ac0b6d79-4e8e-499d-afef-53b42511af46" containerName="ovn-northd" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.243382 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-6bd56d5cbf-gkdlb"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.243632 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-6bd56d5cbf-gkdlb" podUID="4c20f882-3bde-49a2-857e-207fe47d5aae" containerName="barbican-worker-log" containerID="cri-o://c88d130cc82c49bf6ae1c611cdbaa9e2ce62ffa7e1d23413d4010afe63beedd5" gracePeriod=30 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.244167 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-6bd56d5cbf-gkdlb" podUID="4c20f882-3bde-49a2-857e-207fe47d5aae" containerName="barbican-worker" containerID="cri-o://87c38972a6e91adfc22b0f243c62624ce591c7a3c511e5aad78412c1db488300" gracePeriod=30 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.292763 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-d4ab-account-create-update-6nbg8" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.321626 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-5653-account-create-update-ktlq8"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.339844 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-lzdts"] Jan 21 07:22:19 crc kubenswrapper[4893]: E0121 07:22:19.343961 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of af5bb0f25a6013996cf95b397b7fa8ce33547b30c013d3efe237da97c44f553d is running failed: container process not found" containerID="af5bb0f25a6013996cf95b397b7fa8ce33547b30c013d3efe237da97c44f553d" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 21 07:22:19 crc kubenswrapper[4893]: E0121 07:22:19.352960 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of af5bb0f25a6013996cf95b397b7fa8ce33547b30c013d3efe237da97c44f553d is running failed: container process not found" containerID="af5bb0f25a6013996cf95b397b7fa8ce33547b30c013d3efe237da97c44f553d" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 21 07:22:19 crc kubenswrapper[4893]: E0121 07:22:19.354943 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of af5bb0f25a6013996cf95b397b7fa8ce33547b30c013d3efe237da97c44f553d is running failed: container process not found" containerID="af5bb0f25a6013996cf95b397b7fa8ce33547b30c013d3efe237da97c44f553d" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 21 07:22:19 crc kubenswrapper[4893]: E0121 07:22:19.354979 4893 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of af5bb0f25a6013996cf95b397b7fa8ce33547b30c013d3efe237da97c44f553d is running failed: container process not found" probeType="Readiness" pod="openstack/ovsdbserver-nb-0" podUID="68b3d1f1-4c78-4a98-afcb-a2db1753d676" containerName="ovsdbserver-nb" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.370861 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-lzdts"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.402111 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-22df-account-create-update-mpgzf"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.419780 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-84df8fdfdb-8dxsk"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.420098 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-84df8fdfdb-8dxsk" podUID="96d1b606-34ad-4e36-ad61-6db3b4a7c3e1" containerName="barbican-keystone-listener-log" containerID="cri-o://911526d6926efcd3de4bef0ee4d5862491c677b9a9b4639aa131893753ece29e" gracePeriod=30 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.420647 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-84df8fdfdb-8dxsk" podUID="96d1b606-34ad-4e36-ad61-6db3b4a7c3e1" containerName="barbican-keystone-listener" containerID="cri-o://186a1d1fbbe587858b7d65a3e3d819601c2b15e5f6afb9d61e13a1623b7c2cf4" gracePeriod=30 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 
07:22:19.423255 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-lqzk6"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.460609 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-zvt96" podUID="78d5f974-5570-4407-8dbe-7471ae98fd50" containerName="ovs-vswitchd" containerID="cri-o://641f45d881156d21fd8815cd5b5efbac82f8de33d1a526e07cb2065a85cb4351" gracePeriod=29 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.618074 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15bb49fe-ded6-45cb-b094-05da46c3f9e8" path="/var/lib/kubelet/pods/15bb49fe-ded6-45cb-b094-05da46c3f9e8/volumes" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.618655 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ae6eb33-b5b8-4ed9-a227-b96f365a49a3" path="/var/lib/kubelet/pods/3ae6eb33-b5b8-4ed9-a227-b96f365a49a3/volumes" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.619470 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4143e0f4-e3d5-44f7-aafd-38f977694010" path="/var/lib/kubelet/pods/4143e0f4-e3d5-44f7-aafd-38f977694010/volumes" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.619780 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_68b3d1f1-4c78-4a98-afcb-a2db1753d676/ovsdbserver-nb/0.log" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.619844 4893 generic.go:334] "Generic (PLEG): container finished" podID="68b3d1f1-4c78-4a98-afcb-a2db1753d676" containerID="48c749ca430629f3f11cef033f3e9982760ac3bbfd06d3297b7dfe8227939b80" exitCode=2 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.619912 4893 generic.go:334] "Generic (PLEG): container finished" podID="68b3d1f1-4c78-4a98-afcb-a2db1753d676" containerID="af5bb0f25a6013996cf95b397b7fa8ce33547b30c013d3efe237da97c44f553d" exitCode=143 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.620121 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ab997d7-7cc4-49d1-bb60-9459e0d838e2" path="/var/lib/kubelet/pods/5ab997d7-7cc4-49d1-bb60-9459e0d838e2/volumes" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.621209 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619" path="/var/lib/kubelet/pods/5db728ce-a72e-4f2e-9ed0-0a7c0c3dd619/volumes" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.621906 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="828f06a8-358b-486a-9339-520cba2baf52" path="/var/lib/kubelet/pods/828f06a8-358b-486a-9339-520cba2baf52/volumes" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.622634 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a2b7c6d-f80f-4ae0-9628-30dd29e491fe" path="/var/lib/kubelet/pods/9a2b7c6d-f80f-4ae0-9628-30dd29e491fe/volumes" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.625583 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b624b5ac-d2e6-442d-8411-656210764688" path="/var/lib/kubelet/pods/b624b5ac-d2e6-442d-8411-656210764688/volumes" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.626240 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8f4d459-c1f7-43e5-a648-e0ee4bf83fb1" path="/var/lib/kubelet/pods/c8f4d459-c1f7-43e5-a648-e0ee4bf83fb1/volumes" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.626986 4893 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="caef5d13-59d3-4ca2-b6c9-77b9616a91c8" path="/var/lib/kubelet/pods/caef5d13-59d3-4ca2-b6c9-77b9616a91c8/volumes" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.628274 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dab58e44-b25e-4390-b604-ea1e17365c8e" path="/var/lib/kubelet/pods/dab58e44-b25e-4390-b604-ea1e17365c8e/volumes" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.628359 4893 generic.go:334] "Generic (PLEG): container finished" podID="4c20f882-3bde-49a2-857e-207fe47d5aae" containerID="c88d130cc82c49bf6ae1c611cdbaa9e2ce62ffa7e1d23413d4010afe63beedd5" exitCode=143 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.628897 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebb91341-a678-4bd6-96a9-8bad10274b2c" path="/var/lib/kubelet/pods/ebb91341-a678-4bd6-96a9-8bad10274b2c/volumes" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.631565 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ede1b41d-0fd3-4c19-ba5d-bcfee1482f94" path="/var/lib/kubelet/pods/ede1b41d-0fd3-4c19-ba5d-bcfee1482f94/volumes" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.632204 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3284c32-3995-4e0e-a6ee-15a79317eaab" path="/var/lib/kubelet/pods/f3284c32-3995-4e0e-a6ee-15a79317eaab/volumes" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.632826 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fea804eb-300b-451e-aa99-99ff7ed06070" path="/var/lib/kubelet/pods/fea804eb-300b-451e-aa99-99ff7ed06070/volumes" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.633764 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fff7b3eb-e8c3-4d58-932b-3738b1e8dffa" path="/var/lib/kubelet/pods/fff7b3eb-e8c3-4d58-932b-3738b1e8dffa/volumes" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.634854 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"68b3d1f1-4c78-4a98-afcb-a2db1753d676","Type":"ContainerDied","Data":"48c749ca430629f3f11cef033f3e9982760ac3bbfd06d3297b7dfe8227939b80"} Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.634882 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-lqzk6"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.634902 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"68b3d1f1-4c78-4a98-afcb-a2db1753d676","Type":"ContainerDied","Data":"af5bb0f25a6013996cf95b397b7fa8ce33547b30c013d3efe237da97c44f553d"} Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.634913 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6bd56d5cbf-gkdlb" event={"ID":"4c20f882-3bde-49a2-857e-207fe47d5aae","Type":"ContainerDied","Data":"c88d130cc82c49bf6ae1c611cdbaa9e2ce62ffa7e1d23413d4010afe63beedd5"} Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.645598 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"45545422-414a-433a-9de9-fbfb6e03add3","Type":"ContainerDied","Data":"98a381bf3587dbe6a6decea70f6e5a06994af8d254a33bc9496fa0afb1283c8d"} Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.645462 4893 generic.go:334] "Generic (PLEG): container finished" podID="45545422-414a-433a-9de9-fbfb6e03add3" containerID="98a381bf3587dbe6a6decea70f6e5a06994af8d254a33bc9496fa0afb1283c8d" exitCode=143 Jan 
21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.647067 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.647429 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="9c8d3670-41c0-4649-8a2f-38b090638cac" containerName="nova-scheduler-scheduler" containerID="cri-o://2d1941830623140962da37a4e0f34493c6b7e19795a6d3b41ff47906cfda9a51" gracePeriod=30 Jan 21 07:22:19 crc kubenswrapper[4893]: E0121 07:22:19.663003 4893 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Jan 21 07:22:19 crc kubenswrapper[4893]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 21 07:22:19 crc kubenswrapper[4893]: + source /usr/local/bin/container-scripts/functions Jan 21 07:22:19 crc kubenswrapper[4893]: ++ OVNBridge=br-int Jan 21 07:22:19 crc kubenswrapper[4893]: ++ OVNRemote=tcp:localhost:6642 Jan 21 07:22:19 crc kubenswrapper[4893]: ++ OVNEncapType=geneve Jan 21 07:22:19 crc kubenswrapper[4893]: ++ OVNAvailabilityZones= Jan 21 07:22:19 crc kubenswrapper[4893]: ++ EnableChassisAsGateway=true Jan 21 07:22:19 crc kubenswrapper[4893]: ++ PhysicalNetworks= Jan 21 07:22:19 crc kubenswrapper[4893]: ++ OVNHostName= Jan 21 07:22:19 crc kubenswrapper[4893]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 21 07:22:19 crc kubenswrapper[4893]: ++ ovs_dir=/var/lib/openvswitch Jan 21 07:22:19 crc kubenswrapper[4893]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 21 07:22:19 crc kubenswrapper[4893]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 21 07:22:19 crc kubenswrapper[4893]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 21 07:22:19 crc kubenswrapper[4893]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 21 07:22:19 crc kubenswrapper[4893]: + sleep 0.5 Jan 21 07:22:19 crc kubenswrapper[4893]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 21 07:22:19 crc kubenswrapper[4893]: + sleep 0.5 Jan 21 07:22:19 crc kubenswrapper[4893]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 21 07:22:19 crc kubenswrapper[4893]: + cleanup_ovsdb_server_semaphore Jan 21 07:22:19 crc kubenswrapper[4893]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 21 07:22:19 crc kubenswrapper[4893]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 21 07:22:19 crc kubenswrapper[4893]: > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-zvt96" message=< Jan 21 07:22:19 crc kubenswrapper[4893]: Exiting ovsdb-server (5) ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 21 07:22:19 crc kubenswrapper[4893]: + source /usr/local/bin/container-scripts/functions Jan 21 07:22:19 crc kubenswrapper[4893]: ++ OVNBridge=br-int Jan 21 07:22:19 crc kubenswrapper[4893]: ++ OVNRemote=tcp:localhost:6642 Jan 21 07:22:19 crc kubenswrapper[4893]: ++ OVNEncapType=geneve Jan 21 07:22:19 crc kubenswrapper[4893]: ++ OVNAvailabilityZones= Jan 21 07:22:19 crc kubenswrapper[4893]: ++ EnableChassisAsGateway=true Jan 21 07:22:19 crc kubenswrapper[4893]: ++ PhysicalNetworks= Jan 21 07:22:19 crc kubenswrapper[4893]: ++ OVNHostName= Jan 21 07:22:19 crc kubenswrapper[4893]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 21 07:22:19 crc kubenswrapper[4893]: ++ ovs_dir=/var/lib/openvswitch Jan 21 07:22:19 crc kubenswrapper[4893]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 21 07:22:19 crc kubenswrapper[4893]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 21 07:22:19 crc kubenswrapper[4893]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 21 07:22:19 crc kubenswrapper[4893]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 21 07:22:19 crc kubenswrapper[4893]: + sleep 0.5 Jan 21 07:22:19 crc kubenswrapper[4893]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 21 07:22:19 crc kubenswrapper[4893]: + sleep 0.5 Jan 21 07:22:19 crc kubenswrapper[4893]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 21 07:22:19 crc kubenswrapper[4893]: + cleanup_ovsdb_server_semaphore Jan 21 07:22:19 crc kubenswrapper[4893]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 21 07:22:19 crc kubenswrapper[4893]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 21 07:22:19 crc kubenswrapper[4893]: > Jan 21 07:22:19 crc kubenswrapper[4893]: E0121 07:22:19.663054 4893 kuberuntime_container.go:691] "PreStop hook failed" err=< Jan 21 07:22:19 crc kubenswrapper[4893]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 21 07:22:19 crc kubenswrapper[4893]: + source /usr/local/bin/container-scripts/functions Jan 21 07:22:19 crc kubenswrapper[4893]: ++ OVNBridge=br-int Jan 21 07:22:19 crc kubenswrapper[4893]: ++ OVNRemote=tcp:localhost:6642 Jan 21 07:22:19 crc kubenswrapper[4893]: ++ OVNEncapType=geneve Jan 21 07:22:19 crc kubenswrapper[4893]: ++ OVNAvailabilityZones= Jan 21 07:22:19 crc kubenswrapper[4893]: ++ EnableChassisAsGateway=true Jan 21 07:22:19 crc kubenswrapper[4893]: ++ PhysicalNetworks= Jan 21 07:22:19 crc kubenswrapper[4893]: ++ OVNHostName= Jan 21 07:22:19 crc kubenswrapper[4893]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 21 07:22:19 crc kubenswrapper[4893]: ++ ovs_dir=/var/lib/openvswitch Jan 21 07:22:19 crc kubenswrapper[4893]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 21 07:22:19 crc kubenswrapper[4893]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 21 07:22:19 crc kubenswrapper[4893]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 21 07:22:19 crc kubenswrapper[4893]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 21 07:22:19 crc kubenswrapper[4893]: + sleep 0.5 Jan 21 07:22:19 crc kubenswrapper[4893]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 21 07:22:19 crc kubenswrapper[4893]: + sleep 0.5 Jan 21 07:22:19 crc kubenswrapper[4893]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 21 07:22:19 crc kubenswrapper[4893]: + cleanup_ovsdb_server_semaphore Jan 21 07:22:19 crc kubenswrapper[4893]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 21 07:22:19 crc kubenswrapper[4893]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 21 07:22:19 crc kubenswrapper[4893]: > pod="openstack/ovn-controller-ovs-zvt96" podUID="78d5f974-5570-4407-8dbe-7471ae98fd50" containerName="ovsdb-server" containerID="cri-o://ee2ac52ca03e9ba8604209edd0e24ede0af7849c83ac6195ee87a7943fa359b3" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.663093 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-zvt96" podUID="78d5f974-5570-4407-8dbe-7471ae98fd50" containerName="ovsdb-server" containerID="cri-o://ee2ac52ca03e9ba8604209edd0e24ede0af7849c83ac6195ee87a7943fa359b3" gracePeriod=29 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.671876 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-t7q6q"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.681844 4893 generic.go:334] "Generic (PLEG): container finished" podID="63916786-c676-4695-84a1-3d3be685de16" containerID="cf025bb163e48ab531bc02302eeaab4063f97ae75eabc9949d6dec3d92a30857" exitCode=143 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.681965 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"63916786-c676-4695-84a1-3d3be685de16","Type":"ContainerDied","Data":"cf025bb163e48ab531bc02302eeaab4063f97ae75eabc9949d6dec3d92a30857"} Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.698843 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-7s4fm_12c05c26-e0c2-4516-9fa6-8dc8779d1430/openstack-network-exporter/0.log" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.698908 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-7s4fm" Jan 21 07:22:19 crc kubenswrapper[4893]: E0121 07:22:19.700577 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2d1941830623140962da37a4e0f34493c6b7e19795a6d3b41ff47906cfda9a51" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.700763 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-t7q6q"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.702499 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_3a81ba3d-1493-421c-b0f8-40a16ed8cec8/ovsdbserver-sb/0.log" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.702542 4893 generic.go:334] "Generic (PLEG): container finished" podID="3a81ba3d-1493-421c-b0f8-40a16ed8cec8" containerID="be6daeff1c75c5fb82a0f9b3bd2408b907b68938d3de911977a48fe1748fdb71" exitCode=143 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.702589 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"3a81ba3d-1493-421c-b0f8-40a16ed8cec8","Type":"ContainerDied","Data":"be6daeff1c75c5fb82a0f9b3bd2408b907b68938d3de911977a48fe1748fdb71"} Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.702615 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"3a81ba3d-1493-421c-b0f8-40a16ed8cec8","Type":"ContainerDied","Data":"a2924178b99efbadd09b77faff24ee66e837b76bb0586668fa8de3acd0ebb6a4"} Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.702626 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2924178b99efbadd09b77faff24ee66e837b76bb0586668fa8de3acd0ebb6a4" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.705994 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcd6f8f8f-ghmq8" Jan 21 07:22:19 crc kubenswrapper[4893]: E0121 07:22:19.706364 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2d1941830623140962da37a4e0f34493c6b7e19795a6d3b41ff47906cfda9a51" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.707619 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_3a81ba3d-1493-421c-b0f8-40a16ed8cec8/ovsdbserver-sb/0.log" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.707690 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 21 07:22:19 crc kubenswrapper[4893]: E0121 07:22:19.708029 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2d1941830623140962da37a4e0f34493c6b7e19795a6d3b41ff47906cfda9a51" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 21 07:22:19 crc kubenswrapper[4893]: E0121 07:22:19.708062 4893 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="9c8d3670-41c0-4649-8a2f-38b090638cac" containerName="nova-scheduler-scheduler" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.717309 4893 generic.go:334] "Generic (PLEG): container finished" podID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerID="7b5731c8b577d290be2d86e362e9bb9f2c16bff9031dd2e710aba07ac2ce04ed" exitCode=0 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.717345 4893 generic.go:334] "Generic (PLEG): container finished" podID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerID="463d18ccd25d3b9dfd2bc47bf68e566d842db8f27cec0e30693b206ff7b49443" exitCode=0 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.717353 4893 generic.go:334] "Generic (PLEG): container finished" podID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerID="99f8c2ddbd19e36b260905f52c96953335f374caada62eaa5e2f0f5d967d416d" exitCode=0 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.717361 4893 generic.go:334] "Generic (PLEG): container finished" podID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerID="8d08cf04d81866f12ef2bd434ff7bf4f3ff11d56786d98d0d1fac803ddd360ca" exitCode=0 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.717369 4893 generic.go:334] "Generic (PLEG): container finished" podID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerID="f63f20cab196ddeafd343f3658285555d44901b431007768efc93ae2a8129f02" exitCode=0 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.717375 4893 generic.go:334] "Generic (PLEG): container finished" podID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerID="bce00b11d38795c86e6d149154dd8aac8c079d3c7fa177fce0f83ff6166a6875" exitCode=0 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.717381 4893 generic.go:334] "Generic (PLEG): container finished" podID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerID="547e0cdd8689c56343dabedff5738e3f1d04a8d69d96acd746b136ec28002be6" exitCode=0 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.717392 4893 generic.go:334] "Generic (PLEG): container finished" podID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerID="28f0c95878d18811a355a7d69ad8da527d18fc77022addfeaf2830eb6d3f6a58" exitCode=0 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.717398 4893 generic.go:334] "Generic (PLEG): container finished" podID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerID="6c4ea7e3f7722a19ae4b7b9d432e39556b9257e17680f7182fa23f27573643bf" exitCode=0 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.717404 4893 generic.go:334] "Generic (PLEG): container finished" podID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerID="651e96881878d275dfe2a4a1c62471fd4cf86d8d8127d90f3d7087add5021953" exitCode=0 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.717411 4893 generic.go:334] "Generic (PLEG): container finished" 
podID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerID="10d753ac1428ba120d45a7811e9fca56f7ef4a1826bf444055d4ad6a929e369e" exitCode=0 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.717417 4893 generic.go:334] "Generic (PLEG): container finished" podID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerID="6b3241ca824451ec282b6865606bf40f2795c9f27b6217a6c0357120a18a6e9b" exitCode=0 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.717424 4893 generic.go:334] "Generic (PLEG): container finished" podID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerID="f9ccf20497fe8d76385ede790b0446d55927de6fa28eb3b5854f288b82fc7991" exitCode=0 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.717429 4893 generic.go:334] "Generic (PLEG): container finished" podID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerID="de3d17ba39098b400e960c4859abe64ec8453a5b7438c807895a08f576cb1c61" exitCode=0 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.717477 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b","Type":"ContainerDied","Data":"7b5731c8b577d290be2d86e362e9bb9f2c16bff9031dd2e710aba07ac2ce04ed"} Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.717524 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b","Type":"ContainerDied","Data":"463d18ccd25d3b9dfd2bc47bf68e566d842db8f27cec0e30693b206ff7b49443"} Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.717540 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b","Type":"ContainerDied","Data":"99f8c2ddbd19e36b260905f52c96953335f374caada62eaa5e2f0f5d967d416d"} Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.717549 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b","Type":"ContainerDied","Data":"8d08cf04d81866f12ef2bd434ff7bf4f3ff11d56786d98d0d1fac803ddd360ca"} Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.717561 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b","Type":"ContainerDied","Data":"f63f20cab196ddeafd343f3658285555d44901b431007768efc93ae2a8129f02"} Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.717569 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b","Type":"ContainerDied","Data":"bce00b11d38795c86e6d149154dd8aac8c079d3c7fa177fce0f83ff6166a6875"} Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.717577 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b","Type":"ContainerDied","Data":"547e0cdd8689c56343dabedff5738e3f1d04a8d69d96acd746b136ec28002be6"} Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.717585 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b","Type":"ContainerDied","Data":"28f0c95878d18811a355a7d69ad8da527d18fc77022addfeaf2830eb6d3f6a58"} Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.717593 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b","Type":"ContainerDied","Data":"6c4ea7e3f7722a19ae4b7b9d432e39556b9257e17680f7182fa23f27573643bf"} Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.717603 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b","Type":"ContainerDied","Data":"651e96881878d275dfe2a4a1c62471fd4cf86d8d8127d90f3d7087add5021953"} Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.717611 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b","Type":"ContainerDied","Data":"10d753ac1428ba120d45a7811e9fca56f7ef4a1826bf444055d4ad6a929e369e"} Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.717620 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b","Type":"ContainerDied","Data":"6b3241ca824451ec282b6865606bf40f2795c9f27b6217a6c0357120a18a6e9b"} Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.717628 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b","Type":"ContainerDied","Data":"f9ccf20497fe8d76385ede790b0446d55927de6fa28eb3b5854f288b82fc7991"} Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.717635 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b","Type":"ContainerDied","Data":"de3d17ba39098b400e960c4859abe64ec8453a5b7438c807895a08f576cb1c61"} Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.719477 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.719753 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6" containerName="nova-metadata-log" containerID="cri-o://e5a2c7e9e8afbc7e00e8fde7ad874e7b56174cc0c7a9869b437318952fda7126" gracePeriod=30 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.719813 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6" containerName="nova-metadata-metadata" containerID="cri-o://357413c2169766654a3f84ffb51b7dca2610fa69e2e67bc3239b3491d881ff66" gracePeriod=30 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.736227 4893 generic.go:334] "Generic (PLEG): container finished" podID="4b445f12-f3bf-41d9-91f9-56def2b2694b" containerID="af2cbd2416ff8e2a96ecf8094812868e567e247c82f334bac61e2985c9c7061b" exitCode=143 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.736314 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7fc4c6bb88-6pfmp" event={"ID":"4b445f12-f3bf-41d9-91f9-56def2b2694b","Type":"ContainerDied","Data":"af2cbd2416ff8e2a96ecf8094812868e567e247c82f334bac61e2985c9c7061b"} Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.742065 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.753284 4893 generic.go:334] "Generic (PLEG): container finished" podID="93e402d6-b354-4755-83c3-68e43e53c19b" containerID="8efe8fb8b75568eba645314bce31b548eb596cda1bd127a11deb8d7d4c539845" exitCode=137 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 
07:22:19.758853 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-7s4fm_12c05c26-e0c2-4516-9fa6-8dc8779d1430/openstack-network-exporter/0.log" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.758909 4893 generic.go:334] "Generic (PLEG): container finished" podID="12c05c26-e0c2-4516-9fa6-8dc8779d1430" containerID="ec859e2f4b54ea1bd111ee0644b59235a785cef3079afdc21c159f2f62d9e0d3" exitCode=2 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.758960 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-7s4fm" event={"ID":"12c05c26-e0c2-4516-9fa6-8dc8779d1430","Type":"ContainerDied","Data":"ec859e2f4b54ea1bd111ee0644b59235a785cef3079afdc21c159f2f62d9e0d3"} Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.758987 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-7s4fm" event={"ID":"12c05c26-e0c2-4516-9fa6-8dc8779d1430","Type":"ContainerDied","Data":"a0c883ea147207c7acdcc368c0b922ec3e2d5ab7263c63e60558b14755d7b918"} Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.759007 4893 scope.go:117] "RemoveContainer" containerID="ec859e2f4b54ea1bd111ee0644b59235a785cef3079afdc21c159f2f62d9e0d3" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.759136 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-7s4fm" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.760404 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-2981-account-create-update-v76jn"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.768726 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"3a81ba3d-1493-421c-b0f8-40a16ed8cec8\" (UID: \"3a81ba3d-1493-421c-b0f8-40a16ed8cec8\") " Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.768783 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/482b048f-92a3-485c-be9b-cc4d4bea116f-config\") pod \"482b048f-92a3-485c-be9b-cc4d4bea116f\" (UID: \"482b048f-92a3-485c-be9b-cc4d4bea116f\") " Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.768815 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/12c05c26-e0c2-4516-9fa6-8dc8779d1430-metrics-certs-tls-certs\") pod \"12c05c26-e0c2-4516-9fa6-8dc8779d1430\" (UID: \"12c05c26-e0c2-4516-9fa6-8dc8779d1430\") " Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.768844 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a81ba3d-1493-421c-b0f8-40a16ed8cec8-ovsdbserver-sb-tls-certs\") pod \"3a81ba3d-1493-421c-b0f8-40a16ed8cec8\" (UID: \"3a81ba3d-1493-421c-b0f8-40a16ed8cec8\") " Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.768870 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3a81ba3d-1493-421c-b0f8-40a16ed8cec8-ovsdb-rundir\") pod \"3a81ba3d-1493-421c-b0f8-40a16ed8cec8\" (UID: \"3a81ba3d-1493-421c-b0f8-40a16ed8cec8\") " Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.768915 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/3a81ba3d-1493-421c-b0f8-40a16ed8cec8-combined-ca-bundle\") pod \"3a81ba3d-1493-421c-b0f8-40a16ed8cec8\" (UID: \"3a81ba3d-1493-421c-b0f8-40a16ed8cec8\") " Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.768936 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/12c05c26-e0c2-4516-9fa6-8dc8779d1430-ovn-rundir\") pod \"12c05c26-e0c2-4516-9fa6-8dc8779d1430\" (UID: \"12c05c26-e0c2-4516-9fa6-8dc8779d1430\") " Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.768952 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a81ba3d-1493-421c-b0f8-40a16ed8cec8-config\") pod \"3a81ba3d-1493-421c-b0f8-40a16ed8cec8\" (UID: \"3a81ba3d-1493-421c-b0f8-40a16ed8cec8\") " Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.768983 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12c05c26-e0c2-4516-9fa6-8dc8779d1430-config\") pod \"12c05c26-e0c2-4516-9fa6-8dc8779d1430\" (UID: \"12c05c26-e0c2-4516-9fa6-8dc8779d1430\") " Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.769005 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/482b048f-92a3-485c-be9b-cc4d4bea116f-ovsdbserver-sb\") pod \"482b048f-92a3-485c-be9b-cc4d4bea116f\" (UID: \"482b048f-92a3-485c-be9b-cc4d4bea116f\") " Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.769031 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jplt\" (UniqueName: \"kubernetes.io/projected/3a81ba3d-1493-421c-b0f8-40a16ed8cec8-kube-api-access-2jplt\") pod \"3a81ba3d-1493-421c-b0f8-40a16ed8cec8\" (UID: \"3a81ba3d-1493-421c-b0f8-40a16ed8cec8\") " Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.769047 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3a81ba3d-1493-421c-b0f8-40a16ed8cec8-scripts\") pod \"3a81ba3d-1493-421c-b0f8-40a16ed8cec8\" (UID: \"3a81ba3d-1493-421c-b0f8-40a16ed8cec8\") " Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.769081 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/482b048f-92a3-485c-be9b-cc4d4bea116f-dns-swift-storage-0\") pod \"482b048f-92a3-485c-be9b-cc4d4bea116f\" (UID: \"482b048f-92a3-485c-be9b-cc4d4bea116f\") " Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.769098 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12c05c26-e0c2-4516-9fa6-8dc8779d1430-combined-ca-bundle\") pod \"12c05c26-e0c2-4516-9fa6-8dc8779d1430\" (UID: \"12c05c26-e0c2-4516-9fa6-8dc8779d1430\") " Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.769120 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/482b048f-92a3-485c-be9b-cc4d4bea116f-dns-svc\") pod \"482b048f-92a3-485c-be9b-cc4d4bea116f\" (UID: \"482b048f-92a3-485c-be9b-cc4d4bea116f\") " Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.769160 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9pxs\" (UniqueName: 
\"kubernetes.io/projected/482b048f-92a3-485c-be9b-cc4d4bea116f-kube-api-access-d9pxs\") pod \"482b048f-92a3-485c-be9b-cc4d4bea116f\" (UID: \"482b048f-92a3-485c-be9b-cc4d4bea116f\") " Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.769178 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/482b048f-92a3-485c-be9b-cc4d4bea116f-ovsdbserver-nb\") pod \"482b048f-92a3-485c-be9b-cc4d4bea116f\" (UID: \"482b048f-92a3-485c-be9b-cc4d4bea116f\") " Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.769193 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4shq\" (UniqueName: \"kubernetes.io/projected/12c05c26-e0c2-4516-9fa6-8dc8779d1430-kube-api-access-k4shq\") pod \"12c05c26-e0c2-4516-9fa6-8dc8779d1430\" (UID: \"12c05c26-e0c2-4516-9fa6-8dc8779d1430\") " Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.769234 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a81ba3d-1493-421c-b0f8-40a16ed8cec8-metrics-certs-tls-certs\") pod \"3a81ba3d-1493-421c-b0f8-40a16ed8cec8\" (UID: \"3a81ba3d-1493-421c-b0f8-40a16ed8cec8\") " Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.769255 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/12c05c26-e0c2-4516-9fa6-8dc8779d1430-ovs-rundir\") pod \"12c05c26-e0c2-4516-9fa6-8dc8779d1430\" (UID: \"12c05c26-e0c2-4516-9fa6-8dc8779d1430\") " Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.769306 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a81ba3d-1493-421c-b0f8-40a16ed8cec8-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "3a81ba3d-1493-421c-b0f8-40a16ed8cec8" (UID: "3a81ba3d-1493-421c-b0f8-40a16ed8cec8"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.769562 4893 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3a81ba3d-1493-421c-b0f8-40a16ed8cec8-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.770096 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a81ba3d-1493-421c-b0f8-40a16ed8cec8-config" (OuterVolumeSpecName: "config") pod "3a81ba3d-1493-421c-b0f8-40a16ed8cec8" (UID: "3a81ba3d-1493-421c-b0f8-40a16ed8cec8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.770158 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12c05c26-e0c2-4516-9fa6-8dc8779d1430-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "12c05c26-e0c2-4516-9fa6-8dc8779d1430" (UID: "12c05c26-e0c2-4516-9fa6-8dc8779d1430"). InnerVolumeSpecName "ovn-rundir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.771485 4893 generic.go:334] "Generic (PLEG): container finished" podID="f4ecaeda-4211-4680-b408-cf7e4717d723" containerID="79abdf873765c2bac4b6be09b3097a159efa4814c1dd0b60e7c529c776c0bbbe" exitCode=0 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.771570 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f4ecaeda-4211-4680-b408-cf7e4717d723","Type":"ContainerDied","Data":"79abdf873765c2bac4b6be09b3097a159efa4814c1dd0b60e7c529c776c0bbbe"} Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.775378 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a81ba3d-1493-421c-b0f8-40a16ed8cec8-scripts" (OuterVolumeSpecName: "scripts") pod "3a81ba3d-1493-421c-b0f8-40a16ed8cec8" (UID: "3a81ba3d-1493-421c-b0f8-40a16ed8cec8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.776275 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12c05c26-e0c2-4516-9fa6-8dc8779d1430-config" (OuterVolumeSpecName: "config") pod "12c05c26-e0c2-4516-9fa6-8dc8779d1430" (UID: "12c05c26-e0c2-4516-9fa6-8dc8779d1430"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.776534 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12c05c26-e0c2-4516-9fa6-8dc8779d1430-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "12c05c26-e0c2-4516-9fa6-8dc8779d1430" (UID: "12c05c26-e0c2-4516-9fa6-8dc8779d1430"). InnerVolumeSpecName "ovs-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.781086 4893 generic.go:334] "Generic (PLEG): container finished" podID="d547505a-34d0-4645-9690-74df58728a46" containerID="b6818068f64d6e4a2339a2f96e8276b4f2df53df750ab887db8f0527fe791e58" exitCode=143 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.781157 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-54745b6874-xnbrr" event={"ID":"d547505a-34d0-4645-9690-74df58728a46","Type":"ContainerDied","Data":"b6818068f64d6e4a2339a2f96e8276b4f2df53df750ab887db8f0527fe791e58"} Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.783768 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a81ba3d-1493-421c-b0f8-40a16ed8cec8-kube-api-access-2jplt" (OuterVolumeSpecName: "kube-api-access-2jplt") pod "3a81ba3d-1493-421c-b0f8-40a16ed8cec8" (UID: "3a81ba3d-1493-421c-b0f8-40a16ed8cec8"). InnerVolumeSpecName "kube-api-access-2jplt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.785545 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.787050 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-5795cc4cb5-6bsp7" podUID="2caca0fd-0f3f-4725-a196-04463abed671" containerName="proxy-httpd" containerID="cri-o://18e9d45b37e8d84945f0132ccb26b8b828ad2ef4ebd71d0f862ce04dc0922db6" gracePeriod=30 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.787822 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fcd6f8f8f-ghmq8" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.788347 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-ghmq8" event={"ID":"482b048f-92a3-485c-be9b-cc4d4bea116f","Type":"ContainerDied","Data":"a6803afe47d994749b2500b06ed246d8a11ed740344d3c1936e7c0837e5f3975"} Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.788663 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-5795cc4cb5-6bsp7" podUID="2caca0fd-0f3f-4725-a196-04463abed671" containerName="proxy-server" containerID="cri-o://ebf33f7d57690c2e8c7fe0620ba29bb8deb01fa50964fb6ef7ca8c919172e1bf" gracePeriod=30 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.808636 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/482b048f-92a3-485c-be9b-cc4d4bea116f-kube-api-access-d9pxs" (OuterVolumeSpecName: "kube-api-access-d9pxs") pod "482b048f-92a3-485c-be9b-cc4d4bea116f" (UID: "482b048f-92a3-485c-be9b-cc4d4bea116f"). InnerVolumeSpecName "kube-api-access-d9pxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.814945 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-nzf4p"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.862867 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "3a81ba3d-1493-421c-b0f8-40a16ed8cec8" (UID: "3a81ba3d-1493-421c-b0f8-40a16ed8cec8"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.864183 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12c05c26-e0c2-4516-9fa6-8dc8779d1430-kube-api-access-k4shq" (OuterVolumeSpecName: "kube-api-access-k4shq") pod "12c05c26-e0c2-4516-9fa6-8dc8779d1430" (UID: "12c05c26-e0c2-4516-9fa6-8dc8779d1430"). InnerVolumeSpecName "kube-api-access-k4shq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.865820 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.866075 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="04e84192-2873-4f45-855d-d755d99e7946" containerName="nova-cell0-conductor-conductor" containerID="cri-o://2c1520ddf2448545568bfad1712f1cbe491d42f3fe5bd60c6b96dce8d4a01c86" gracePeriod=30 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.870997 4893 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/12c05c26-e0c2-4516-9fa6-8dc8779d1430-ovn-rundir\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.871023 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a81ba3d-1493-421c-b0f8-40a16ed8cec8-config\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.871032 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12c05c26-e0c2-4516-9fa6-8dc8779d1430-config\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.871042 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jplt\" (UniqueName: \"kubernetes.io/projected/3a81ba3d-1493-421c-b0f8-40a16ed8cec8-kube-api-access-2jplt\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.871055 4893 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3a81ba3d-1493-421c-b0f8-40a16ed8cec8-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.871064 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9pxs\" (UniqueName: \"kubernetes.io/projected/482b048f-92a3-485c-be9b-cc4d4bea116f-kube-api-access-d9pxs\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.871072 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4shq\" (UniqueName: \"kubernetes.io/projected/12c05c26-e0c2-4516-9fa6-8dc8779d1430-kube-api-access-k4shq\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.871080 4893 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/12c05c26-e0c2-4516-9fa6-8dc8779d1430-ovs-rundir\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.871106 4893 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.888807 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a81ba3d-1493-421c-b0f8-40a16ed8cec8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a81ba3d-1493-421c-b0f8-40a16ed8cec8" (UID: "3a81ba3d-1493-421c-b0f8-40a16ed8cec8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.896506 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.896780 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://b004bcda62aef9e8aee81239d327f2808f42d03c6caacf5809d4f355361f7480" gracePeriod=30 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.900636 4893 scope.go:117] "RemoveContainer" containerID="ec859e2f4b54ea1bd111ee0644b59235a785cef3079afdc21c159f2f62d9e0d3" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.902512 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/482b048f-92a3-485c-be9b-cc4d4bea116f-config" (OuterVolumeSpecName: "config") pod "482b048f-92a3-485c-be9b-cc4d4bea116f" (UID: "482b048f-92a3-485c-be9b-cc4d4bea116f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:22:19 crc kubenswrapper[4893]: E0121 07:22:19.902709 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec859e2f4b54ea1bd111ee0644b59235a785cef3079afdc21c159f2f62d9e0d3\": container with ID starting with ec859e2f4b54ea1bd111ee0644b59235a785cef3079afdc21c159f2f62d9e0d3 not found: ID does not exist" containerID="ec859e2f4b54ea1bd111ee0644b59235a785cef3079afdc21c159f2f62d9e0d3" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.902751 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec859e2f4b54ea1bd111ee0644b59235a785cef3079afdc21c159f2f62d9e0d3"} err="failed to get container status \"ec859e2f4b54ea1bd111ee0644b59235a785cef3079afdc21c159f2f62d9e0d3\": rpc error: code = NotFound desc = could not find container \"ec859e2f4b54ea1bd111ee0644b59235a785cef3079afdc21c159f2f62d9e0d3\": container with ID starting with ec859e2f4b54ea1bd111ee0644b59235a785cef3079afdc21c159f2f62d9e0d3 not found: ID does not exist" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.902779 4893 scope.go:117] "RemoveContainer" containerID="818e24da18a15406b68a06d7381ee70499ab296ff97acccc455e382c7291d203" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.906089 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-nzf4p"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.907830 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12c05c26-e0c2-4516-9fa6-8dc8779d1430-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "12c05c26-e0c2-4516-9fa6-8dc8779d1430" (UID: "12c05c26-e0c2-4516-9fa6-8dc8779d1430"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.920231 4893 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.931178 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-hjxfg"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.931272 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-hjxfg"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.934597 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/482b048f-92a3-485c-be9b-cc4d4bea116f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "482b048f-92a3-485c-be9b-cc4d4bea116f" (UID: "482b048f-92a3-485c-be9b-cc4d4bea116f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.939900 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/482b048f-92a3-485c-be9b-cc4d4bea116f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "482b048f-92a3-485c-be9b-cc4d4bea116f" (UID: "482b048f-92a3-485c-be9b-cc4d4bea116f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.941255 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-znqnp"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.956386 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.956727 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="740cac4e-ecd7-4752-9d29-4adb1a14577b" containerName="nova-api-log" containerID="cri-o://049e37b8b5a580dac1053dd24aa63d0528098d200b566e60ab78bd88f14de585" gracePeriod=30 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.956889 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="740cac4e-ecd7-4752-9d29-4adb1a14577b" containerName="nova-api-api" containerID="cri-o://31c581f13004a2f7815d44365eb034baed7c66ac483f7fa7c22317077d696c9a" gracePeriod=30 Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.975176 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/482b048f-92a3-485c-be9b-cc4d4bea116f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "482b048f-92a3-485c-be9b-cc4d4bea116f" (UID: "482b048f-92a3-485c-be9b-cc4d4bea116f"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.976106 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-znqnp"] Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.977438 4893 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/482b048f-92a3-485c-be9b-cc4d4bea116f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.977459 4893 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.977469 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/482b048f-92a3-485c-be9b-cc4d4bea116f-config\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.977478 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a81ba3d-1493-421c-b0f8-40a16ed8cec8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.977487 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12c05c26-e0c2-4516-9fa6-8dc8779d1430-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.977496 4893 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/482b048f-92a3-485c-be9b-cc4d4bea116f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.977504 4893 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/482b048f-92a3-485c-be9b-cc4d4bea116f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:19 crc kubenswrapper[4893]: I0121 07:22:19.989182 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-mzrmf"] Jan 21 07:22:19 crc kubenswrapper[4893]: E0121 07:22:19.989296 4893 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 21 07:22:19 crc kubenswrapper[4893]: E0121 07:22:19.989363 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fdb40d40-7926-424a-810d-3b6f77e1022f-config-data podName:fdb40d40-7926-424a-810d-3b6f77e1022f nodeName:}" failed. No retries permitted until 2026-01-21 07:22:20.489342799 +0000 UTC m=+1681.719688701 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/fdb40d40-7926-424a-810d-3b6f77e1022f-config-data") pod "rabbitmq-cell1-server-0" (UID: "fdb40d40-7926-424a-810d-3b6f77e1022f") : configmap "rabbitmq-cell1-config-data" not found Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.020050 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/482b048f-92a3-485c-be9b-cc4d4bea116f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "482b048f-92a3-485c-be9b-cc4d4bea116f" (UID: "482b048f-92a3-485c-be9b-cc4d4bea116f"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.026661 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12c05c26-e0c2-4516-9fa6-8dc8779d1430-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "12c05c26-e0c2-4516-9fa6-8dc8779d1430" (UID: "12c05c26-e0c2-4516-9fa6-8dc8779d1430"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.032848 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a81ba3d-1493-421c-b0f8-40a16ed8cec8-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "3a81ba3d-1493-421c-b0f8-40a16ed8cec8" (UID: "3a81ba3d-1493-421c-b0f8-40a16ed8cec8"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.033285 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.036025 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-mzrmf"] Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.044491 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="5b37865c-22cd-4288-b47b-ef9ef1f33646" containerName="galera" containerID="cri-o://7c6d4673c3549715ec53ab38c378a4c139ad12463137e1030d564c833b09d3f2" gracePeriod=30 Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.046700 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_68b3d1f1-4c78-4a98-afcb-a2db1753d676/ovsdbserver-nb/0.log" Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.046782 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.051035 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.051285 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="f7722b5d-ba92-4332-93c7-bc3aa9bfdb33" containerName="nova-cell1-conductor-conductor" containerID="cri-o://805ea082486a9771af6cebd7498e3962947faff7e48ac3cc9a7f4ffadd851b1a" gracePeriod=30 Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.072086 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.097362 4893 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/12c05c26-e0c2-4516-9fa6-8dc8779d1430-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.103461 4893 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a81ba3d-1493-421c-b0f8-40a16ed8cec8-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.103509 4893 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/482b048f-92a3-485c-be9b-cc4d4bea116f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.140143 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-jppdg"] Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.162601 4893 scope.go:117] "RemoveContainer" containerID="694ad6afb1b9ec0a80b08162277bfaec0d3f9842ddae8e178a429d664a54ec5c" Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.189609 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a81ba3d-1493-421c-b0f8-40a16ed8cec8-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "3a81ba3d-1493-421c-b0f8-40a16ed8cec8" (UID: "3a81ba3d-1493-421c-b0f8-40a16ed8cec8"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.208201 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-jppdg"] Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.208404 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"68b3d1f1-4c78-4a98-afcb-a2db1753d676\" (UID: \"68b3d1f1-4c78-4a98-afcb-a2db1753d676\") " Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.208495 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93e402d6-b354-4755-83c3-68e43e53c19b-combined-ca-bundle\") pod \"93e402d6-b354-4755-83c3-68e43e53c19b\" (UID: \"93e402d6-b354-4755-83c3-68e43e53c19b\") " Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.208523 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/68b3d1f1-4c78-4a98-afcb-a2db1753d676-scripts\") pod \"68b3d1f1-4c78-4a98-afcb-a2db1753d676\" (UID: \"68b3d1f1-4c78-4a98-afcb-a2db1753d676\") " Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.208540 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/68b3d1f1-4c78-4a98-afcb-a2db1753d676-ovsdbserver-nb-tls-certs\") pod \"68b3d1f1-4c78-4a98-afcb-a2db1753d676\" (UID: \"68b3d1f1-4c78-4a98-afcb-a2db1753d676\") " Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.208556 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsl9c\" (UniqueName: \"kubernetes.io/projected/93e402d6-b354-4755-83c3-68e43e53c19b-kube-api-access-tsl9c\") pod \"93e402d6-b354-4755-83c3-68e43e53c19b\" (UID: \"93e402d6-b354-4755-83c3-68e43e53c19b\") " Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.208585 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/93e402d6-b354-4755-83c3-68e43e53c19b-openstack-config\") pod \"93e402d6-b354-4755-83c3-68e43e53c19b\" (UID: \"93e402d6-b354-4755-83c3-68e43e53c19b\") " Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.208606 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/68b3d1f1-4c78-4a98-afcb-a2db1753d676-ovsdb-rundir\") pod \"68b3d1f1-4c78-4a98-afcb-a2db1753d676\" (UID: \"68b3d1f1-4c78-4a98-afcb-a2db1753d676\") " Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.208635 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rr4r\" (UniqueName: \"kubernetes.io/projected/68b3d1f1-4c78-4a98-afcb-a2db1753d676-kube-api-access-2rr4r\") pod \"68b3d1f1-4c78-4a98-afcb-a2db1753d676\" (UID: \"68b3d1f1-4c78-4a98-afcb-a2db1753d676\") " Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.208729 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/93e402d6-b354-4755-83c3-68e43e53c19b-openstack-config-secret\") pod \"93e402d6-b354-4755-83c3-68e43e53c19b\" (UID: \"93e402d6-b354-4755-83c3-68e43e53c19b\") " Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.208751 4893 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68b3d1f1-4c78-4a98-afcb-a2db1753d676-combined-ca-bundle\") pod \"68b3d1f1-4c78-4a98-afcb-a2db1753d676\" (UID: \"68b3d1f1-4c78-4a98-afcb-a2db1753d676\") " Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.208776 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68b3d1f1-4c78-4a98-afcb-a2db1753d676-config\") pod \"68b3d1f1-4c78-4a98-afcb-a2db1753d676\" (UID: \"68b3d1f1-4c78-4a98-afcb-a2db1753d676\") " Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.208797 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/68b3d1f1-4c78-4a98-afcb-a2db1753d676-metrics-certs-tls-certs\") pod \"68b3d1f1-4c78-4a98-afcb-a2db1753d676\" (UID: \"68b3d1f1-4c78-4a98-afcb-a2db1753d676\") " Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.209085 4893 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a81ba3d-1493-421c-b0f8-40a16ed8cec8-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:20 crc kubenswrapper[4893]: E0121 07:22:20.209173 4893 projected.go:263] Couldn't get secret openstack/swift-proxy-config-data: secret "swift-proxy-config-data" not found Jan 21 07:22:20 crc kubenswrapper[4893]: E0121 07:22:20.209186 4893 projected.go:263] Couldn't get secret openstack/swift-conf: secret "swift-conf" not found Jan 21 07:22:20 crc kubenswrapper[4893]: E0121 07:22:20.209196 4893 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 07:22:20 crc kubenswrapper[4893]: E0121 07:22:20.209208 4893 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-5795cc4cb5-6bsp7: [secret "swift-proxy-config-data" not found, secret "swift-conf" not found, configmap "swift-ring-files" not found] Jan 21 07:22:20 crc kubenswrapper[4893]: E0121 07:22:20.209247 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2caca0fd-0f3f-4725-a196-04463abed671-etc-swift podName:2caca0fd-0f3f-4725-a196-04463abed671 nodeName:}" failed. No retries permitted until 2026-01-21 07:22:22.20923364 +0000 UTC m=+1683.439579542 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/2caca0fd-0f3f-4725-a196-04463abed671-etc-swift") pod "swift-proxy-5795cc4cb5-6bsp7" (UID: "2caca0fd-0f3f-4725-a196-04463abed671") : [secret "swift-proxy-config-data" not found, secret "swift-conf" not found, configmap "swift-ring-files" not found] Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.210348 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="89f70f50-3d66-4917-bfe2-1084a55e4eb9" containerName="rabbitmq" containerID="cri-o://f5932c3efdbfd6885c93667c024f13d35e7e8335300761d7cf6bcc9553b87aaa" gracePeriod=604800 Jan 21 07:22:20 crc kubenswrapper[4893]: E0121 07:22:20.211283 4893 secret.go:188] Couldn't get secret openstack/swift-proxy-config-data: secret "swift-proxy-config-data" not found Jan 21 07:22:20 crc kubenswrapper[4893]: E0121 07:22:20.211373 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2caca0fd-0f3f-4725-a196-04463abed671-config-data podName:2caca0fd-0f3f-4725-a196-04463abed671 nodeName:}" failed. No retries permitted until 2026-01-21 07:22:22.211351961 +0000 UTC m=+1683.441697863 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/2caca0fd-0f3f-4725-a196-04463abed671-config-data") pod "swift-proxy-5795cc4cb5-6bsp7" (UID: "2caca0fd-0f3f-4725-a196-04463abed671") : secret "swift-proxy-config-data" not found Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.212131 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68b3d1f1-4c78-4a98-afcb-a2db1753d676-config" (OuterVolumeSpecName: "config") pod "68b3d1f1-4c78-4a98-afcb-a2db1753d676" (UID: "68b3d1f1-4c78-4a98-afcb-a2db1753d676"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.212696 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68b3d1f1-4c78-4a98-afcb-a2db1753d676-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "68b3d1f1-4c78-4a98-afcb-a2db1753d676" (UID: "68b3d1f1-4c78-4a98-afcb-a2db1753d676"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.213500 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68b3d1f1-4c78-4a98-afcb-a2db1753d676-scripts" (OuterVolumeSpecName: "scripts") pod "68b3d1f1-4c78-4a98-afcb-a2db1753d676" (UID: "68b3d1f1-4c78-4a98-afcb-a2db1753d676"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.221255 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93e402d6-b354-4755-83c3-68e43e53c19b-kube-api-access-tsl9c" (OuterVolumeSpecName: "kube-api-access-tsl9c") pod "93e402d6-b354-4755-83c3-68e43e53c19b" (UID: "93e402d6-b354-4755-83c3-68e43e53c19b"). InnerVolumeSpecName "kube-api-access-tsl9c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.242083 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68b3d1f1-4c78-4a98-afcb-a2db1753d676-kube-api-access-2rr4r" (OuterVolumeSpecName: "kube-api-access-2rr4r") pod "68b3d1f1-4c78-4a98-afcb-a2db1753d676" (UID: "68b3d1f1-4c78-4a98-afcb-a2db1753d676"). InnerVolumeSpecName "kube-api-access-2rr4r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.243527 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "68b3d1f1-4c78-4a98-afcb-a2db1753d676" (UID: "68b3d1f1-4c78-4a98-afcb-a2db1753d676"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.250450 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-d4ab-account-create-update-6nbg8"] Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.256101 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93e402d6-b354-4755-83c3-68e43e53c19b-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "93e402d6-b354-4755-83c3-68e43e53c19b" (UID: "93e402d6-b354-4755-83c3-68e43e53c19b"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.355444 4893 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/93e402d6-b354-4755-83c3-68e43e53c19b-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.355499 4893 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/68b3d1f1-4c78-4a98-afcb-a2db1753d676-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.355509 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rr4r\" (UniqueName: \"kubernetes.io/projected/68b3d1f1-4c78-4a98-afcb-a2db1753d676-kube-api-access-2rr4r\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.355519 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68b3d1f1-4c78-4a98-afcb-a2db1753d676-config\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.355546 4893 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.355556 4893 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/68b3d1f1-4c78-4a98-afcb-a2db1753d676-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.355565 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsl9c\" (UniqueName: \"kubernetes.io/projected/93e402d6-b354-4755-83c3-68e43e53c19b-kube-api-access-tsl9c\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.363634 4893 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/secret/68b3d1f1-4c78-4a98-afcb-a2db1753d676-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "68b3d1f1-4c78-4a98-afcb-a2db1753d676" (UID: "68b3d1f1-4c78-4a98-afcb-a2db1753d676"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.372591 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-j8ttn"] Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.397868 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93e402d6-b354-4755-83c3-68e43e53c19b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "93e402d6-b354-4755-83c3-68e43e53c19b" (UID: "93e402d6-b354-4755-83c3-68e43e53c19b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.404577 4893 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Jan 21 07:22:20 crc kubenswrapper[4893]: E0121 07:22:20.433798 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ee2ac52ca03e9ba8604209edd0e24ede0af7849c83ac6195ee87a7943fa359b3 is running failed: container process not found" containerID="ee2ac52ca03e9ba8604209edd0e24ede0af7849c83ac6195ee87a7943fa359b3" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.437997 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68b3d1f1-4c78-4a98-afcb-a2db1753d676-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "68b3d1f1-4c78-4a98-afcb-a2db1753d676" (UID: "68b3d1f1-4c78-4a98-afcb-a2db1753d676"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:20 crc kubenswrapper[4893]: E0121 07:22:20.450546 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ee2ac52ca03e9ba8604209edd0e24ede0af7849c83ac6195ee87a7943fa359b3 is running failed: container process not found" containerID="ee2ac52ca03e9ba8604209edd0e24ede0af7849c83ac6195ee87a7943fa359b3" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 07:22:20 crc kubenswrapper[4893]: E0121 07:22:20.450819 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="641f45d881156d21fd8815cd5b5efbac82f8de33d1a526e07cb2065a85cb4351" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 07:22:20 crc kubenswrapper[4893]: E0121 07:22:20.451995 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ee2ac52ca03e9ba8604209edd0e24ede0af7849c83ac6195ee87a7943fa359b3 is running failed: container process not found" containerID="ee2ac52ca03e9ba8604209edd0e24ede0af7849c83ac6195ee87a7943fa359b3" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 07:22:20 crc kubenswrapper[4893]: E0121 07:22:20.452030 4893 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ee2ac52ca03e9ba8604209edd0e24ede0af7849c83ac6195ee87a7943fa359b3 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-zvt96" podUID="78d5f974-5570-4407-8dbe-7471ae98fd50" containerName="ovsdb-server" Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.480816 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93e402d6-b354-4755-83c3-68e43e53c19b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.480863 4893 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/68b3d1f1-4c78-4a98-afcb-a2db1753d676-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.480875 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68b3d1f1-4c78-4a98-afcb-a2db1753d676-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.480886 4893 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:20 crc kubenswrapper[4893]: E0121 07:22:20.481834 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="641f45d881156d21fd8815cd5b5efbac82f8de33d1a526e07cb2065a85cb4351" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.484431 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93e402d6-b354-4755-83c3-68e43e53c19b-openstack-config-secret" 
(OuterVolumeSpecName: "openstack-config-secret") pod "93e402d6-b354-4755-83c3-68e43e53c19b" (UID: "93e402d6-b354-4755-83c3-68e43e53c19b"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.488741 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 07:22:20 crc kubenswrapper[4893]: E0121 07:22:20.499825 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="641f45d881156d21fd8815cd5b5efbac82f8de33d1a526e07cb2065a85cb4351" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 07:22:20 crc kubenswrapper[4893]: E0121 07:22:20.499902 4893 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-zvt96" podUID="78d5f974-5570-4407-8dbe-7471ae98fd50" containerName="ovs-vswitchd" Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.508457 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68b3d1f1-4c78-4a98-afcb-a2db1753d676-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "68b3d1f1-4c78-4a98-afcb-a2db1753d676" (UID: "68b3d1f1-4c78-4a98-afcb-a2db1753d676"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.513810 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-7s4fm"] Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.522750 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-7s4fm"] Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.532719 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-ghmq8"] Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.540720 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-ghmq8"] Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.549726 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-5653-account-create-update-ktlq8"] Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.559081 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-j8ttn"] Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.570939 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-2981-account-create-update-v76jn"] Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.577000 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-eb0d-account-create-update-skmxs"] Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.583018 4893 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/93e402d6-b354-4755-83c3-68e43e53c19b-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.583049 4893 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/68b3d1f1-4c78-4a98-afcb-a2db1753d676-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" 
Jan 21 07:22:20 crc kubenswrapper[4893]: E0121 07:22:20.583113 4893 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 21 07:22:20 crc kubenswrapper[4893]: E0121 07:22:20.583171 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fdb40d40-7926-424a-810d-3b6f77e1022f-config-data podName:fdb40d40-7926-424a-810d-3b6f77e1022f nodeName:}" failed. No retries permitted until 2026-01-21 07:22:21.583149693 +0000 UTC m=+1682.813495595 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/fdb40d40-7926-424a-810d-3b6f77e1022f-config-data") pod "rabbitmq-cell1-server-0" (UID: "fdb40d40-7926-424a-810d-3b6f77e1022f") : configmap "rabbitmq-cell1-config-data" not found Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.583200 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-22df-account-create-update-mpgzf"] Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.593257 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-d3cd-account-create-update-7l8fv"] Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.601031 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-d4ab-account-create-update-6nbg8"] Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.631858 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.634272 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.634687 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 21 07:22:20 crc kubenswrapper[4893]: E0121 07:22:20.638821 4893 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 21 07:22:20 crc kubenswrapper[4893]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 21 07:22:20 crc kubenswrapper[4893]: Jan 21 07:22:20 crc kubenswrapper[4893]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 21 07:22:20 crc kubenswrapper[4893]: Jan 21 07:22:20 crc kubenswrapper[4893]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 21 07:22:20 crc kubenswrapper[4893]: Jan 21 07:22:20 crc kubenswrapper[4893]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 21 07:22:20 crc kubenswrapper[4893]: Jan 21 07:22:20 crc kubenswrapper[4893]: if [ -n "barbican" ]; then Jan 21 07:22:20 crc kubenswrapper[4893]: GRANT_DATABASE="barbican" Jan 21 07:22:20 crc kubenswrapper[4893]: else Jan 21 07:22:20 crc kubenswrapper[4893]: GRANT_DATABASE="*" Jan 21 07:22:20 crc kubenswrapper[4893]: fi Jan 21 07:22:20 crc kubenswrapper[4893]: Jan 21 07:22:20 crc kubenswrapper[4893]: # going for maximum compatibility here: Jan 21 07:22:20 crc kubenswrapper[4893]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 21 07:22:20 crc kubenswrapper[4893]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 21 07:22:20 crc kubenswrapper[4893]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 21 07:22:20 crc kubenswrapper[4893]: # support updates Jan 21 07:22:20 crc kubenswrapper[4893]: Jan 21 07:22:20 crc kubenswrapper[4893]: $MYSQL_CMD < logger="UnhandledError" Jan 21 07:22:20 crc kubenswrapper[4893]: E0121 07:22:20.639397 4893 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 21 07:22:20 crc kubenswrapper[4893]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 21 07:22:20 crc kubenswrapper[4893]: Jan 21 07:22:20 crc kubenswrapper[4893]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 21 07:22:20 crc kubenswrapper[4893]: Jan 21 07:22:20 crc kubenswrapper[4893]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 21 07:22:20 crc kubenswrapper[4893]: Jan 21 07:22:20 crc kubenswrapper[4893]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 21 07:22:20 crc kubenswrapper[4893]: Jan 21 07:22:20 crc kubenswrapper[4893]: if [ -n "" ]; then Jan 21 07:22:20 crc kubenswrapper[4893]: GRANT_DATABASE="" Jan 21 07:22:20 crc kubenswrapper[4893]: else Jan 21 07:22:20 crc kubenswrapper[4893]: GRANT_DATABASE="*" Jan 21 07:22:20 crc kubenswrapper[4893]: fi Jan 21 07:22:20 crc kubenswrapper[4893]: Jan 21 07:22:20 crc kubenswrapper[4893]: # going for maximum compatibility here: Jan 21 07:22:20 crc kubenswrapper[4893]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 21 07:22:20 crc kubenswrapper[4893]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 21 07:22:20 crc kubenswrapper[4893]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 21 07:22:20 crc kubenswrapper[4893]: # support updates Jan 21 07:22:20 crc kubenswrapper[4893]: Jan 21 07:22:20 crc kubenswrapper[4893]: $MYSQL_CMD < logger="UnhandledError" Jan 21 07:22:20 crc kubenswrapper[4893]: E0121 07:22:20.640229 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"barbican-db-secret\\\" not found\"" pod="openstack/barbican-5653-account-create-update-ktlq8" podUID="fa419127-b439-47a0-9b9c-535529d4f7d9" Jan 21 07:22:20 crc kubenswrapper[4893]: E0121 07:22:20.641274 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-cell1-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-j8ttn" podUID="bcd1e197-57ed-4f7c-8be7-b59d0d3e08dc" Jan 21 07:22:20 crc kubenswrapper[4893]: E0121 07:22:20.645856 4893 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 21 07:22:20 crc kubenswrapper[4893]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 21 07:22:20 crc kubenswrapper[4893]: Jan 21 07:22:20 crc kubenswrapper[4893]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 21 07:22:20 crc kubenswrapper[4893]: Jan 21 07:22:20 crc kubenswrapper[4893]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 21 07:22:20 crc kubenswrapper[4893]: Jan 21 07:22:20 crc 
kubenswrapper[4893]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 21 07:22:20 crc kubenswrapper[4893]: Jan 21 07:22:20 crc kubenswrapper[4893]: if [ -n "placement" ]; then Jan 21 07:22:20 crc kubenswrapper[4893]: GRANT_DATABASE="placement" Jan 21 07:22:20 crc kubenswrapper[4893]: else Jan 21 07:22:20 crc kubenswrapper[4893]: GRANT_DATABASE="*" Jan 21 07:22:20 crc kubenswrapper[4893]: fi Jan 21 07:22:20 crc kubenswrapper[4893]: Jan 21 07:22:20 crc kubenswrapper[4893]: # going for maximum compatibility here: Jan 21 07:22:20 crc kubenswrapper[4893]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 21 07:22:20 crc kubenswrapper[4893]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 21 07:22:20 crc kubenswrapper[4893]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 21 07:22:20 crc kubenswrapper[4893]: # support updates Jan 21 07:22:20 crc kubenswrapper[4893]: Jan 21 07:22:20 crc kubenswrapper[4893]: $MYSQL_CMD < logger="UnhandledError" Jan 21 07:22:20 crc kubenswrapper[4893]: E0121 07:22:20.649405 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"placement-db-secret\\\" not found\"" pod="openstack/placement-d3cd-account-create-update-7l8fv" podUID="cac277e8-b27c-4412-b25e-0c988c2e5555" Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.697378 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.698695 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.698965 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 21 07:22:20 crc kubenswrapper[4893]: E0121 07:22:20.706264 4893 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 21 07:22:20 crc kubenswrapper[4893]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 21 07:22:20 crc kubenswrapper[4893]: Jan 21 07:22:20 crc kubenswrapper[4893]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 21 07:22:20 crc kubenswrapper[4893]: Jan 21 07:22:20 crc kubenswrapper[4893]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 21 07:22:20 crc kubenswrapper[4893]: Jan 21 07:22:20 crc kubenswrapper[4893]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 21 07:22:20 crc kubenswrapper[4893]: Jan 21 07:22:20 crc kubenswrapper[4893]: if [ -n "neutron" ]; then Jan 21 07:22:20 crc kubenswrapper[4893]: GRANT_DATABASE="neutron" Jan 21 07:22:20 crc kubenswrapper[4893]: else Jan 21 07:22:20 crc kubenswrapper[4893]: GRANT_DATABASE="*" Jan 21 07:22:20 crc kubenswrapper[4893]: fi Jan 21 07:22:20 crc kubenswrapper[4893]: Jan 21 07:22:20 crc kubenswrapper[4893]: # going for maximum compatibility here: Jan 21 07:22:20 crc kubenswrapper[4893]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 21 07:22:20 crc kubenswrapper[4893]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 21 07:22:20 crc kubenswrapper[4893]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 21 07:22:20 crc kubenswrapper[4893]: # support updates Jan 21 07:22:20 crc kubenswrapper[4893]: Jan 21 07:22:20 crc kubenswrapper[4893]: $MYSQL_CMD < logger="UnhandledError" Jan 21 07:22:20 crc kubenswrapper[4893]: E0121 07:22:20.706361 4893 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 21 07:22:20 crc kubenswrapper[4893]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 21 07:22:20 crc kubenswrapper[4893]: Jan 21 07:22:20 crc kubenswrapper[4893]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 21 07:22:20 crc kubenswrapper[4893]: Jan 21 07:22:20 crc kubenswrapper[4893]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 21 07:22:20 crc kubenswrapper[4893]: Jan 21 07:22:20 crc kubenswrapper[4893]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 21 07:22:20 crc kubenswrapper[4893]: Jan 21 07:22:20 crc kubenswrapper[4893]: if [ -n "glance" ]; then Jan 21 07:22:20 crc kubenswrapper[4893]: GRANT_DATABASE="glance" Jan 21 07:22:20 crc kubenswrapper[4893]: else Jan 21 07:22:20 crc kubenswrapper[4893]: GRANT_DATABASE="*" Jan 21 07:22:20 crc kubenswrapper[4893]: fi Jan 21 07:22:20 crc kubenswrapper[4893]: Jan 21 07:22:20 crc kubenswrapper[4893]: # going for maximum compatibility here: Jan 21 07:22:20 crc kubenswrapper[4893]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 21 07:22:20 crc kubenswrapper[4893]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 21 07:22:20 crc kubenswrapper[4893]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 21 07:22:20 crc kubenswrapper[4893]: # support updates Jan 21 07:22:20 crc kubenswrapper[4893]: Jan 21 07:22:20 crc kubenswrapper[4893]: $MYSQL_CMD < logger="UnhandledError" Jan 21 07:22:20 crc kubenswrapper[4893]: E0121 07:22:20.706447 4893 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 21 07:22:20 crc kubenswrapper[4893]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 21 07:22:20 crc kubenswrapper[4893]: Jan 21 07:22:20 crc kubenswrapper[4893]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 21 07:22:20 crc kubenswrapper[4893]: Jan 21 07:22:20 crc kubenswrapper[4893]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 21 07:22:20 crc kubenswrapper[4893]: Jan 21 07:22:20 crc kubenswrapper[4893]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 21 07:22:20 crc kubenswrapper[4893]: Jan 21 07:22:20 crc kubenswrapper[4893]: if [ -n "cinder" ]; then Jan 21 07:22:20 crc kubenswrapper[4893]: GRANT_DATABASE="cinder" Jan 21 07:22:20 crc kubenswrapper[4893]: else Jan 21 07:22:20 crc kubenswrapper[4893]: GRANT_DATABASE="*" Jan 21 07:22:20 crc kubenswrapper[4893]: fi Jan 21 07:22:20 crc kubenswrapper[4893]: Jan 21 07:22:20 crc kubenswrapper[4893]: # going for maximum compatibility here: Jan 21 07:22:20 crc kubenswrapper[4893]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 21 07:22:20 crc kubenswrapper[4893]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 21 07:22:20 crc kubenswrapper[4893]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 21 07:22:20 crc kubenswrapper[4893]: # support updates Jan 21 07:22:20 crc kubenswrapper[4893]: Jan 21 07:22:20 crc kubenswrapper[4893]: $MYSQL_CMD < logger="UnhandledError" Jan 21 07:22:20 crc kubenswrapper[4893]: E0121 07:22:20.707961 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"glance-db-secret\\\" not found\"" pod="openstack/glance-eb0d-account-create-update-skmxs" podUID="d8ccfdb6-50d2-4718-a4d4-20366e02f93f" Jan 21 07:22:20 crc kubenswrapper[4893]: E0121 07:22:20.707971 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"cinder-db-secret\\\" not found\"" pod="openstack/cinder-2981-account-create-update-v76jn" podUID="1236d7dc-6a98-4d59-8a88-f3101bd017ef" Jan 21 07:22:20 crc kubenswrapper[4893]: E0121 07:22:20.708031 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"neutron-db-secret\\\" not found\"" pod="openstack/neutron-22df-account-create-update-mpgzf" podUID="0622bcb2-e8ab-4e4b-a33f-64e48320b232" Jan 21 07:22:20 crc kubenswrapper[4893]: E0121 07:22:20.783062 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2c1520ddf2448545568bfad1712f1cbe491d42f3fe5bd60c6b96dce8d4a01c86" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 21 07:22:20 crc kubenswrapper[4893]: E0121 07:22:20.801916 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2c1520ddf2448545568bfad1712f1cbe491d42f3fe5bd60c6b96dce8d4a01c86" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 21 07:22:20 crc kubenswrapper[4893]: E0121 07:22:20.818396 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2c1520ddf2448545568bfad1712f1cbe491d42f3fe5bd60c6b96dce8d4a01c86" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 21 07:22:20 crc kubenswrapper[4893]: E0121 07:22:20.818478 4893 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="04e84192-2873-4f45-855d-d755d99e7946" containerName="nova-cell0-conductor-conductor" Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.826044 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-84df8fdfdb-8dxsk" event={"ID":"96d1b606-34ad-4e36-ad61-6db3b4a7c3e1","Type":"ContainerDied","Data":"186a1d1fbbe587858b7d65a3e3d819601c2b15e5f6afb9d61e13a1623b7c2cf4"} Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.825655 4893 generic.go:334] "Generic (PLEG): container finished" podID="96d1b606-34ad-4e36-ad61-6db3b4a7c3e1" containerID="186a1d1fbbe587858b7d65a3e3d819601c2b15e5f6afb9d61e13a1623b7c2cf4" exitCode=0 Jan 21 07:22:20 crc 
kubenswrapper[4893]: I0121 07:22:20.826183 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-84df8fdfdb-8dxsk" event={"ID":"96d1b606-34ad-4e36-ad61-6db3b4a7c3e1","Type":"ContainerDied","Data":"911526d6926efcd3de4bef0ee4d5862491c677b9a9b4639aa131893753ece29e"} Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.826205 4893 generic.go:334] "Generic (PLEG): container finished" podID="96d1b606-34ad-4e36-ad61-6db3b4a7c3e1" containerID="911526d6926efcd3de4bef0ee4d5862491c677b9a9b4639aa131893753ece29e" exitCode=143 Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.829904 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-eb0d-account-create-update-skmxs" event={"ID":"d8ccfdb6-50d2-4718-a4d4-20366e02f93f","Type":"ContainerStarted","Data":"d8d58e67a67c9230ae61cf46fd32cddba5786e837cb8745d0c8f32acbaa89cf2"} Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.854473 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="89f70f50-3d66-4917-bfe2-1084a55e4eb9" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.103:5671: connect: connection refused" Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.865625 4893 generic.go:334] "Generic (PLEG): container finished" podID="d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6" containerID="e5a2c7e9e8afbc7e00e8fde7ad874e7b56174cc0c7a9869b437318952fda7126" exitCode=143 Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.865757 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6","Type":"ContainerDied","Data":"e5a2c7e9e8afbc7e00e8fde7ad874e7b56174cc0c7a9869b437318952fda7126"} Jan 21 07:22:20 crc kubenswrapper[4893]: W0121 07:22:20.905650 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod187a5c9a_e642_4826_8543_f53fd0789757.slice/crio-f1727d3dceafb15809e9cd2a8bdc7b267e08952db1a7092425588fb58ef1c698 WatchSource:0}: Error finding container f1727d3dceafb15809e9cd2a8bdc7b267e08952db1a7092425588fb58ef1c698: Status 404 returned error can't find the container with id f1727d3dceafb15809e9cd2a8bdc7b267e08952db1a7092425588fb58ef1c698 Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.906652 4893 generic.go:334] "Generic (PLEG): container finished" podID="78d5f974-5570-4407-8dbe-7471ae98fd50" containerID="ee2ac52ca03e9ba8604209edd0e24ede0af7849c83ac6195ee87a7943fa359b3" exitCode=0 Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.906765 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zvt96" event={"ID":"78d5f974-5570-4407-8dbe-7471ae98fd50","Type":"ContainerDied","Data":"ee2ac52ca03e9ba8604209edd0e24ede0af7849c83ac6195ee87a7943fa359b3"} Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.918622 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 21 07:22:20 crc kubenswrapper[4893]: E0121 07:22:20.927223 4893 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 21 07:22:20 crc kubenswrapper[4893]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 21 07:22:20 crc kubenswrapper[4893]: Jan 21 07:22:20 crc kubenswrapper[4893]: MYSQL_REMOTE_HOST="" source 
/var/lib/operator-scripts/mysql_root_auth.sh Jan 21 07:22:20 crc kubenswrapper[4893]: Jan 21 07:22:20 crc kubenswrapper[4893]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 21 07:22:20 crc kubenswrapper[4893]: Jan 21 07:22:20 crc kubenswrapper[4893]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 21 07:22:20 crc kubenswrapper[4893]: Jan 21 07:22:20 crc kubenswrapper[4893]: if [ -n "nova_api" ]; then Jan 21 07:22:20 crc kubenswrapper[4893]: GRANT_DATABASE="nova_api" Jan 21 07:22:20 crc kubenswrapper[4893]: else Jan 21 07:22:20 crc kubenswrapper[4893]: GRANT_DATABASE="*" Jan 21 07:22:20 crc kubenswrapper[4893]: fi Jan 21 07:22:20 crc kubenswrapper[4893]: Jan 21 07:22:20 crc kubenswrapper[4893]: # going for maximum compatibility here: Jan 21 07:22:20 crc kubenswrapper[4893]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 21 07:22:20 crc kubenswrapper[4893]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 21 07:22:20 crc kubenswrapper[4893]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 21 07:22:20 crc kubenswrapper[4893]: # support updates Jan 21 07:22:20 crc kubenswrapper[4893]: Jan 21 07:22:20 crc kubenswrapper[4893]: $MYSQL_CMD < logger="UnhandledError" Jan 21 07:22:20 crc kubenswrapper[4893]: E0121 07:22:20.929610 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-api-db-secret\\\" not found\"" pod="openstack/nova-api-d4ab-account-create-update-6nbg8" podUID="187a5c9a-e642-4826-8543-f53fd0789757" Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.931913 4893 generic.go:334] "Generic (PLEG): container finished" podID="740cac4e-ecd7-4752-9d29-4adb1a14577b" containerID="049e37b8b5a580dac1053dd24aa63d0528098d200b566e60ab78bd88f14de585" exitCode=143 Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.931979 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"740cac4e-ecd7-4752-9d29-4adb1a14577b","Type":"ContainerDied","Data":"049e37b8b5a580dac1053dd24aa63d0528098d200b566e60ab78bd88f14de585"} Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.935151 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5653-account-create-update-ktlq8" event={"ID":"fa419127-b439-47a0-9b9c-535529d4f7d9","Type":"ContainerStarted","Data":"23ce942b19d45c58c6bfdd048352e07362b6dfb026ef1c8d24ee462c695d997a"} Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.949101 4893 generic.go:334] "Generic (PLEG): container finished" podID="4c20f882-3bde-49a2-857e-207fe47d5aae" containerID="87c38972a6e91adfc22b0f243c62624ce591c7a3c511e5aad78412c1db488300" exitCode=0 Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.949209 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6bd56d5cbf-gkdlb" event={"ID":"4c20f882-3bde-49a2-857e-207fe47d5aae","Type":"ContainerDied","Data":"87c38972a6e91adfc22b0f243c62624ce591c7a3c511e5aad78412c1db488300"} Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.994028 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 07:22:20 crc kubenswrapper[4893]: I0121 07:22:20.995309 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d3cd-account-create-update-7l8fv" event={"ID":"cac277e8-b27c-4412-b25e-0c988c2e5555","Type":"ContainerStarted","Data":"74bc98aeee7ea889f856ffaa6455514c991316622c27dfb2061a76ac1456fd15"} Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.006171 4893 generic.go:334] "Generic (PLEG): container finished" podID="f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6" containerID="b004bcda62aef9e8aee81239d327f2808f42d03c6caacf5809d4f355361f7480" exitCode=0 Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.006257 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6","Type":"ContainerDied","Data":"b004bcda62aef9e8aee81239d327f2808f42d03c6caacf5809d4f355361f7480"} Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.007940 4893 scope.go:117] "RemoveContainer" containerID="8efe8fb8b75568eba645314bce31b548eb596cda1bd127a11deb8d7d4c539845" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.008082 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.016300 4893 generic.go:334] "Generic (PLEG): container finished" podID="133bbed0-7073-43ad-881b-893cf8529bb2" containerID="07919e653d69657ea7b011e6891aec998b0e961f74741efc99381bb2776ca73d" exitCode=0 Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.016332 4893 generic.go:334] "Generic (PLEG): container finished" podID="133bbed0-7073-43ad-881b-893cf8529bb2" containerID="2c2e4963838a51923436692bec77d73dc438b926f3c5c0edc268cb6c72480f66" exitCode=0 Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.016381 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-577cb64ffc-m6fkr" event={"ID":"133bbed0-7073-43ad-881b-893cf8529bb2","Type":"ContainerDied","Data":"07919e653d69657ea7b011e6891aec998b0e961f74741efc99381bb2776ca73d"} Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.016412 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-577cb64ffc-m6fkr" event={"ID":"133bbed0-7073-43ad-881b-893cf8529bb2","Type":"ContainerDied","Data":"2c2e4963838a51923436692bec77d73dc438b926f3c5c0edc268cb6c72480f66"} Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.017176 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-pvg9n"] Jan 21 07:22:21 crc kubenswrapper[4893]: E0121 07:22:21.017633 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68b3d1f1-4c78-4a98-afcb-a2db1753d676" containerName="openstack-network-exporter" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.020379 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="68b3d1f1-4c78-4a98-afcb-a2db1753d676" containerName="openstack-network-exporter" Jan 21 07:22:21 crc kubenswrapper[4893]: E0121 07:22:21.020407 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a81ba3d-1493-421c-b0f8-40a16ed8cec8" containerName="openstack-network-exporter" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.020414 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a81ba3d-1493-421c-b0f8-40a16ed8cec8" containerName="openstack-network-exporter" Jan 21 07:22:21 crc kubenswrapper[4893]: E0121 07:22:21.020441 4893 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="f4ecaeda-4211-4680-b408-cf7e4717d723" containerName="cinder-scheduler" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.020447 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4ecaeda-4211-4680-b408-cf7e4717d723" containerName="cinder-scheduler" Jan 21 07:22:21 crc kubenswrapper[4893]: E0121 07:22:21.020459 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="482b048f-92a3-485c-be9b-cc4d4bea116f" containerName="init" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.020465 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="482b048f-92a3-485c-be9b-cc4d4bea116f" containerName="init" Jan 21 07:22:21 crc kubenswrapper[4893]: E0121 07:22:21.020487 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a81ba3d-1493-421c-b0f8-40a16ed8cec8" containerName="ovsdbserver-sb" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.020492 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a81ba3d-1493-421c-b0f8-40a16ed8cec8" containerName="ovsdbserver-sb" Jan 21 07:22:21 crc kubenswrapper[4893]: E0121 07:22:21.020514 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4ecaeda-4211-4680-b408-cf7e4717d723" containerName="probe" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.020520 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4ecaeda-4211-4680-b408-cf7e4717d723" containerName="probe" Jan 21 07:22:21 crc kubenswrapper[4893]: E0121 07:22:21.020533 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="482b048f-92a3-485c-be9b-cc4d4bea116f" containerName="dnsmasq-dns" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.020539 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="482b048f-92a3-485c-be9b-cc4d4bea116f" containerName="dnsmasq-dns" Jan 21 07:22:21 crc kubenswrapper[4893]: E0121 07:22:21.020572 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12c05c26-e0c2-4516-9fa6-8dc8779d1430" containerName="openstack-network-exporter" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.020578 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="12c05c26-e0c2-4516-9fa6-8dc8779d1430" containerName="openstack-network-exporter" Jan 21 07:22:21 crc kubenswrapper[4893]: E0121 07:22:21.020594 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68b3d1f1-4c78-4a98-afcb-a2db1753d676" containerName="ovsdbserver-nb" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.020600 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="68b3d1f1-4c78-4a98-afcb-a2db1753d676" containerName="ovsdbserver-nb" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.020864 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="12c05c26-e0c2-4516-9fa6-8dc8779d1430" containerName="openstack-network-exporter" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.020874 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4ecaeda-4211-4680-b408-cf7e4717d723" containerName="cinder-scheduler" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.020884 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a81ba3d-1493-421c-b0f8-40a16ed8cec8" containerName="openstack-network-exporter" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.020900 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a81ba3d-1493-421c-b0f8-40a16ed8cec8" containerName="ovsdbserver-sb" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.020908 4893 
memory_manager.go:354] "RemoveStaleState removing state" podUID="68b3d1f1-4c78-4a98-afcb-a2db1753d676" containerName="openstack-network-exporter" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.020927 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4ecaeda-4211-4680-b408-cf7e4717d723" containerName="probe" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.020938 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="482b048f-92a3-485c-be9b-cc4d4bea116f" containerName="dnsmasq-dns" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.020950 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="68b3d1f1-4c78-4a98-afcb-a2db1753d676" containerName="ovsdbserver-nb" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.029375 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-pvg9n" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.037333 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.039893 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-j8ttn" event={"ID":"bcd1e197-57ed-4f7c-8be7-b59d0d3e08dc","Type":"ContainerStarted","Data":"eaea2786effedd164b24ef4b54289baff79c253ef8e82274c60d7012734701fc"} Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.054112 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-22df-account-create-update-mpgzf" event={"ID":"0622bcb2-e8ab-4e4b-a33f-64e48320b232","Type":"ContainerStarted","Data":"f870a2b7fc095515880f0a13f40cb9fe28e027b6800fece2efd4e24b992d398c"} Jan 21 07:22:21 crc kubenswrapper[4893]: E0121 07:22:21.057045 4893 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96d1b606_34ad_4e36_ad61_6db3b4a7c3e1.slice/crio-186a1d1fbbe587858b7d65a3e3d819601c2b15e5f6afb9d61e13a1623b7c2cf4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4a1a3b4_4137_4a6e_b8d3_20518f38a2d6.slice/crio-b004bcda62aef9e8aee81239d327f2808f42d03c6caacf5809d4f355361f7480.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4a1a3b4_4137_4a6e_b8d3_20518f38a2d6.slice/crio-conmon-b004bcda62aef9e8aee81239d327f2808f42d03c6caacf5809d4f355361f7480.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2caca0fd_0f3f_4725_a196_04463abed671.slice/crio-ebf33f7d57690c2e8c7fe0620ba29bb8deb01fa50964fb6ef7ca8c919172e1bf.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c20f882_3bde_49a2_857e_207fe47d5aae.slice/crio-conmon-87c38972a6e91adfc22b0f243c62624ce591c7a3c511e5aad78412c1db488300.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c20f882_3bde_49a2_857e_207fe47d5aae.slice/crio-87c38972a6e91adfc22b0f243c62624ce591c7a3c511e5aad78412c1db488300.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96d1b606_34ad_4e36_ad61_6db3b4a7c3e1.slice/crio-conmon-186a1d1fbbe587858b7d65a3e3d819601c2b15e5f6afb9d61e13a1623b7c2cf4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93e402d6_b354_4755_83c3_68e43e53c19b.slice/crio-c59b89b88484d4154638feb373806189e34c182d630c74cea3287f12a80483f3\": RecentStats: unable to find data in memory cache]" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.230275 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-pvg9n"] Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.239747 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-874pr\" (UniqueName: \"kubernetes.io/projected/f6a58a1a-1345-46f4-bb93-7b748440724a-kube-api-access-874pr\") pod \"root-account-create-update-pvg9n\" (UID: \"f6a58a1a-1345-46f4-bb93-7b748440724a\") " pod="openstack/root-account-create-update-pvg9n" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.240063 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6a58a1a-1345-46f4-bb93-7b748440724a-operator-scripts\") pod \"root-account-create-update-pvg9n\" (UID: \"f6a58a1a-1345-46f4-bb93-7b748440724a\") " pod="openstack/root-account-create-update-pvg9n" Jan 21 07:22:21 crc kubenswrapper[4893]: E0121 07:22:21.240314 4893 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 21 07:22:21 crc kubenswrapper[4893]: E0121 07:22:21.240390 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/89f70f50-3d66-4917-bfe2-1084a55e4eb9-config-data podName:89f70f50-3d66-4917-bfe2-1084a55e4eb9 nodeName:}" failed. No retries permitted until 2026-01-21 07:22:25.240367446 +0000 UTC m=+1686.470713348 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/89f70f50-3d66-4917-bfe2-1084a55e4eb9-config-data") pod "rabbitmq-server-0" (UID: "89f70f50-3d66-4917-bfe2-1084a55e4eb9") : configmap "rabbitmq-config-data" not found Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.242987 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_68b3d1f1-4c78-4a98-afcb-a2db1753d676/ovsdbserver-nb/0.log" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.243300 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.243413 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"68b3d1f1-4c78-4a98-afcb-a2db1753d676","Type":"ContainerDied","Data":"d7811524884608c187772891582746d02abbb30dda996fa538e08956e33be2a8"} Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.243459 4893 scope.go:117] "RemoveContainer" containerID="48c749ca430629f3f11cef033f3e9982760ac3bbfd06d3297b7dfe8227939b80" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.253095 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2981-account-create-update-v76jn" event={"ID":"1236d7dc-6a98-4d59-8a88-f3101bd017ef","Type":"ContainerStarted","Data":"d392526f25412cb53a8009265132184a79b45539fd71b3ab1acbddd77804eed3"} Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.261826 4893 generic.go:334] "Generic (PLEG): container finished" podID="f4ecaeda-4211-4680-b408-cf7e4717d723" containerID="f8a33c05f22bbd2bc37308e46ea47eb6b47322784149129d4e7b15436d0fd3cc" exitCode=0 Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.261955 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f4ecaeda-4211-4680-b408-cf7e4717d723","Type":"ContainerDied","Data":"f8a33c05f22bbd2bc37308e46ea47eb6b47322784149129d4e7b15436d0fd3cc"} Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.261990 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f4ecaeda-4211-4680-b408-cf7e4717d723","Type":"ContainerDied","Data":"e04733afa92f81e4514ffbdbce2aed0c69c0a5b7788c9fc19a56b94c455304e3"} Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.262118 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.270572 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5795cc4cb5-6bsp7" event={"ID":"2caca0fd-0f3f-4725-a196-04463abed671","Type":"ContainerDied","Data":"ebf33f7d57690c2e8c7fe0620ba29bb8deb01fa50964fb6ef7ca8c919172e1bf"} Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.270814 4893 generic.go:334] "Generic (PLEG): container finished" podID="2caca0fd-0f3f-4725-a196-04463abed671" containerID="ebf33f7d57690c2e8c7fe0620ba29bb8deb01fa50964fb6ef7ca8c919172e1bf" exitCode=0 Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.270855 4893 generic.go:334] "Generic (PLEG): container finished" podID="2caca0fd-0f3f-4725-a196-04463abed671" containerID="18e9d45b37e8d84945f0132ccb26b8b828ad2ef4ebd71d0f862ce04dc0922db6" exitCode=0 Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.270938 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5795cc4cb5-6bsp7" event={"ID":"2caca0fd-0f3f-4725-a196-04463abed671","Type":"ContainerDied","Data":"18e9d45b37e8d84945f0132ccb26b8b828ad2ef4ebd71d0f862ce04dc0922db6"} Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.272919 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.297597 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2hvzv" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.298015 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2hvzv" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.302007 4893 scope.go:117] "RemoveContainer" containerID="af5bb0f25a6013996cf95b397b7fa8ce33547b30c013d3efe237da97c44f553d" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.324687 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="fdb40d40-7926-424a-810d-3b6f77e1022f" containerName="rabbitmq" containerID="cri-o://fd6edf6018574b9a1c8c87a2c1c4f22c5ad783bb05f0b5bd5d6a157bcdf570ae" gracePeriod=604800 Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.342992 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4ecaeda-4211-4680-b408-cf7e4717d723-combined-ca-bundle\") pod \"f4ecaeda-4211-4680-b408-cf7e4717d723\" (UID: \"f4ecaeda-4211-4680-b408-cf7e4717d723\") " Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.343061 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4ecaeda-4211-4680-b408-cf7e4717d723-config-data\") pod \"f4ecaeda-4211-4680-b408-cf7e4717d723\" (UID: \"f4ecaeda-4211-4680-b408-cf7e4717d723\") " Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.343176 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f4ecaeda-4211-4680-b408-cf7e4717d723-config-data-custom\") pod \"f4ecaeda-4211-4680-b408-cf7e4717d723\" (UID: \"f4ecaeda-4211-4680-b408-cf7e4717d723\") " Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.343207 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27mtg\" (UniqueName: \"kubernetes.io/projected/f4ecaeda-4211-4680-b408-cf7e4717d723-kube-api-access-27mtg\") pod \"f4ecaeda-4211-4680-b408-cf7e4717d723\" (UID: \"f4ecaeda-4211-4680-b408-cf7e4717d723\") " Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.343297 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f4ecaeda-4211-4680-b408-cf7e4717d723-etc-machine-id\") pod \"f4ecaeda-4211-4680-b408-cf7e4717d723\" (UID: \"f4ecaeda-4211-4680-b408-cf7e4717d723\") " Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.343323 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4ecaeda-4211-4680-b408-cf7e4717d723-scripts\") pod \"f4ecaeda-4211-4680-b408-cf7e4717d723\" (UID: \"f4ecaeda-4211-4680-b408-cf7e4717d723\") " Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.345171 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4ecaeda-4211-4680-b408-cf7e4717d723-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f4ecaeda-4211-4680-b408-cf7e4717d723" (UID: "f4ecaeda-4211-4680-b408-cf7e4717d723"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.345304 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6a58a1a-1345-46f4-bb93-7b748440724a-operator-scripts\") pod \"root-account-create-update-pvg9n\" (UID: \"f6a58a1a-1345-46f4-bb93-7b748440724a\") " pod="openstack/root-account-create-update-pvg9n" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.345550 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-874pr\" (UniqueName: \"kubernetes.io/projected/f6a58a1a-1345-46f4-bb93-7b748440724a-kube-api-access-874pr\") pod \"root-account-create-update-pvg9n\" (UID: \"f6a58a1a-1345-46f4-bb93-7b748440724a\") " pod="openstack/root-account-create-update-pvg9n" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.345844 4893 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f4ecaeda-4211-4680-b408-cf7e4717d723-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.359017 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6a58a1a-1345-46f4-bb93-7b748440724a-operator-scripts\") pod \"root-account-create-update-pvg9n\" (UID: \"f6a58a1a-1345-46f4-bb93-7b748440724a\") " pod="openstack/root-account-create-update-pvg9n" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.359244 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4ecaeda-4211-4680-b408-cf7e4717d723-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f4ecaeda-4211-4680-b408-cf7e4717d723" (UID: "f4ecaeda-4211-4680-b408-cf7e4717d723"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.372618 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-874pr\" (UniqueName: \"kubernetes.io/projected/f6a58a1a-1345-46f4-bb93-7b748440724a-kube-api-access-874pr\") pod \"root-account-create-update-pvg9n\" (UID: \"f6a58a1a-1345-46f4-bb93-7b748440724a\") " pod="openstack/root-account-create-update-pvg9n" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.374147 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4ecaeda-4211-4680-b408-cf7e4717d723-kube-api-access-27mtg" (OuterVolumeSpecName: "kube-api-access-27mtg") pod "f4ecaeda-4211-4680-b408-cf7e4717d723" (UID: "f4ecaeda-4211-4680-b408-cf7e4717d723"). InnerVolumeSpecName "kube-api-access-27mtg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.374798 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4ecaeda-4211-4680-b408-cf7e4717d723-scripts" (OuterVolumeSpecName: "scripts") pod "f4ecaeda-4211-4680-b408-cf7e4717d723" (UID: "f4ecaeda-4211-4680-b408-cf7e4717d723"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.408694 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.430930 4893 scope.go:117] "RemoveContainer" containerID="79abdf873765c2bac4b6be09b3097a159efa4814c1dd0b60e7c529c776c0bbbe" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.431170 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2hvzv" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.446755 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.451545 4893 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f4ecaeda-4211-4680-b408-cf7e4717d723-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.451832 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27mtg\" (UniqueName: \"kubernetes.io/projected/f4ecaeda-4211-4680-b408-cf7e4717d723-kube-api-access-27mtg\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.451841 4893 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4ecaeda-4211-4680-b408-cf7e4717d723-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.459382 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.463952 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.468035 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.468921 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4ecaeda-4211-4680-b408-cf7e4717d723-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f4ecaeda-4211-4680-b408-cf7e4717d723" (UID: "f4ecaeda-4211-4680-b408-cf7e4717d723"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.528583 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-pvg9n" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.552283 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4ecaeda-4211-4680-b408-cf7e4717d723-config-data" (OuterVolumeSpecName: "config-data") pod "f4ecaeda-4211-4680-b408-cf7e4717d723" (UID: "f4ecaeda-4211-4680-b408-cf7e4717d723"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.553586 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4ecaeda-4211-4680-b408-cf7e4717d723-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.553612 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4ecaeda-4211-4680-b408-cf7e4717d723-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.594032 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b5b0846-7bdc-4019-bc94-ea4253cc9c8a" path="/var/lib/kubelet/pods/0b5b0846-7bdc-4019-bc94-ea4253cc9c8a/volumes" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.594599 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12c05c26-e0c2-4516-9fa6-8dc8779d1430" path="/var/lib/kubelet/pods/12c05c26-e0c2-4516-9fa6-8dc8779d1430/volumes" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.595373 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a81ba3d-1493-421c-b0f8-40a16ed8cec8" path="/var/lib/kubelet/pods/3a81ba3d-1493-421c-b0f8-40a16ed8cec8/volumes" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.596560 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="482b048f-92a3-485c-be9b-cc4d4bea116f" path="/var/lib/kubelet/pods/482b048f-92a3-485c-be9b-cc4d4bea116f/volumes" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.597176 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="552962c7-46c0-4f3f-826e-3c99b06f6c61" path="/var/lib/kubelet/pods/552962c7-46c0-4f3f-826e-3c99b06f6c61/volumes" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.598519 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ca5c8f3-90dc-4b2e-989d-0448280e2e48" path="/var/lib/kubelet/pods/5ca5c8f3-90dc-4b2e-989d-0448280e2e48/volumes" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.599791 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68b3d1f1-4c78-4a98-afcb-a2db1753d676" path="/var/lib/kubelet/pods/68b3d1f1-4c78-4a98-afcb-a2db1753d676/volumes" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.600617 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68d23b48-e5b2-4154-87a6-1fef70653056" path="/var/lib/kubelet/pods/68d23b48-e5b2-4154-87a6-1fef70653056/volumes" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.601384 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93e402d6-b354-4755-83c3-68e43e53c19b" path="/var/lib/kubelet/pods/93e402d6-b354-4755-83c3-68e43e53c19b/volumes" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.602847 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9456679d-3a66-4b28-b43b-be72ca19d835" path="/var/lib/kubelet/pods/9456679d-3a66-4b28-b43b-be72ca19d835/volumes" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.603617 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f9e755b-1a33-409b-b4b7-926bcfecb0b5" path="/var/lib/kubelet/pods/9f9e755b-1a33-409b-b4b7-926bcfecb0b5/volumes" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.604306 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa87df83-9b5d-4ca4-9fd1-b16a3a6cc31e" 
path="/var/lib/kubelet/pods/aa87df83-9b5d-4ca4-9fd1-b16a3a6cc31e/volumes" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.635047 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6bd56d5cbf-gkdlb" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.656731 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwc8v\" (UniqueName: \"kubernetes.io/projected/f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6-kube-api-access-kwc8v\") pod \"f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6\" (UID: \"f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6\") " Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.657369 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6-nova-novncproxy-tls-certs\") pod \"f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6\" (UID: \"f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6\") " Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.657404 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6-vencrypt-tls-certs\") pod \"f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6\" (UID: \"f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6\") " Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.657494 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6-combined-ca-bundle\") pod \"f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6\" (UID: \"f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6\") " Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.657597 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6-config-data\") pod \"f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6\" (UID: \"f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6\") " Jan 21 07:22:21 crc kubenswrapper[4893]: E0121 07:22:21.658463 4893 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 21 07:22:21 crc kubenswrapper[4893]: E0121 07:22:21.658530 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fdb40d40-7926-424a-810d-3b6f77e1022f-config-data podName:fdb40d40-7926-424a-810d-3b6f77e1022f nodeName:}" failed. No retries permitted until 2026-01-21 07:22:23.658512055 +0000 UTC m=+1684.888857957 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/fdb40d40-7926-424a-810d-3b6f77e1022f-config-data") pod "rabbitmq-cell1-server-0" (UID: "fdb40d40-7926-424a-810d-3b6f77e1022f") : configmap "rabbitmq-cell1-config-data" not found Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.661731 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-577cb64ffc-m6fkr" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.662543 4893 scope.go:117] "RemoveContainer" containerID="f8a33c05f22bbd2bc37308e46ea47eb6b47322784149129d4e7b15436d0fd3cc" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.665375 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6-kube-api-access-kwc8v" (OuterVolumeSpecName: "kube-api-access-kwc8v") pod "f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6" (UID: "f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6"). InnerVolumeSpecName "kube-api-access-kwc8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.675243 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.686614 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.711919 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-84df8fdfdb-8dxsk" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.714379 4893 scope.go:117] "RemoveContainer" containerID="79abdf873765c2bac4b6be09b3097a159efa4814c1dd0b60e7c529c776c0bbbe" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.715495 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6" (UID: "f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.724169 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-eb0d-account-create-update-skmxs" Jan 21 07:22:21 crc kubenswrapper[4893]: E0121 07:22:21.724207 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79abdf873765c2bac4b6be09b3097a159efa4814c1dd0b60e7c529c776c0bbbe\": container with ID starting with 79abdf873765c2bac4b6be09b3097a159efa4814c1dd0b60e7c529c776c0bbbe not found: ID does not exist" containerID="79abdf873765c2bac4b6be09b3097a159efa4814c1dd0b60e7c529c776c0bbbe" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.724389 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79abdf873765c2bac4b6be09b3097a159efa4814c1dd0b60e7c529c776c0bbbe"} err="failed to get container status \"79abdf873765c2bac4b6be09b3097a159efa4814c1dd0b60e7c529c776c0bbbe\": rpc error: code = NotFound desc = could not find container \"79abdf873765c2bac4b6be09b3097a159efa4814c1dd0b60e7c529c776c0bbbe\": container with ID starting with 79abdf873765c2bac4b6be09b3097a159efa4814c1dd0b60e7c529c776c0bbbe not found: ID does not exist" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.724481 4893 scope.go:117] "RemoveContainer" containerID="f8a33c05f22bbd2bc37308e46ea47eb6b47322784149129d4e7b15436d0fd3cc" Jan 21 07:22:21 crc kubenswrapper[4893]: E0121 07:22:21.727073 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8a33c05f22bbd2bc37308e46ea47eb6b47322784149129d4e7b15436d0fd3cc\": container with ID starting with f8a33c05f22bbd2bc37308e46ea47eb6b47322784149129d4e7b15436d0fd3cc not found: ID does not exist" containerID="f8a33c05f22bbd2bc37308e46ea47eb6b47322784149129d4e7b15436d0fd3cc" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.727151 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8a33c05f22bbd2bc37308e46ea47eb6b47322784149129d4e7b15436d0fd3cc"} err="failed to get container status \"f8a33c05f22bbd2bc37308e46ea47eb6b47322784149129d4e7b15436d0fd3cc\": rpc error: code = NotFound desc = could not find container \"f8a33c05f22bbd2bc37308e46ea47eb6b47322784149129d4e7b15436d0fd3cc\": container with ID starting with f8a33c05f22bbd2bc37308e46ea47eb6b47322784149129d4e7b15436d0fd3cc not found: ID does not exist" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.754538 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-5653-account-create-update-ktlq8" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.761188 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckdzg\" (UniqueName: \"kubernetes.io/projected/133bbed0-7073-43ad-881b-893cf8529bb2-kube-api-access-ckdzg\") pod \"133bbed0-7073-43ad-881b-893cf8529bb2\" (UID: \"133bbed0-7073-43ad-881b-893cf8529bb2\") " Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.761288 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/133bbed0-7073-43ad-881b-893cf8529bb2-httpd-config\") pod \"133bbed0-7073-43ad-881b-893cf8529bb2\" (UID: \"133bbed0-7073-43ad-881b-893cf8529bb2\") " Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.761321 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/133bbed0-7073-43ad-881b-893cf8529bb2-combined-ca-bundle\") pod \"133bbed0-7073-43ad-881b-893cf8529bb2\" (UID: \"133bbed0-7073-43ad-881b-893cf8529bb2\") " Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.761388 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v72d7\" (UniqueName: \"kubernetes.io/projected/4c20f882-3bde-49a2-857e-207fe47d5aae-kube-api-access-v72d7\") pod \"4c20f882-3bde-49a2-857e-207fe47d5aae\" (UID: \"4c20f882-3bde-49a2-857e-207fe47d5aae\") " Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.761492 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/133bbed0-7073-43ad-881b-893cf8529bb2-internal-tls-certs\") pod \"133bbed0-7073-43ad-881b-893cf8529bb2\" (UID: \"133bbed0-7073-43ad-881b-893cf8529bb2\") " Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.761519 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4c20f882-3bde-49a2-857e-207fe47d5aae-config-data-custom\") pod \"4c20f882-3bde-49a2-857e-207fe47d5aae\" (UID: \"4c20f882-3bde-49a2-857e-207fe47d5aae\") " Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.761563 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c20f882-3bde-49a2-857e-207fe47d5aae-combined-ca-bundle\") pod \"4c20f882-3bde-49a2-857e-207fe47d5aae\" (UID: \"4c20f882-3bde-49a2-857e-207fe47d5aae\") " Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.761588 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c20f882-3bde-49a2-857e-207fe47d5aae-config-data\") pod \"4c20f882-3bde-49a2-857e-207fe47d5aae\" (UID: \"4c20f882-3bde-49a2-857e-207fe47d5aae\") " Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.761606 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/133bbed0-7073-43ad-881b-893cf8529bb2-config\") pod \"133bbed0-7073-43ad-881b-893cf8529bb2\" (UID: \"133bbed0-7073-43ad-881b-893cf8529bb2\") " Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.761635 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c20f882-3bde-49a2-857e-207fe47d5aae-logs\") pod 
\"4c20f882-3bde-49a2-857e-207fe47d5aae\" (UID: \"4c20f882-3bde-49a2-857e-207fe47d5aae\") " Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.761692 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/133bbed0-7073-43ad-881b-893cf8529bb2-ovndb-tls-certs\") pod \"133bbed0-7073-43ad-881b-893cf8529bb2\" (UID: \"133bbed0-7073-43ad-881b-893cf8529bb2\") " Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.761754 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/133bbed0-7073-43ad-881b-893cf8529bb2-public-tls-certs\") pod \"133bbed0-7073-43ad-881b-893cf8529bb2\" (UID: \"133bbed0-7073-43ad-881b-893cf8529bb2\") " Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.764747 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.764767 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwc8v\" (UniqueName: \"kubernetes.io/projected/f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6-kube-api-access-kwc8v\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.766194 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c20f882-3bde-49a2-857e-207fe47d5aae-logs" (OuterVolumeSpecName: "logs") pod "4c20f882-3bde-49a2-857e-207fe47d5aae" (UID: "4c20f882-3bde-49a2-857e-207fe47d5aae"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.781063 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/133bbed0-7073-43ad-881b-893cf8529bb2-kube-api-access-ckdzg" (OuterVolumeSpecName: "kube-api-access-ckdzg") pod "133bbed0-7073-43ad-881b-893cf8529bb2" (UID: "133bbed0-7073-43ad-881b-893cf8529bb2"). InnerVolumeSpecName "kube-api-access-ckdzg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.789009 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c20f882-3bde-49a2-857e-207fe47d5aae-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4c20f882-3bde-49a2-857e-207fe47d5aae" (UID: "4c20f882-3bde-49a2-857e-207fe47d5aae"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.789229 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d3cd-account-create-update-7l8fv" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.796905 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/133bbed0-7073-43ad-881b-893cf8529bb2-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "133bbed0-7073-43ad-881b-893cf8529bb2" (UID: "133bbed0-7073-43ad-881b-893cf8529bb2"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:21 crc kubenswrapper[4893]: I0121 07:22:21.797542 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c20f882-3bde-49a2-857e-207fe47d5aae-kube-api-access-v72d7" (OuterVolumeSpecName: "kube-api-access-v72d7") pod "4c20f882-3bde-49a2-857e-207fe47d5aae" (UID: "4c20f882-3bde-49a2-857e-207fe47d5aae"). InnerVolumeSpecName "kube-api-access-v72d7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.119306 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c20f882-3bde-49a2-857e-207fe47d5aae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4c20f882-3bde-49a2-857e-207fe47d5aae" (UID: "4c20f882-3bde-49a2-857e-207fe47d5aae"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.125053 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6-config-data" (OuterVolumeSpecName: "config-data") pod "f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6" (UID: "f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.125076 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96d1b606-34ad-4e36-ad61-6db3b4a7c3e1-logs\") pod \"96d1b606-34ad-4e36-ad61-6db3b4a7c3e1\" (UID: \"96d1b606-34ad-4e36-ad61-6db3b4a7c3e1\") " Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.125119 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8pmv\" (UniqueName: \"kubernetes.io/projected/d8ccfdb6-50d2-4718-a4d4-20366e02f93f-kube-api-access-v8pmv\") pod \"d8ccfdb6-50d2-4718-a4d4-20366e02f93f\" (UID: \"d8ccfdb6-50d2-4718-a4d4-20366e02f93f\") " Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.125146 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xw5r\" (UniqueName: \"kubernetes.io/projected/96d1b606-34ad-4e36-ad61-6db3b4a7c3e1-kube-api-access-5xw5r\") pod \"96d1b606-34ad-4e36-ad61-6db3b4a7c3e1\" (UID: \"96d1b606-34ad-4e36-ad61-6db3b4a7c3e1\") " Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.125243 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96d1b606-34ad-4e36-ad61-6db3b4a7c3e1-combined-ca-bundle\") pod \"96d1b606-34ad-4e36-ad61-6db3b4a7c3e1\" (UID: \"96d1b606-34ad-4e36-ad61-6db3b4a7c3e1\") " Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.125268 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96d1b606-34ad-4e36-ad61-6db3b4a7c3e1-config-data\") pod \"96d1b606-34ad-4e36-ad61-6db3b4a7c3e1\" (UID: \"96d1b606-34ad-4e36-ad61-6db3b4a7c3e1\") " Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.125290 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8ccfdb6-50d2-4718-a4d4-20366e02f93f-operator-scripts\") pod \"d8ccfdb6-50d2-4718-a4d4-20366e02f93f\" (UID: \"d8ccfdb6-50d2-4718-a4d4-20366e02f93f\") " Jan 21 07:22:22 crc 
kubenswrapper[4893]: I0121 07:22:22.125391 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jl6df\" (UniqueName: \"kubernetes.io/projected/fa419127-b439-47a0-9b9c-535529d4f7d9-kube-api-access-jl6df\") pod \"fa419127-b439-47a0-9b9c-535529d4f7d9\" (UID: \"fa419127-b439-47a0-9b9c-535529d4f7d9\") " Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.125454 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6-config-data\") pod \"f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6\" (UID: \"f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6\") " Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.125505 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/96d1b606-34ad-4e36-ad61-6db3b4a7c3e1-config-data-custom\") pod \"96d1b606-34ad-4e36-ad61-6db3b4a7c3e1\" (UID: \"96d1b606-34ad-4e36-ad61-6db3b4a7c3e1\") " Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.125593 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa419127-b439-47a0-9b9c-535529d4f7d9-operator-scripts\") pod \"fa419127-b439-47a0-9b9c-535529d4f7d9\" (UID: \"fa419127-b439-47a0-9b9c-535529d4f7d9\") " Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.126067 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96d1b606-34ad-4e36-ad61-6db3b4a7c3e1-logs" (OuterVolumeSpecName: "logs") pod "96d1b606-34ad-4e36-ad61-6db3b4a7c3e1" (UID: "96d1b606-34ad-4e36-ad61-6db3b4a7c3e1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.126169 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v72d7\" (UniqueName: \"kubernetes.io/projected/4c20f882-3bde-49a2-857e-207fe47d5aae-kube-api-access-v72d7\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.126216 4893 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4c20f882-3bde-49a2-857e-207fe47d5aae-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.126225 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c20f882-3bde-49a2-857e-207fe47d5aae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.126234 4893 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c20f882-3bde-49a2-857e-207fe47d5aae-logs\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.126244 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckdzg\" (UniqueName: \"kubernetes.io/projected/133bbed0-7073-43ad-881b-893cf8529bb2-kube-api-access-ckdzg\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.126253 4893 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/133bbed0-7073-43ad-881b-893cf8529bb2-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:22 crc kubenswrapper[4893]: W0121 07:22:22.126320 4893 empty_dir.go:500] Warning: Unmount skipped 
because path does not exist: /var/lib/kubelet/pods/f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6/volumes/kubernetes.io~secret/config-data Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.126334 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6-config-data" (OuterVolumeSpecName: "config-data") pod "f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6" (UID: "f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.129743 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa419127-b439-47a0-9b9c-535529d4f7d9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fa419127-b439-47a0-9b9c-535529d4f7d9" (UID: "fa419127-b439-47a0-9b9c-535529d4f7d9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.130293 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8ccfdb6-50d2-4718-a4d4-20366e02f93f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d8ccfdb6-50d2-4718-a4d4-20366e02f93f" (UID: "d8ccfdb6-50d2-4718-a4d4-20366e02f93f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.154450 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96d1b606-34ad-4e36-ad61-6db3b4a7c3e1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "96d1b606-34ad-4e36-ad61-6db3b4a7c3e1" (UID: "96d1b606-34ad-4e36-ad61-6db3b4a7c3e1"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.155838 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8ccfdb6-50d2-4718-a4d4-20366e02f93f-kube-api-access-v8pmv" (OuterVolumeSpecName: "kube-api-access-v8pmv") pod "d8ccfdb6-50d2-4718-a4d4-20366e02f93f" (UID: "d8ccfdb6-50d2-4718-a4d4-20366e02f93f"). InnerVolumeSpecName "kube-api-access-v8pmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.193435 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="41fc2d9b-17e4-42b0-bcee-065a237b513c" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.167:8776/healthcheck\": read tcp 10.217.0.2:36824->10.217.0.167:8776: read: connection reset by peer" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.200869 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa419127-b439-47a0-9b9c-535529d4f7d9-kube-api-access-jl6df" (OuterVolumeSpecName: "kube-api-access-jl6df") pod "fa419127-b439-47a0-9b9c-535529d4f7d9" (UID: "fa419127-b439-47a0-9b9c-535529d4f7d9"). InnerVolumeSpecName "kube-api-access-jl6df". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.207454 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96d1b606-34ad-4e36-ad61-6db3b4a7c3e1-kube-api-access-5xw5r" (OuterVolumeSpecName: "kube-api-access-5xw5r") pod "96d1b606-34ad-4e36-ad61-6db3b4a7c3e1" (UID: "96d1b606-34ad-4e36-ad61-6db3b4a7c3e1"). InnerVolumeSpecName "kube-api-access-5xw5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.228170 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wljg\" (UniqueName: \"kubernetes.io/projected/cac277e8-b27c-4412-b25e-0c988c2e5555-kube-api-access-8wljg\") pod \"cac277e8-b27c-4412-b25e-0c988c2e5555\" (UID: \"cac277e8-b27c-4412-b25e-0c988c2e5555\") " Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.228349 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cac277e8-b27c-4412-b25e-0c988c2e5555-operator-scripts\") pod \"cac277e8-b27c-4412-b25e-0c988c2e5555\" (UID: \"cac277e8-b27c-4412-b25e-0c988c2e5555\") " Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.228968 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jl6df\" (UniqueName: \"kubernetes.io/projected/fa419127-b439-47a0-9b9c-535529d4f7d9-kube-api-access-jl6df\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.228985 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.228997 4893 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/96d1b606-34ad-4e36-ad61-6db3b4a7c3e1-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.229008 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa419127-b439-47a0-9b9c-535529d4f7d9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.229023 4893 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96d1b606-34ad-4e36-ad61-6db3b4a7c3e1-logs\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.229035 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8pmv\" (UniqueName: \"kubernetes.io/projected/d8ccfdb6-50d2-4718-a4d4-20366e02f93f-kube-api-access-v8pmv\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.229045 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xw5r\" (UniqueName: \"kubernetes.io/projected/96d1b606-34ad-4e36-ad61-6db3b4a7c3e1-kube-api-access-5xw5r\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.229057 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8ccfdb6-50d2-4718-a4d4-20366e02f93f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:22 crc kubenswrapper[4893]: E0121 07:22:22.229245 4893 projected.go:263] Couldn't get secret 
openstack/swift-proxy-config-data: secret "swift-proxy-config-data" not found Jan 21 07:22:22 crc kubenswrapper[4893]: E0121 07:22:22.229262 4893 projected.go:263] Couldn't get secret openstack/swift-conf: secret "swift-conf" not found Jan 21 07:22:22 crc kubenswrapper[4893]: E0121 07:22:22.229272 4893 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 07:22:22 crc kubenswrapper[4893]: E0121 07:22:22.229292 4893 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-5795cc4cb5-6bsp7: [secret "swift-proxy-config-data" not found, secret "swift-conf" not found, configmap "swift-ring-files" not found] Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.229295 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cac277e8-b27c-4412-b25e-0c988c2e5555-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cac277e8-b27c-4412-b25e-0c988c2e5555" (UID: "cac277e8-b27c-4412-b25e-0c988c2e5555"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:22:22 crc kubenswrapper[4893]: E0121 07:22:22.229362 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2caca0fd-0f3f-4725-a196-04463abed671-etc-swift podName:2caca0fd-0f3f-4725-a196-04463abed671 nodeName:}" failed. No retries permitted until 2026-01-21 07:22:26.229342197 +0000 UTC m=+1687.459688169 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/2caca0fd-0f3f-4725-a196-04463abed671-etc-swift") pod "swift-proxy-5795cc4cb5-6bsp7" (UID: "2caca0fd-0f3f-4725-a196-04463abed671") : [secret "swift-proxy-config-data" not found, secret "swift-conf" not found, configmap "swift-ring-files" not found] Jan 21 07:22:22 crc kubenswrapper[4893]: E0121 07:22:22.229745 4893 secret.go:188] Couldn't get secret openstack/swift-proxy-config-data: secret "swift-proxy-config-data" not found Jan 21 07:22:22 crc kubenswrapper[4893]: E0121 07:22:22.229771 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2caca0fd-0f3f-4725-a196-04463abed671-config-data podName:2caca0fd-0f3f-4725-a196-04463abed671 nodeName:}" failed. No retries permitted until 2026-01-21 07:22:26.229763649 +0000 UTC m=+1687.460109551 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/2caca0fd-0f3f-4725-a196-04463abed671-config-data") pod "swift-proxy-5795cc4cb5-6bsp7" (UID: "2caca0fd-0f3f-4725-a196-04463abed671") : secret "swift-proxy-config-data" not found Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.258883 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cac277e8-b27c-4412-b25e-0c988c2e5555-kube-api-access-8wljg" (OuterVolumeSpecName: "kube-api-access-8wljg") pod "cac277e8-b27c-4412-b25e-0c988c2e5555" (UID: "cac277e8-b27c-4412-b25e-0c988c2e5555"). InnerVolumeSpecName "kube-api-access-8wljg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.276518 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6" (UID: "f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6"). 
InnerVolumeSpecName "vencrypt-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.283937 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-84df8fdfdb-8dxsk" event={"ID":"96d1b606-34ad-4e36-ad61-6db3b4a7c3e1","Type":"ContainerDied","Data":"0f26655ad28569c69d734395f473a8d64f482102fd227b20cad19f7e37359bfc"} Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.283986 4893 scope.go:117] "RemoveContainer" containerID="186a1d1fbbe587858b7d65a3e3d819601c2b15e5f6afb9d61e13a1623b7c2cf4" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.284130 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-84df8fdfdb-8dxsk" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.289149 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6bd56d5cbf-gkdlb" event={"ID":"4c20f882-3bde-49a2-857e-207fe47d5aae","Type":"ContainerDied","Data":"a6c319ab4ff855d9c777138b1fbf10d585bf21ca01fc7489aaf2806ec4fae1c3"} Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.289466 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6bd56d5cbf-gkdlb" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.296479 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.296611 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6","Type":"ContainerDied","Data":"23b340696cbcd0f8c8081aa9b941420d9031aee23f3e1a7cab95827eb24b881b"} Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.299764 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-577cb64ffc-m6fkr" event={"ID":"133bbed0-7073-43ad-881b-893cf8529bb2","Type":"ContainerDied","Data":"814ccd67a132cd7486827aa51371d7f8e51e92e723bbe7051d74157c6669b4a8"} Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.299859 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-577cb64ffc-m6fkr" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.302830 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d3cd-account-create-update-7l8fv" event={"ID":"cac277e8-b27c-4412-b25e-0c988c2e5555","Type":"ContainerDied","Data":"74bc98aeee7ea889f856ffaa6455514c991316622c27dfb2061a76ac1456fd15"} Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.303038 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d3cd-account-create-update-7l8fv" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.311961 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5653-account-create-update-ktlq8" event={"ID":"fa419127-b439-47a0-9b9c-535529d4f7d9","Type":"ContainerDied","Data":"23ce942b19d45c58c6bfdd048352e07362b6dfb026ef1c8d24ee462c695d997a"} Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.312006 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-5653-account-create-update-ktlq8" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.324128 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-eb0d-account-create-update-skmxs" event={"ID":"d8ccfdb6-50d2-4718-a4d4-20366e02f93f","Type":"ContainerDied","Data":"d8d58e67a67c9230ae61cf46fd32cddba5786e837cb8745d0c8f32acbaa89cf2"} Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.324187 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-eb0d-account-create-update-skmxs" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.330940 4893 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.330969 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wljg\" (UniqueName: \"kubernetes.io/projected/cac277e8-b27c-4412-b25e-0c988c2e5555-kube-api-access-8wljg\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.330979 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cac277e8-b27c-4412-b25e-0c988c2e5555-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.334851 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d4ab-account-create-update-6nbg8" event={"ID":"187a5c9a-e642-4826-8543-f53fd0789757","Type":"ContainerStarted","Data":"f1727d3dceafb15809e9cd2a8bdc7b267e08952db1a7092425588fb58ef1c698"} Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.356735 4893 generic.go:334] "Generic (PLEG): container finished" podID="5b37865c-22cd-4288-b47b-ef9ef1f33646" containerID="7c6d4673c3549715ec53ab38c378a4c139ad12463137e1030d564c833b09d3f2" exitCode=0 Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.356819 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5b37865c-22cd-4288-b47b-ef9ef1f33646","Type":"ContainerDied","Data":"7c6d4673c3549715ec53ab38c378a4c139ad12463137e1030d564c833b09d3f2"} Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.366428 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/133bbed0-7073-43ad-881b-893cf8529bb2-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "133bbed0-7073-43ad-881b-893cf8529bb2" (UID: "133bbed0-7073-43ad-881b-893cf8529bb2"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.412228 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96d1b606-34ad-4e36-ad61-6db3b4a7c3e1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "96d1b606-34ad-4e36-ad61-6db3b4a7c3e1" (UID: "96d1b606-34ad-4e36-ad61-6db3b4a7c3e1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.412701 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/133bbed0-7073-43ad-881b-893cf8529bb2-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "133bbed0-7073-43ad-881b-893cf8529bb2" (UID: "133bbed0-7073-43ad-881b-893cf8529bb2"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.436872 4893 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/133bbed0-7073-43ad-881b-893cf8529bb2-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.436908 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96d1b606-34ad-4e36-ad61-6db3b4a7c3e1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.436918 4893 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/133bbed0-7073-43ad-881b-893cf8529bb2-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.455475 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7fc4c6bb88-6pfmp" podUID="4b445f12-f3bf-41d9-91f9-56def2b2694b" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.163:9311/healthcheck\": dial tcp 10.217.0.163:9311: connect: connection refused" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.455817 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7fc4c6bb88-6pfmp" podUID="4b445f12-f3bf-41d9-91f9-56def2b2694b" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.163:9311/healthcheck\": dial tcp 10.217.0.163:9311: connect: connection refused" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.467383 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/133bbed0-7073-43ad-881b-893cf8529bb2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "133bbed0-7073-43ad-881b-893cf8529bb2" (UID: "133bbed0-7073-43ad-881b-893cf8529bb2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.480436 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2hvzv" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.532943 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/133bbed0-7073-43ad-881b-893cf8529bb2-config" (OuterVolumeSpecName: "config") pod "133bbed0-7073-43ad-881b-893cf8529bb2" (UID: "133bbed0-7073-43ad-881b-893cf8529bb2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.535749 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c20f882-3bde-49a2-857e-207fe47d5aae-config-data" (OuterVolumeSpecName: "config-data") pod "4c20f882-3bde-49a2-857e-207fe47d5aae" (UID: "4c20f882-3bde-49a2-857e-207fe47d5aae"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.539752 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/133bbed0-7073-43ad-881b-893cf8529bb2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.539782 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/133bbed0-7073-43ad-881b-893cf8529bb2-config\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.539796 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c20f882-3bde-49a2-857e-207fe47d5aae-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.541803 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2hvzv"] Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.586891 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/133bbed0-7073-43ad-881b-893cf8529bb2-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "133bbed0-7073-43ad-881b-893cf8529bb2" (UID: "133bbed0-7073-43ad-881b-893cf8529bb2"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.594987 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96d1b606-34ad-4e36-ad61-6db3b4a7c3e1-config-data" (OuterVolumeSpecName: "config-data") pod "96d1b606-34ad-4e36-ad61-6db3b4a7c3e1" (UID: "96d1b606-34ad-4e36-ad61-6db3b4a7c3e1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.595398 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6" (UID: "f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6"). InnerVolumeSpecName "nova-novncproxy-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.645758 4893 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.645795 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96d1b606-34ad-4e36-ad61-6db3b4a7c3e1-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.645818 4893 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/133bbed0-7073-43ad-881b-893cf8529bb2-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.667003 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.667454 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f891af55-ec46-4261-9f5e-01a1c181f194" containerName="sg-core" containerID="cri-o://dd3970844ff87242006efef7f85a07b1307bc7fcf1c2b53f8a03f6f42dcb3a60" gracePeriod=30 Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.667542 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f891af55-ec46-4261-9f5e-01a1c181f194" containerName="ceilometer-notification-agent" containerID="cri-o://3bb1f3a7d2d6b35737c02944ddbf53eb946eb7cc400a59439dbd01bed9d2650a" gracePeriod=30 Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.667626 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f891af55-ec46-4261-9f5e-01a1c181f194" containerName="proxy-httpd" containerID="cri-o://3a24623a75e32ef570ca14893ccdb6089f419296939cd0c5276caec748921d6e" gracePeriod=30 Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.669354 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f891af55-ec46-4261-9f5e-01a1c181f194" containerName="ceilometer-central-agent" containerID="cri-o://ad34c1fe2a091616f40d2e25e67f46102e054ffaf965ca71fc5193bf96e1733d" gracePeriod=30 Jan 21 07:22:22 crc kubenswrapper[4893]: E0121 07:22:22.718558 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="805ea082486a9771af6cebd7498e3962947faff7e48ac3cc9a7f4ffadd851b1a" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.719292 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.719557 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="1dd69159-4b4b-4b13-aaa2-7b9edf7c468a" containerName="kube-state-metrics" containerID="cri-o://ae932dd21754883ff82584e62cef856bfa6cbc6aee915c47053feb942b516a54" gracePeriod=30 Jan 21 07:22:22 crc kubenswrapper[4893]: E0121 07:22:22.732267 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: 
, stderr: , exit code -1" containerID="805ea082486a9771af6cebd7498e3962947faff7e48ac3cc9a7f4ffadd851b1a" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 21 07:22:22 crc kubenswrapper[4893]: E0121 07:22:22.736844 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="805ea082486a9771af6cebd7498e3962947faff7e48ac3cc9a7f4ffadd851b1a" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 21 07:22:22 crc kubenswrapper[4893]: E0121 07:22:22.736929 4893 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="f7722b5d-ba92-4332-93c7-bc3aa9bfdb33" containerName="nova-cell1-conductor-conductor" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.832077 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-d2ca-account-create-update-2f7t4"] Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.898065 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.898526 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="520610a0-97e8-45ed-8020-952d9d4501b1" containerName="memcached" containerID="cri-o://903c82795e7a99adb1f118a16af4579fab0871c6da09160b60ba62ce90ba5b7e" gracePeriod=30 Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.939173 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-d2ca-account-create-update-2f7t4"] Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.948206 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": read tcp 10.217.0.2:48026->10.217.0.206:8775: read: connection reset by peer" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.948246 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": read tcp 10.217.0.2:48018->10.217.0.206:8775: read: connection reset by peer" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.957618 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-vjrdh"] Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.968246 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-2981-account-create-update-v76jn" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.970342 4893 scope.go:117] "RemoveContainer" containerID="911526d6926efcd3de4bef0ee4d5862491c677b9a9b4639aa131893753ece29e" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.970502 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-d2ca-account-create-update-wvmph"] Jan 21 07:22:22 crc kubenswrapper[4893]: E0121 07:22:22.970978 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="133bbed0-7073-43ad-881b-893cf8529bb2" containerName="neutron-httpd" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.970995 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="133bbed0-7073-43ad-881b-893cf8529bb2" containerName="neutron-httpd" Jan 21 07:22:22 crc kubenswrapper[4893]: E0121 07:22:22.971007 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.971014 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 07:22:22 crc kubenswrapper[4893]: E0121 07:22:22.971021 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c20f882-3bde-49a2-857e-207fe47d5aae" containerName="barbican-worker-log" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.971027 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c20f882-3bde-49a2-857e-207fe47d5aae" containerName="barbican-worker-log" Jan 21 07:22:22 crc kubenswrapper[4893]: E0121 07:22:22.971035 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="133bbed0-7073-43ad-881b-893cf8529bb2" containerName="neutron-api" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.971040 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="133bbed0-7073-43ad-881b-893cf8529bb2" containerName="neutron-api" Jan 21 07:22:22 crc kubenswrapper[4893]: E0121 07:22:22.971049 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c20f882-3bde-49a2-857e-207fe47d5aae" containerName="barbican-worker" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.971055 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c20f882-3bde-49a2-857e-207fe47d5aae" containerName="barbican-worker" Jan 21 07:22:22 crc kubenswrapper[4893]: E0121 07:22:22.971062 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96d1b606-34ad-4e36-ad61-6db3b4a7c3e1" containerName="barbican-keystone-listener" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.971068 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="96d1b606-34ad-4e36-ad61-6db3b4a7c3e1" containerName="barbican-keystone-listener" Jan 21 07:22:22 crc kubenswrapper[4893]: E0121 07:22:22.971096 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96d1b606-34ad-4e36-ad61-6db3b4a7c3e1" containerName="barbican-keystone-listener-log" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.971102 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="96d1b606-34ad-4e36-ad61-6db3b4a7c3e1" containerName="barbican-keystone-listener-log" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.971308 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="96d1b606-34ad-4e36-ad61-6db3b4a7c3e1" containerName="barbican-keystone-listener-log" Jan 21 07:22:22 crc kubenswrapper[4893]: 
I0121 07:22:22.971339 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="96d1b606-34ad-4e36-ad61-6db3b4a7c3e1" containerName="barbican-keystone-listener" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.971349 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="133bbed0-7073-43ad-881b-893cf8529bb2" containerName="neutron-httpd" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.971363 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="133bbed0-7073-43ad-881b-893cf8529bb2" containerName="neutron-api" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.971374 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c20f882-3bde-49a2-857e-207fe47d5aae" containerName="barbican-worker" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.971384 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.971398 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c20f882-3bde-49a2-857e-207fe47d5aae" containerName="barbican-worker-log" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.983511 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d2ca-account-create-update-wvmph" Jan 21 07:22:22 crc kubenswrapper[4893]: I0121 07:22:22.987374 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.013727 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-vjrdh"] Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.031509 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-d2ca-account-create-update-wvmph"] Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.035880 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-d9tjm"] Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.043896 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-d9tjm"] Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.051987 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.061284 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-9fd9c4957-2lblr"] Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.061559 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-9fd9c4957-2lblr" podUID="1cfa1d66-684f-43de-b751-1da2399d48ee" containerName="keystone-api" containerID="cri-o://fb79006d33020516a4f0e2561b74cb58a9f9a5735dfedb4b98b82f935997165d" gracePeriod=30 Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.066003 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljn47\" (UniqueName: \"kubernetes.io/projected/aff46ce4-9e0d-4805-98fe-52b60b607877-kube-api-access-ljn47\") pod \"keystone-d2ca-account-create-update-wvmph\" (UID: \"aff46ce4-9e0d-4805-98fe-52b60b607877\") " pod="openstack/keystone-d2ca-account-create-update-wvmph" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.066089 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/aff46ce4-9e0d-4805-98fe-52b60b607877-operator-scripts\") pod \"keystone-d2ca-account-create-update-wvmph\" (UID: \"aff46ce4-9e0d-4805-98fe-52b60b607877\") " pod="openstack/keystone-d2ca-account-create-update-wvmph" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.067445 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.075038 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-lkn5t"] Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.095642 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-j8ttn" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.099363 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-22df-account-create-update-mpgzf" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.106988 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-lkn5t"] Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.123959 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5795cc4cb5-6bsp7" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.128817 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-d2ca-account-create-update-wvmph"] Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.140387 4893 scope.go:117] "RemoveContainer" containerID="87c38972a6e91adfc22b0f243c62624ce591c7a3c511e5aad78412c1db488300" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.153493 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-pvg9n"] Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.167736 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nthqg\" (UniqueName: \"kubernetes.io/projected/1236d7dc-6a98-4d59-8a88-f3101bd017ef-kube-api-access-nthqg\") pod \"1236d7dc-6a98-4d59-8a88-f3101bd017ef\" (UID: \"1236d7dc-6a98-4d59-8a88-f3101bd017ef\") " Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.168073 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1236d7dc-6a98-4d59-8a88-f3101bd017ef-operator-scripts\") pod \"1236d7dc-6a98-4d59-8a88-f3101bd017ef\" (UID: \"1236d7dc-6a98-4d59-8a88-f3101bd017ef\") " Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.168366 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljn47\" (UniqueName: \"kubernetes.io/projected/aff46ce4-9e0d-4805-98fe-52b60b607877-kube-api-access-ljn47\") pod \"keystone-d2ca-account-create-update-wvmph\" (UID: \"aff46ce4-9e0d-4805-98fe-52b60b607877\") " pod="openstack/keystone-d2ca-account-create-update-wvmph" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.168464 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aff46ce4-9e0d-4805-98fe-52b60b607877-operator-scripts\") pod \"keystone-d2ca-account-create-update-wvmph\" (UID: \"aff46ce4-9e0d-4805-98fe-52b60b607877\") " pod="openstack/keystone-d2ca-account-create-update-wvmph" Jan 21 07:22:23 crc kubenswrapper[4893]: E0121 07:22:23.168804 4893 configmap.go:193] Couldn't get configMap 
openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 21 07:22:23 crc kubenswrapper[4893]: E0121 07:22:23.168975 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/aff46ce4-9e0d-4805-98fe-52b60b607877-operator-scripts podName:aff46ce4-9e0d-4805-98fe-52b60b607877 nodeName:}" failed. No retries permitted until 2026-01-21 07:22:23.668958294 +0000 UTC m=+1684.899304186 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/aff46ce4-9e0d-4805-98fe-52b60b607877-operator-scripts") pod "keystone-d2ca-account-create-update-wvmph" (UID: "aff46ce4-9e0d-4805-98fe-52b60b607877") : configmap "openstack-scripts" not found Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.170604 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1236d7dc-6a98-4d59-8a88-f3101bd017ef-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1236d7dc-6a98-4d59-8a88-f3101bd017ef" (UID: "1236d7dc-6a98-4d59-8a88-f3101bd017ef"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.174879 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-6bd56d5cbf-gkdlb"] Jan 21 07:22:23 crc kubenswrapper[4893]: E0121 07:22:23.178839 4893 projected.go:194] Error preparing data for projected volume kube-api-access-ljn47 for pod openstack/keystone-d2ca-account-create-update-wvmph: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 21 07:22:23 crc kubenswrapper[4893]: E0121 07:22:23.178905 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/aff46ce4-9e0d-4805-98fe-52b60b607877-kube-api-access-ljn47 podName:aff46ce4-9e0d-4805-98fe-52b60b607877 nodeName:}" failed. No retries permitted until 2026-01-21 07:22:23.67888713 +0000 UTC m=+1684.909233032 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ljn47" (UniqueName: "kubernetes.io/projected/aff46ce4-9e0d-4805-98fe-52b60b607877-kube-api-access-ljn47") pod "keystone-d2ca-account-create-update-wvmph" (UID: "aff46ce4-9e0d-4805-98fe-52b60b607877") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.194773 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-6bd56d5cbf-gkdlb"] Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.204995 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-eb0d-account-create-update-skmxs"] Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.208928 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-eb0d-account-create-update-skmxs"] Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.213211 4893 scope.go:117] "RemoveContainer" containerID="c88d130cc82c49bf6ae1c611cdbaa9e2ce62ffa7e1d23413d4010afe63beedd5" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.213379 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1236d7dc-6a98-4d59-8a88-f3101bd017ef-kube-api-access-nthqg" (OuterVolumeSpecName: "kube-api-access-nthqg") pod "1236d7dc-6a98-4d59-8a88-f3101bd017ef" (UID: "1236d7dc-6a98-4d59-8a88-f3101bd017ef"). InnerVolumeSpecName "kube-api-access-nthqg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:22:23 crc kubenswrapper[4893]: E0121 07:22:23.214246 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-ljn47 operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/keystone-d2ca-account-create-update-wvmph" podUID="aff46ce4-9e0d-4805-98fe-52b60b607877" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.218389 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-54745b6874-xnbrr" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.225699 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-d3cd-account-create-update-7l8fv"] Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.231427 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-d3cd-account-create-update-7l8fv"] Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.241923 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-84df8fdfdb-8dxsk"] Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.249793 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-84df8fdfdb-8dxsk"] Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.260080 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-577cb64ffc-m6fkr"] Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.266198 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-577cb64ffc-m6fkr"] Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.269942 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4bcx\" (UniqueName: \"kubernetes.io/projected/2caca0fd-0f3f-4725-a196-04463abed671-kube-api-access-r4bcx\") pod \"2caca0fd-0f3f-4725-a196-04463abed671\" (UID: \"2caca0fd-0f3f-4725-a196-04463abed671\") " Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.270020 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0622bcb2-e8ab-4e4b-a33f-64e48320b232-operator-scripts\") pod \"0622bcb2-e8ab-4e4b-a33f-64e48320b232\" (UID: \"0622bcb2-e8ab-4e4b-a33f-64e48320b232\") " Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.270049 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5b37865c-22cd-4288-b47b-ef9ef1f33646-kolla-config\") pod \"5b37865c-22cd-4288-b47b-ef9ef1f33646\" (UID: \"5b37865c-22cd-4288-b47b-ef9ef1f33646\") " Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.270091 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2caca0fd-0f3f-4725-a196-04463abed671-internal-tls-certs\") pod \"2caca0fd-0f3f-4725-a196-04463abed671\" (UID: \"2caca0fd-0f3f-4725-a196-04463abed671\") " Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.270113 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5b37865c-22cd-4288-b47b-ef9ef1f33646-config-data-generated\") pod \"5b37865c-22cd-4288-b47b-ef9ef1f33646\" (UID: \"5b37865c-22cd-4288-b47b-ef9ef1f33646\") " Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.270161 4893 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-tstqw\" (UniqueName: \"kubernetes.io/projected/0622bcb2-e8ab-4e4b-a33f-64e48320b232-kube-api-access-tstqw\") pod \"0622bcb2-e8ab-4e4b-a33f-64e48320b232\" (UID: \"0622bcb2-e8ab-4e4b-a33f-64e48320b232\") " Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.270200 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdnhx\" (UniqueName: \"kubernetes.io/projected/5b37865c-22cd-4288-b47b-ef9ef1f33646-kube-api-access-zdnhx\") pod \"5b37865c-22cd-4288-b47b-ef9ef1f33646\" (UID: \"5b37865c-22cd-4288-b47b-ef9ef1f33646\") " Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.270250 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b37865c-22cd-4288-b47b-ef9ef1f33646-operator-scripts\") pod \"5b37865c-22cd-4288-b47b-ef9ef1f33646\" (UID: \"5b37865c-22cd-4288-b47b-ef9ef1f33646\") " Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.270290 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2caca0fd-0f3f-4725-a196-04463abed671-config-data\") pod \"2caca0fd-0f3f-4725-a196-04463abed671\" (UID: \"2caca0fd-0f3f-4725-a196-04463abed671\") " Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.270340 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2caca0fd-0f3f-4725-a196-04463abed671-etc-swift\") pod \"2caca0fd-0f3f-4725-a196-04463abed671\" (UID: \"2caca0fd-0f3f-4725-a196-04463abed671\") " Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.270366 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b37865c-22cd-4288-b47b-ef9ef1f33646-galera-tls-certs\") pod \"5b37865c-22cd-4288-b47b-ef9ef1f33646\" (UID: \"5b37865c-22cd-4288-b47b-ef9ef1f33646\") " Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.270407 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wqv8\" (UniqueName: \"kubernetes.io/projected/bcd1e197-57ed-4f7c-8be7-b59d0d3e08dc-kube-api-access-2wqv8\") pod \"bcd1e197-57ed-4f7c-8be7-b59d0d3e08dc\" (UID: \"bcd1e197-57ed-4f7c-8be7-b59d0d3e08dc\") " Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.270438 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2caca0fd-0f3f-4725-a196-04463abed671-run-httpd\") pod \"2caca0fd-0f3f-4725-a196-04463abed671\" (UID: \"2caca0fd-0f3f-4725-a196-04463abed671\") " Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.270485 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b37865c-22cd-4288-b47b-ef9ef1f33646-combined-ca-bundle\") pod \"5b37865c-22cd-4288-b47b-ef9ef1f33646\" (UID: \"5b37865c-22cd-4288-b47b-ef9ef1f33646\") " Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.270505 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2caca0fd-0f3f-4725-a196-04463abed671-combined-ca-bundle\") pod \"2caca0fd-0f3f-4725-a196-04463abed671\" (UID: \"2caca0fd-0f3f-4725-a196-04463abed671\") " Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.270560 4893 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5b37865c-22cd-4288-b47b-ef9ef1f33646-config-data-default\") pod \"5b37865c-22cd-4288-b47b-ef9ef1f33646\" (UID: \"5b37865c-22cd-4288-b47b-ef9ef1f33646\") " Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.270587 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2caca0fd-0f3f-4725-a196-04463abed671-public-tls-certs\") pod \"2caca0fd-0f3f-4725-a196-04463abed671\" (UID: \"2caca0fd-0f3f-4725-a196-04463abed671\") " Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.270615 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"5b37865c-22cd-4288-b47b-ef9ef1f33646\" (UID: \"5b37865c-22cd-4288-b47b-ef9ef1f33646\") " Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.270650 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcd1e197-57ed-4f7c-8be7-b59d0d3e08dc-operator-scripts\") pod \"bcd1e197-57ed-4f7c-8be7-b59d0d3e08dc\" (UID: \"bcd1e197-57ed-4f7c-8be7-b59d0d3e08dc\") " Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.270691 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2caca0fd-0f3f-4725-a196-04463abed671-log-httpd\") pod \"2caca0fd-0f3f-4725-a196-04463abed671\" (UID: \"2caca0fd-0f3f-4725-a196-04463abed671\") " Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.271155 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0622bcb2-e8ab-4e4b-a33f-64e48320b232-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0622bcb2-e8ab-4e4b-a33f-64e48320b232" (UID: "0622bcb2-e8ab-4e4b-a33f-64e48320b232"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.271893 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b37865c-22cd-4288-b47b-ef9ef1f33646-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "5b37865c-22cd-4288-b47b-ef9ef1f33646" (UID: "5b37865c-22cd-4288-b47b-ef9ef1f33646"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.272384 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2caca0fd-0f3f-4725-a196-04463abed671-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2caca0fd-0f3f-4725-a196-04463abed671" (UID: "2caca0fd-0f3f-4725-a196-04463abed671"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.272547 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b37865c-22cd-4288-b47b-ef9ef1f33646-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "5b37865c-22cd-4288-b47b-ef9ef1f33646" (UID: "5b37865c-22cd-4288-b47b-ef9ef1f33646"). InnerVolumeSpecName "config-data-default". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.273206 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2caca0fd-0f3f-4725-a196-04463abed671-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2caca0fd-0f3f-4725-a196-04463abed671" (UID: "2caca0fd-0f3f-4725-a196-04463abed671"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.281139 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcd1e197-57ed-4f7c-8be7-b59d0d3e08dc-kube-api-access-2wqv8" (OuterVolumeSpecName: "kube-api-access-2wqv8") pod "bcd1e197-57ed-4f7c-8be7-b59d0d3e08dc" (UID: "bcd1e197-57ed-4f7c-8be7-b59d0d3e08dc"). InnerVolumeSpecName "kube-api-access-2wqv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.281617 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b37865c-22cd-4288-b47b-ef9ef1f33646-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "5b37865c-22cd-4288-b47b-ef9ef1f33646" (UID: "5b37865c-22cd-4288-b47b-ef9ef1f33646"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.283195 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2caca0fd-0f3f-4725-a196-04463abed671-kube-api-access-r4bcx" (OuterVolumeSpecName: "kube-api-access-r4bcx") pod "2caca0fd-0f3f-4725-a196-04463abed671" (UID: "2caca0fd-0f3f-4725-a196-04463abed671"). InnerVolumeSpecName "kube-api-access-r4bcx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.298862 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0622bcb2-e8ab-4e4b-a33f-64e48320b232-kube-api-access-tstqw" (OuterVolumeSpecName: "kube-api-access-tstqw") pod "0622bcb2-e8ab-4e4b-a33f-64e48320b232" (UID: "0622bcb2-e8ab-4e4b-a33f-64e48320b232"). InnerVolumeSpecName "kube-api-access-tstqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.300519 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcd1e197-57ed-4f7c-8be7-b59d0d3e08dc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bcd1e197-57ed-4f7c-8be7-b59d0d3e08dc" (UID: "bcd1e197-57ed-4f7c-8be7-b59d0d3e08dc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.300732 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b37865c-22cd-4288-b47b-ef9ef1f33646-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5b37865c-22cd-4288-b47b-ef9ef1f33646" (UID: "5b37865c-22cd-4288-b47b-ef9ef1f33646"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.301483 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1236d7dc-6a98-4d59-8a88-f3101bd017ef-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.301504 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nthqg\" (UniqueName: \"kubernetes.io/projected/1236d7dc-6a98-4d59-8a88-f3101bd017ef-kube-api-access-nthqg\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.301535 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.308700 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.309872 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "mysql-db") pod "5b37865c-22cd-4288-b47b-ef9ef1f33646" (UID: "5b37865c-22cd-4288-b47b-ef9ef1f33646"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.315686 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b37865c-22cd-4288-b47b-ef9ef1f33646-kube-api-access-zdnhx" (OuterVolumeSpecName: "kube-api-access-zdnhx") pod "5b37865c-22cd-4288-b47b-ef9ef1f33646" (UID: "5b37865c-22cd-4288-b47b-ef9ef1f33646"). InnerVolumeSpecName "kube-api-access-zdnhx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.325994 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2caca0fd-0f3f-4725-a196-04463abed671-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "2caca0fd-0f3f-4725-a196-04463abed671" (UID: "2caca0fd-0f3f-4725-a196-04463abed671"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.330148 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-5653-account-create-update-ktlq8"] Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.331366 4893 scope.go:117] "RemoveContainer" containerID="b004bcda62aef9e8aee81239d327f2808f42d03c6caacf5809d4f355361f7480" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.337771 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-5653-account-create-update-ktlq8"] Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.349960 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b37865c-22cd-4288-b47b-ef9ef1f33646-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5b37865c-22cd-4288-b47b-ef9ef1f33646" (UID: "5b37865c-22cd-4288-b47b-ef9ef1f33646"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.363537 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2caca0fd-0f3f-4725-a196-04463abed671-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2caca0fd-0f3f-4725-a196-04463abed671" (UID: "2caca0fd-0f3f-4725-a196-04463abed671"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.387290 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5795cc4cb5-6bsp7" event={"ID":"2caca0fd-0f3f-4725-a196-04463abed671","Type":"ContainerDied","Data":"80f8d8c3593fc890ffc40a914ae8ca1adfd69137113a1ca85ee6741d35d70488"} Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.387411 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5795cc4cb5-6bsp7" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.398853 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2981-account-create-update-v76jn" event={"ID":"1236d7dc-6a98-4d59-8a88-f3101bd017ef","Type":"ContainerDied","Data":"d392526f25412cb53a8009265132184a79b45539fd71b3ab1acbddd77804eed3"} Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.399005 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2981-account-create-update-v76jn" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.406886 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="5cc7c949-b993-484e-8e07-778a72743679" containerName="galera" containerID="cri-o://aee8a6ea9a77f904909aaaa7e5b406eb695daf2df6664ab2f71b0577e981db2c" gracePeriod=30 Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.537395 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d547505a-34d0-4645-9690-74df58728a46-internal-tls-certs\") pod \"d547505a-34d0-4645-9690-74df58728a46\" (UID: \"d547505a-34d0-4645-9690-74df58728a46\") " Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.537466 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d547505a-34d0-4645-9690-74df58728a46-scripts\") pod \"d547505a-34d0-4645-9690-74df58728a46\" (UID: \"d547505a-34d0-4645-9690-74df58728a46\") " Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.537494 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzcgb\" (UniqueName: \"kubernetes.io/projected/d547505a-34d0-4645-9690-74df58728a46-kube-api-access-qzcgb\") pod \"d547505a-34d0-4645-9690-74df58728a46\" (UID: \"d547505a-34d0-4645-9690-74df58728a46\") " Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.537657 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d547505a-34d0-4645-9690-74df58728a46-logs\") pod \"d547505a-34d0-4645-9690-74df58728a46\" (UID: \"d547505a-34d0-4645-9690-74df58728a46\") " Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.537729 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d547505a-34d0-4645-9690-74df58728a46-combined-ca-bundle\") pod 
\"d547505a-34d0-4645-9690-74df58728a46\" (UID: \"d547505a-34d0-4645-9690-74df58728a46\") " Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.537760 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d547505a-34d0-4645-9690-74df58728a46-public-tls-certs\") pod \"d547505a-34d0-4645-9690-74df58728a46\" (UID: \"d547505a-34d0-4645-9690-74df58728a46\") " Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.537859 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d547505a-34d0-4645-9690-74df58728a46-config-data\") pod \"d547505a-34d0-4645-9690-74df58728a46\" (UID: \"d547505a-34d0-4645-9690-74df58728a46\") " Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.538433 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdnhx\" (UniqueName: \"kubernetes.io/projected/5b37865c-22cd-4288-b47b-ef9ef1f33646-kube-api-access-zdnhx\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.538450 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b37865c-22cd-4288-b47b-ef9ef1f33646-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.538459 4893 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2caca0fd-0f3f-4725-a196-04463abed671-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.538470 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wqv8\" (UniqueName: \"kubernetes.io/projected/bcd1e197-57ed-4f7c-8be7-b59d0d3e08dc-kube-api-access-2wqv8\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.538478 4893 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2caca0fd-0f3f-4725-a196-04463abed671-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.538487 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b37865c-22cd-4288-b47b-ef9ef1f33646-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.538495 4893 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5b37865c-22cd-4288-b47b-ef9ef1f33646-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.538503 4893 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2caca0fd-0f3f-4725-a196-04463abed671-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.538524 4893 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.538533 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcd1e197-57ed-4f7c-8be7-b59d0d3e08dc-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.538553 
4893 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2caca0fd-0f3f-4725-a196-04463abed671-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.538563 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4bcx\" (UniqueName: \"kubernetes.io/projected/2caca0fd-0f3f-4725-a196-04463abed671-kube-api-access-r4bcx\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.538572 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0622bcb2-e8ab-4e4b-a33f-64e48320b232-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.538580 4893 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5b37865c-22cd-4288-b47b-ef9ef1f33646-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.538588 4893 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5b37865c-22cd-4288-b47b-ef9ef1f33646-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.538598 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tstqw\" (UniqueName: \"kubernetes.io/projected/0622bcb2-e8ab-4e4b-a33f-64e48320b232-kube-api-access-tstqw\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.549455 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d547505a-34d0-4645-9690-74df58728a46-logs" (OuterVolumeSpecName: "logs") pod "d547505a-34d0-4645-9690-74df58728a46" (UID: "d547505a-34d0-4645-9690-74df58728a46"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.561322 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d547505a-34d0-4645-9690-74df58728a46-scripts" (OuterVolumeSpecName: "scripts") pod "d547505a-34d0-4645-9690-74df58728a46" (UID: "d547505a-34d0-4645-9690-74df58728a46"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.562427 4893 generic.go:334] "Generic (PLEG): container finished" podID="63916786-c676-4695-84a1-3d3be685de16" containerID="2ed56fea6ed96fd765f43737ab0141951ab632e2d98acd1cb85189751d716818" exitCode=0 Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.562517 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"63916786-c676-4695-84a1-3d3be685de16","Type":"ContainerDied","Data":"2ed56fea6ed96fd765f43737ab0141951ab632e2d98acd1cb85189751d716818"} Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.594420 4893 scope.go:117] "RemoveContainer" containerID="325a0207fb4c2ec1fa7e041a8980c7916a769a35b943f6d62d67be9f953dbe2f" Jan 21 07:22:23 crc kubenswrapper[4893]: E0121 07:22:23.594820 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.595941 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.604156 4893 generic.go:334] "Generic (PLEG): container finished" podID="d547505a-34d0-4645-9690-74df58728a46" containerID="e229a469d2a8cf4280c6427a369a9b0a149d127bff46b75596626d17591050a6" exitCode=0 Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.604252 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-54745b6874-xnbrr" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.619458 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-22df-account-create-update-mpgzf" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.631074 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13231972-103e-4970-845c-5aba8c59d68f" path="/var/lib/kubelet/pods/13231972-103e-4970-845c-5aba8c59d68f/volumes" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.631637 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="133bbed0-7073-43ad-881b-893cf8529bb2" path="/var/lib/kubelet/pods/133bbed0-7073-43ad-881b-893cf8529bb2/volumes" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.633590 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ba738c9-29c6-492d-acdc-0854042df9dc" path="/var/lib/kubelet/pods/3ba738c9-29c6-492d-acdc-0854042df9dc/volumes" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.634284 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c20f882-3bde-49a2-857e-207fe47d5aae" path="/var/lib/kubelet/pods/4c20f882-3bde-49a2-857e-207fe47d5aae/volumes" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.635060 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94d0c87f-f0de-4f54-bae1-7af24c5c7f38" path="/var/lib/kubelet/pods/94d0c87f-f0de-4f54-bae1-7af24c5c7f38/volumes" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.635117 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-j8ttn" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.645729 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d547505a-34d0-4645-9690-74df58728a46-kube-api-access-qzcgb" (OuterVolumeSpecName: "kube-api-access-qzcgb") pod "d547505a-34d0-4645-9690-74df58728a46" (UID: "d547505a-34d0-4645-9690-74df58728a46"). InnerVolumeSpecName "kube-api-access-qzcgb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.646186 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96d1b606-34ad-4e36-ad61-6db3b4a7c3e1" path="/var/lib/kubelet/pods/96d1b606-34ad-4e36-ad61-6db3b4a7c3e1/volumes" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.647758 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4b22b42-2fce-4972-a651-1b49ef7b008c" path="/var/lib/kubelet/pods/b4b22b42-2fce-4972-a651-1b49ef7b008c/volumes" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.648643 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cac277e8-b27c-4412-b25e-0c988c2e5555" path="/var/lib/kubelet/pods/cac277e8-b27c-4412-b25e-0c988c2e5555/volumes" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.649384 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8ccfdb6-50d2-4718-a4d4-20366e02f93f" path="/var/lib/kubelet/pods/d8ccfdb6-50d2-4718-a4d4-20366e02f93f/volumes" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.650576 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6" path="/var/lib/kubelet/pods/f4a1a3b4-4137-4a6e-b8d3-20518f38a2d6/volumes" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.650590 4893 generic.go:334] "Generic (PLEG): container finished" podID="4b445f12-f3bf-41d9-91f9-56def2b2694b" containerID="7f7d2aeb9b4cbaf2e08372f0fc88c8fdf81814a1c30309f7310a68b860cbf2b7" exitCode=0 Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.651267 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4ecaeda-4211-4680-b408-cf7e4717d723" path="/var/lib/kubelet/pods/f4ecaeda-4211-4680-b408-cf7e4717d723/volumes" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.652878 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa419127-b439-47a0-9b9c-535529d4f7d9" path="/var/lib/kubelet/pods/fa419127-b439-47a0-9b9c-535529d4f7d9/volumes" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.656085 4893 scope.go:117] "RemoveContainer" containerID="07919e653d69657ea7b011e6891aec998b0e961f74741efc99381bb2776ca73d" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.656915 4893 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d547505a-34d0-4645-9690-74df58728a46-logs\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.656955 4893 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d547505a-34d0-4645-9690-74df58728a46-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.656972 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzcgb\" (UniqueName: \"kubernetes.io/projected/d547505a-34d0-4645-9690-74df58728a46-kube-api-access-qzcgb\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 
07:22:23.669730 4893 generic.go:334] "Generic (PLEG): container finished" podID="1dd69159-4b4b-4b13-aaa2-7b9edf7c468a" containerID="ae932dd21754883ff82584e62cef856bfa6cbc6aee915c47053feb942b516a54" exitCode=2 Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.677267 4893 generic.go:334] "Generic (PLEG): container finished" podID="740cac4e-ecd7-4752-9d29-4adb1a14577b" containerID="31c581f13004a2f7815d44365eb034baed7c66ac483f7fa7c22317077d696c9a" exitCode=0 Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.677998 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b37865c-22cd-4288-b47b-ef9ef1f33646-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "5b37865c-22cd-4288-b47b-ef9ef1f33646" (UID: "5b37865c-22cd-4288-b47b-ef9ef1f33646"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.692871 4893 generic.go:334] "Generic (PLEG): container finished" podID="d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6" containerID="357413c2169766654a3f84ffb51b7dca2610fa69e2e67bc3239b3491d881ff66" exitCode=0 Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.699038 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2caca0fd-0f3f-4725-a196-04463abed671-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2caca0fd-0f3f-4725-a196-04463abed671" (UID: "2caca0fd-0f3f-4725-a196-04463abed671"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.712642 4893 generic.go:334] "Generic (PLEG): container finished" podID="f891af55-ec46-4261-9f5e-01a1c181f194" containerID="3a24623a75e32ef570ca14893ccdb6089f419296939cd0c5276caec748921d6e" exitCode=0 Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.712742 4893 generic.go:334] "Generic (PLEG): container finished" podID="f891af55-ec46-4261-9f5e-01a1c181f194" containerID="dd3970844ff87242006efef7f85a07b1307bc7fcf1c2b53f8a03f6f42dcb3a60" exitCode=2 Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.724661 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2caca0fd-0f3f-4725-a196-04463abed671-config-data" (OuterVolumeSpecName: "config-data") pod "2caca0fd-0f3f-4725-a196-04463abed671" (UID: "2caca0fd-0f3f-4725-a196-04463abed671"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.728018 4893 generic.go:334] "Generic (PLEG): container finished" podID="41fc2d9b-17e4-42b0-bcee-065a237b513c" containerID="fb8af694018c30b6b38db1c567cc9a482101811cee291371c4cbd5248400b963" exitCode=0 Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.729355 4893 generic.go:334] "Generic (PLEG): container finished" podID="f7722b5d-ba92-4332-93c7-bc3aa9bfdb33" containerID="805ea082486a9771af6cebd7498e3962947faff7e48ac3cc9a7f4ffadd851b1a" exitCode=0 Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.730712 4893 generic.go:334] "Generic (PLEG): container finished" podID="45545422-414a-433a-9de9-fbfb6e03add3" containerID="9af6af2cf0b6fc56ff8fff6040414d4c6371bd930a27e4d908e26718f4910e2e" exitCode=0 Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.732857 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-d2ca-account-create-update-wvmph" Jan 21 07:22:23 crc kubenswrapper[4893]: E0121 07:22:23.738619 4893 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 21 07:22:23 crc kubenswrapper[4893]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 21 07:22:23 crc kubenswrapper[4893]: Jan 21 07:22:23 crc kubenswrapper[4893]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 21 07:22:23 crc kubenswrapper[4893]: Jan 21 07:22:23 crc kubenswrapper[4893]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 21 07:22:23 crc kubenswrapper[4893]: Jan 21 07:22:23 crc kubenswrapper[4893]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 21 07:22:23 crc kubenswrapper[4893]: Jan 21 07:22:23 crc kubenswrapper[4893]: if [ -n "" ]; then Jan 21 07:22:23 crc kubenswrapper[4893]: GRANT_DATABASE="" Jan 21 07:22:23 crc kubenswrapper[4893]: else Jan 21 07:22:23 crc kubenswrapper[4893]: GRANT_DATABASE="*" Jan 21 07:22:23 crc kubenswrapper[4893]: fi Jan 21 07:22:23 crc kubenswrapper[4893]: Jan 21 07:22:23 crc kubenswrapper[4893]: # going for maximum compatibility here: Jan 21 07:22:23 crc kubenswrapper[4893]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 21 07:22:23 crc kubenswrapper[4893]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 21 07:22:23 crc kubenswrapper[4893]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 21 07:22:23 crc kubenswrapper[4893]: # support updates Jan 21 07:22:23 crc kubenswrapper[4893]: Jan 21 07:22:23 crc kubenswrapper[4893]: $MYSQL_CMD < logger="UnhandledError" Jan 21 07:22:23 crc kubenswrapper[4893]: E0121 07:22:23.740513 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-pvg9n" podUID="f6a58a1a-1345-46f4-bb93-7b748440724a" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.753074 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2caca0fd-0f3f-4725-a196-04463abed671-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2caca0fd-0f3f-4725-a196-04463abed671" (UID: "2caca0fd-0f3f-4725-a196-04463abed671"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.760464 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljn47\" (UniqueName: \"kubernetes.io/projected/aff46ce4-9e0d-4805-98fe-52b60b607877-kube-api-access-ljn47\") pod \"keystone-d2ca-account-create-update-wvmph\" (UID: \"aff46ce4-9e0d-4805-98fe-52b60b607877\") " pod="openstack/keystone-d2ca-account-create-update-wvmph" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.760525 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aff46ce4-9e0d-4805-98fe-52b60b607877-operator-scripts\") pod \"keystone-d2ca-account-create-update-wvmph\" (UID: \"aff46ce4-9e0d-4805-98fe-52b60b607877\") " pod="openstack/keystone-d2ca-account-create-update-wvmph" Jan 21 07:22:23 crc kubenswrapper[4893]: E0121 07:22:23.761324 4893 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 21 07:22:23 crc kubenswrapper[4893]: E0121 07:22:23.761422 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/aff46ce4-9e0d-4805-98fe-52b60b607877-operator-scripts podName:aff46ce4-9e0d-4805-98fe-52b60b607877 nodeName:}" failed. No retries permitted until 2026-01-21 07:22:24.761399388 +0000 UTC m=+1685.991745380 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/aff46ce4-9e0d-4805-98fe-52b60b607877-operator-scripts") pod "keystone-d2ca-account-create-update-wvmph" (UID: "aff46ce4-9e0d-4805-98fe-52b60b607877") : configmap "openstack-scripts" not found Jan 21 07:22:23 crc kubenswrapper[4893]: E0121 07:22:23.761825 4893 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 21 07:22:23 crc kubenswrapper[4893]: E0121 07:22:23.761872 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fdb40d40-7926-424a-810d-3b6f77e1022f-config-data podName:fdb40d40-7926-424a-810d-3b6f77e1022f nodeName:}" failed. No retries permitted until 2026-01-21 07:22:27.761858731 +0000 UTC m=+1688.992204743 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/fdb40d40-7926-424a-810d-3b6f77e1022f-config-data") pod "rabbitmq-cell1-server-0" (UID: "fdb40d40-7926-424a-810d-3b6f77e1022f") : configmap "rabbitmq-cell1-config-data" not found Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.764256 4893 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b37865c-22cd-4288-b47b-ef9ef1f33646-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.764456 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2caca0fd-0f3f-4725-a196-04463abed671-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.764522 4893 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2caca0fd-0f3f-4725-a196-04463abed671-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.764537 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2caca0fd-0f3f-4725-a196-04463abed671-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:23 crc kubenswrapper[4893]: E0121 07:22:23.765955 4893 projected.go:194] Error preparing data for projected volume kube-api-access-ljn47 for pod openstack/keystone-d2ca-account-create-update-wvmph: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 21 07:22:23 crc kubenswrapper[4893]: E0121 07:22:23.766074 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/aff46ce4-9e0d-4805-98fe-52b60b607877-kube-api-access-ljn47 podName:aff46ce4-9e0d-4805-98fe-52b60b607877 nodeName:}" failed. No retries permitted until 2026-01-21 07:22:24.766030491 +0000 UTC m=+1685.996376393 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-ljn47" (UniqueName: "kubernetes.io/projected/aff46ce4-9e0d-4805-98fe-52b60b607877-kube-api-access-ljn47") pod "keystone-d2ca-account-create-update-wvmph" (UID: "aff46ce4-9e0d-4805-98fe-52b60b607877") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.769943 4893 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.837540 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d547505a-34d0-4645-9690-74df58728a46-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d547505a-34d0-4645-9690-74df58728a46" (UID: "d547505a-34d0-4645-9690-74df58728a46"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.866618 4893 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.866649 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d547505a-34d0-4645-9690-74df58728a46-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.867828 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d547505a-34d0-4645-9690-74df58728a46-config-data" (OuterVolumeSpecName: "config-data") pod "d547505a-34d0-4645-9690-74df58728a46" (UID: "d547505a-34d0-4645-9690-74df58728a46"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.906301 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d547505a-34d0-4645-9690-74df58728a46-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d547505a-34d0-4645-9690-74df58728a46" (UID: "d547505a-34d0-4645-9690-74df58728a46"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.920516 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d547505a-34d0-4645-9690-74df58728a46-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d547505a-34d0-4645-9690-74df58728a46" (UID: "d547505a-34d0-4645-9690-74df58728a46"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.941660 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5b37865c-22cd-4288-b47b-ef9ef1f33646","Type":"ContainerDied","Data":"f6fb7f89b3c38c4706ce0998db7d9049a3566edf6e0b988b061138a8bc4f6cdf"} Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.943475 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-54745b6874-xnbrr" event={"ID":"d547505a-34d0-4645-9690-74df58728a46","Type":"ContainerDied","Data":"e229a469d2a8cf4280c6427a369a9b0a149d127bff46b75596626d17591050a6"} Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.943510 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-pvg9n"] Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.943531 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-54745b6874-xnbrr" event={"ID":"d547505a-34d0-4645-9690-74df58728a46","Type":"ContainerDied","Data":"a1846137ad41c4b0f9789c5b2681ad4dab98b9bdee0b11772723bba9628f3821"} Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.943545 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-22df-account-create-update-mpgzf" event={"ID":"0622bcb2-e8ab-4e4b-a33f-64e48320b232","Type":"ContainerDied","Data":"f870a2b7fc095515880f0a13f40cb9fe28e027b6800fece2efd4e24b992d398c"} Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.943564 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-j8ttn" event={"ID":"bcd1e197-57ed-4f7c-8be7-b59d0d3e08dc","Type":"ContainerDied","Data":"eaea2786effedd164b24ef4b54289baff79c253ef8e82274c60d7012734701fc"} Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.943579 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7fc4c6bb88-6pfmp" event={"ID":"4b445f12-f3bf-41d9-91f9-56def2b2694b","Type":"ContainerDied","Data":"7f7d2aeb9b4cbaf2e08372f0fc88c8fdf81814a1c30309f7310a68b860cbf2b7"} Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.943596 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1dd69159-4b4b-4b13-aaa2-7b9edf7c468a","Type":"ContainerDied","Data":"ae932dd21754883ff82584e62cef856bfa6cbc6aee915c47053feb942b516a54"} Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.943618 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"740cac4e-ecd7-4752-9d29-4adb1a14577b","Type":"ContainerDied","Data":"31c581f13004a2f7815d44365eb034baed7c66ac483f7fa7c22317077d696c9a"} Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.943636 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6","Type":"ContainerDied","Data":"357413c2169766654a3f84ffb51b7dca2610fa69e2e67bc3239b3491d881ff66"} Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.943652 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f891af55-ec46-4261-9f5e-01a1c181f194","Type":"ContainerDied","Data":"3a24623a75e32ef570ca14893ccdb6089f419296939cd0c5276caec748921d6e"} Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.943685 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"f891af55-ec46-4261-9f5e-01a1c181f194","Type":"ContainerDied","Data":"dd3970844ff87242006efef7f85a07b1307bc7fcf1c2b53f8a03f6f42dcb3a60"} Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.943701 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"41fc2d9b-17e4-42b0-bcee-065a237b513c","Type":"ContainerDied","Data":"fb8af694018c30b6b38db1c567cc9a482101811cee291371c4cbd5248400b963"} Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.943718 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"f7722b5d-ba92-4332-93c7-bc3aa9bfdb33","Type":"ContainerDied","Data":"805ea082486a9771af6cebd7498e3962947faff7e48ac3cc9a7f4ffadd851b1a"} Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.943734 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"45545422-414a-433a-9de9-fbfb6e03add3","Type":"ContainerDied","Data":"9af6af2cf0b6fc56ff8fff6040414d4c6371bd930a27e4d908e26718f4910e2e"} Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.957987 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.958892 4893 scope.go:117] "RemoveContainer" containerID="2c2e4963838a51923436692bec77d73dc438b926f3c5c0edc268cb6c72480f66" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.968704 4893 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d547505a-34d0-4645-9690-74df58728a46-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.968730 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d547505a-34d0-4645-9690-74df58728a46-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:23 crc kubenswrapper[4893]: I0121 07:22:23.968739 4893 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d547505a-34d0-4645-9690-74df58728a46-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.017966 4893 scope.go:117] "RemoveContainer" containerID="ebf33f7d57690c2e8c7fe0620ba29bb8deb01fa50964fb6ef7ca8c919172e1bf" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.018327 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7fc4c6bb88-6pfmp" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.054790 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-d4ab-account-create-update-6nbg8" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.055788 4893 scope.go:117] "RemoveContainer" containerID="18e9d45b37e8d84945f0132ccb26b8b828ad2ef4ebd71d0f862ce04dc0922db6" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.059169 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-j8ttn"] Jan 21 07:22:24 crc kubenswrapper[4893]: E0121 07:22:24.059529 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d39ecdf969ee720a73564c221653cc20ea5438c4130dd242b39b2895ba9d5477" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 21 07:22:24 crc kubenswrapper[4893]: E0121 07:22:24.060749 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d39ecdf969ee720a73564c221653cc20ea5438c4130dd242b39b2895ba9d5477" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.070236 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41fc2d9b-17e4-42b0-bcee-065a237b513c-combined-ca-bundle\") pod \"41fc2d9b-17e4-42b0-bcee-065a237b513c\" (UID: \"41fc2d9b-17e4-42b0-bcee-065a237b513c\") " Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.070283 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/41fc2d9b-17e4-42b0-bcee-065a237b513c-config-data-custom\") pod \"41fc2d9b-17e4-42b0-bcee-065a237b513c\" (UID: \"41fc2d9b-17e4-42b0-bcee-065a237b513c\") " Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.070330 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41fc2d9b-17e4-42b0-bcee-065a237b513c-config-data\") pod \"41fc2d9b-17e4-42b0-bcee-065a237b513c\" (UID: \"41fc2d9b-17e4-42b0-bcee-065a237b513c\") " Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.070352 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/41fc2d9b-17e4-42b0-bcee-065a237b513c-etc-machine-id\") pod \"41fc2d9b-17e4-42b0-bcee-065a237b513c\" (UID: \"41fc2d9b-17e4-42b0-bcee-065a237b513c\") " Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.070385 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/41fc2d9b-17e4-42b0-bcee-065a237b513c-public-tls-certs\") pod \"41fc2d9b-17e4-42b0-bcee-065a237b513c\" (UID: \"41fc2d9b-17e4-42b0-bcee-065a237b513c\") " Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.070430 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41fc2d9b-17e4-42b0-bcee-065a237b513c-logs\") pod \"41fc2d9b-17e4-42b0-bcee-065a237b513c\" (UID: \"41fc2d9b-17e4-42b0-bcee-065a237b513c\") " Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.070468 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/41fc2d9b-17e4-42b0-bcee-065a237b513c-internal-tls-certs\") pod \"41fc2d9b-17e4-42b0-bcee-065a237b513c\" (UID: \"41fc2d9b-17e4-42b0-bcee-065a237b513c\") " Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.070500 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41fc2d9b-17e4-42b0-bcee-065a237b513c-scripts\") pod \"41fc2d9b-17e4-42b0-bcee-065a237b513c\" (UID: \"41fc2d9b-17e4-42b0-bcee-065a237b513c\") " Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.070605 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gg4qx\" (UniqueName: \"kubernetes.io/projected/41fc2d9b-17e4-42b0-bcee-065a237b513c-kube-api-access-gg4qx\") pod \"41fc2d9b-17e4-42b0-bcee-065a237b513c\" (UID: \"41fc2d9b-17e4-42b0-bcee-065a237b513c\") " Jan 21 07:22:24 crc kubenswrapper[4893]: E0121 07:22:24.071499 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d39ecdf969ee720a73564c221653cc20ea5438c4130dd242b39b2895ba9d5477" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 21 07:22:24 crc kubenswrapper[4893]: E0121 07:22:24.071583 4893 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="ac0b6d79-4e8e-499d-afef-53b42511af46" containerName="ovn-northd" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.072583 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41fc2d9b-17e4-42b0-bcee-065a237b513c-logs" (OuterVolumeSpecName: "logs") pod "41fc2d9b-17e4-42b0-bcee-065a237b513c" (UID: "41fc2d9b-17e4-42b0-bcee-065a237b513c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.072644 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41fc2d9b-17e4-42b0-bcee-065a237b513c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "41fc2d9b-17e4-42b0-bcee-065a237b513c" (UID: "41fc2d9b-17e4-42b0-bcee-065a237b513c"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.082015 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-j8ttn"] Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.089920 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41fc2d9b-17e4-42b0-bcee-065a237b513c-kube-api-access-gg4qx" (OuterVolumeSpecName: "kube-api-access-gg4qx") pod "41fc2d9b-17e4-42b0-bcee-065a237b513c" (UID: "41fc2d9b-17e4-42b0-bcee-065a237b513c"). InnerVolumeSpecName "kube-api-access-gg4qx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.097193 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41fc2d9b-17e4-42b0-bcee-065a237b513c-scripts" (OuterVolumeSpecName: "scripts") pod "41fc2d9b-17e4-42b0-bcee-065a237b513c" (UID: "41fc2d9b-17e4-42b0-bcee-065a237b513c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.098430 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41fc2d9b-17e4-42b0-bcee-065a237b513c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "41fc2d9b-17e4-42b0-bcee-065a237b513c" (UID: "41fc2d9b-17e4-42b0-bcee-065a237b513c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.103485 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-22df-account-create-update-mpgzf"] Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.108155 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41fc2d9b-17e4-42b0-bcee-065a237b513c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "41fc2d9b-17e4-42b0-bcee-065a237b513c" (UID: "41fc2d9b-17e4-42b0-bcee-065a237b513c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.108173 4893 scope.go:117] "RemoveContainer" containerID="7c6d4673c3549715ec53ab38c378a4c139ad12463137e1030d564c833b09d3f2" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.110918 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-22df-account-create-update-mpgzf"] Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.173229 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b445f12-f3bf-41d9-91f9-56def2b2694b-internal-tls-certs\") pod \"4b445f12-f3bf-41d9-91f9-56def2b2694b\" (UID: \"4b445f12-f3bf-41d9-91f9-56def2b2694b\") " Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.173383 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b445f12-f3bf-41d9-91f9-56def2b2694b-public-tls-certs\") pod \"4b445f12-f3bf-41d9-91f9-56def2b2694b\" (UID: \"4b445f12-f3bf-41d9-91f9-56def2b2694b\") " Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.173436 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/187a5c9a-e642-4826-8543-f53fd0789757-operator-scripts\") pod \"187a5c9a-e642-4826-8543-f53fd0789757\" (UID: \"187a5c9a-e642-4826-8543-f53fd0789757\") " Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.173949 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/187a5c9a-e642-4826-8543-f53fd0789757-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "187a5c9a-e642-4826-8543-f53fd0789757" (UID: "187a5c9a-e642-4826-8543-f53fd0789757"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.174183 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gddtf\" (UniqueName: \"kubernetes.io/projected/4b445f12-f3bf-41d9-91f9-56def2b2694b-kube-api-access-gddtf\") pod \"4b445f12-f3bf-41d9-91f9-56def2b2694b\" (UID: \"4b445f12-f3bf-41d9-91f9-56def2b2694b\") " Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.174257 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b445f12-f3bf-41d9-91f9-56def2b2694b-config-data\") pod \"4b445f12-f3bf-41d9-91f9-56def2b2694b\" (UID: \"4b445f12-f3bf-41d9-91f9-56def2b2694b\") " Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.174290 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4b445f12-f3bf-41d9-91f9-56def2b2694b-config-data-custom\") pod \"4b445f12-f3bf-41d9-91f9-56def2b2694b\" (UID: \"4b445f12-f3bf-41d9-91f9-56def2b2694b\") " Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.174317 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b445f12-f3bf-41d9-91f9-56def2b2694b-combined-ca-bundle\") pod \"4b445f12-f3bf-41d9-91f9-56def2b2694b\" (UID: \"4b445f12-f3bf-41d9-91f9-56def2b2694b\") " Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.174360 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b445f12-f3bf-41d9-91f9-56def2b2694b-logs\") pod \"4b445f12-f3bf-41d9-91f9-56def2b2694b\" (UID: \"4b445f12-f3bf-41d9-91f9-56def2b2694b\") " Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.174432 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sg9h9\" (UniqueName: \"kubernetes.io/projected/187a5c9a-e642-4826-8543-f53fd0789757-kube-api-access-sg9h9\") pod \"187a5c9a-e642-4826-8543-f53fd0789757\" (UID: \"187a5c9a-e642-4826-8543-f53fd0789757\") " Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.175157 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b445f12-f3bf-41d9-91f9-56def2b2694b-logs" (OuterVolumeSpecName: "logs") pod "4b445f12-f3bf-41d9-91f9-56def2b2694b" (UID: "4b445f12-f3bf-41d9-91f9-56def2b2694b"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.175783 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41fc2d9b-17e4-42b0-bcee-065a237b513c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.175810 4893 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/41fc2d9b-17e4-42b0-bcee-065a237b513c-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.175823 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/187a5c9a-e642-4826-8543-f53fd0789757-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.175835 4893 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/41fc2d9b-17e4-42b0-bcee-065a237b513c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.175846 4893 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b445f12-f3bf-41d9-91f9-56def2b2694b-logs\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.175860 4893 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41fc2d9b-17e4-42b0-bcee-065a237b513c-logs\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.175873 4893 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41fc2d9b-17e4-42b0-bcee-065a237b513c-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.175884 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gg4qx\" (UniqueName: \"kubernetes.io/projected/41fc2d9b-17e4-42b0-bcee-065a237b513c-kube-api-access-gg4qx\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.179726 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41fc2d9b-17e4-42b0-bcee-065a237b513c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "41fc2d9b-17e4-42b0-bcee-065a237b513c" (UID: "41fc2d9b-17e4-42b0-bcee-065a237b513c"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.180695 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b445f12-f3bf-41d9-91f9-56def2b2694b-kube-api-access-gddtf" (OuterVolumeSpecName: "kube-api-access-gddtf") pod "4b445f12-f3bf-41d9-91f9-56def2b2694b" (UID: "4b445f12-f3bf-41d9-91f9-56def2b2694b"). InnerVolumeSpecName "kube-api-access-gddtf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.181900 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/187a5c9a-e642-4826-8543-f53fd0789757-kube-api-access-sg9h9" (OuterVolumeSpecName: "kube-api-access-sg9h9") pod "187a5c9a-e642-4826-8543-f53fd0789757" (UID: "187a5c9a-e642-4826-8543-f53fd0789757"). InnerVolumeSpecName "kube-api-access-sg9h9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.184502 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-5795cc4cb5-6bsp7"] Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.189695 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-5795cc4cb5-6bsp7"] Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.196875 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b445f12-f3bf-41d9-91f9-56def2b2694b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4b445f12-f3bf-41d9-91f9-56def2b2694b" (UID: "4b445f12-f3bf-41d9-91f9-56def2b2694b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.222079 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.226993 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b445f12-f3bf-41d9-91f9-56def2b2694b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b445f12-f3bf-41d9-91f9-56def2b2694b" (UID: "4b445f12-f3bf-41d9-91f9-56def2b2694b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.230888 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d2ca-account-create-update-wvmph" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.234735 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.244390 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.245161 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41fc2d9b-17e4-42b0-bcee-065a237b513c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "41fc2d9b-17e4-42b0-bcee-065a237b513c" (UID: "41fc2d9b-17e4-42b0-bcee-065a237b513c"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.257618 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41fc2d9b-17e4-42b0-bcee-065a237b513c-config-data" (OuterVolumeSpecName: "config-data") pod "41fc2d9b-17e4-42b0-bcee-065a237b513c" (UID: "41fc2d9b-17e4-42b0-bcee-065a237b513c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.283262 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gddtf\" (UniqueName: \"kubernetes.io/projected/4b445f12-f3bf-41d9-91f9-56def2b2694b-kube-api-access-gddtf\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.283309 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41fc2d9b-17e4-42b0-bcee-065a237b513c-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.283321 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b445f12-f3bf-41d9-91f9-56def2b2694b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.283329 4893 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4b445f12-f3bf-41d9-91f9-56def2b2694b-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.283337 4893 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/41fc2d9b-17e4-42b0-bcee-065a237b513c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.283345 4893 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/41fc2d9b-17e4-42b0-bcee-065a237b513c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.283354 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sg9h9\" (UniqueName: \"kubernetes.io/projected/187a5c9a-e642-4826-8543-f53fd0789757-kube-api-access-sg9h9\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.283455 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.284460 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.287236 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b445f12-f3bf-41d9-91f9-56def2b2694b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4b445f12-f3bf-41d9-91f9-56def2b2694b" (UID: "4b445f12-f3bf-41d9-91f9-56def2b2694b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.289458 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b445f12-f3bf-41d9-91f9-56def2b2694b-config-data" (OuterVolumeSpecName: "config-data") pod "4b445f12-f3bf-41d9-91f9-56def2b2694b" (UID: "4b445f12-f3bf-41d9-91f9-56def2b2694b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.306885 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b445f12-f3bf-41d9-91f9-56def2b2694b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4b445f12-f3bf-41d9-91f9-56def2b2694b" (UID: "4b445f12-f3bf-41d9-91f9-56def2b2694b"). 
InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.306958 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-54745b6874-xnbrr"] Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.315019 4893 scope.go:117] "RemoveContainer" containerID="e8699a5e2783d56129ce6db61a403cae9a45f49d20cbf1c4d665f290331a8241" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.315434 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.322062 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-54745b6874-xnbrr"] Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.357981 4893 scope.go:117] "RemoveContainer" containerID="e229a469d2a8cf4280c6427a369a9b0a149d127bff46b75596626d17591050a6" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.384863 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63916786-c676-4695-84a1-3d3be685de16-combined-ca-bundle\") pod \"63916786-c676-4695-84a1-3d3be685de16\" (UID: \"63916786-c676-4695-84a1-3d3be685de16\") " Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.384911 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/63916786-c676-4695-84a1-3d3be685de16-httpd-run\") pod \"63916786-c676-4695-84a1-3d3be685de16\" (UID: \"63916786-c676-4695-84a1-3d3be685de16\") " Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.384931 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7722b5d-ba92-4332-93c7-bc3aa9bfdb33-combined-ca-bundle\") pod \"f7722b5d-ba92-4332-93c7-bc3aa9bfdb33\" (UID: \"f7722b5d-ba92-4332-93c7-bc3aa9bfdb33\") " Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.384950 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"45545422-414a-433a-9de9-fbfb6e03add3\" (UID: \"45545422-414a-433a-9de9-fbfb6e03add3\") " Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.384971 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6-logs\") pod \"d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6\" (UID: \"d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6\") " Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.385005 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/740cac4e-ecd7-4752-9d29-4adb1a14577b-public-tls-certs\") pod \"740cac4e-ecd7-4752-9d29-4adb1a14577b\" (UID: \"740cac4e-ecd7-4752-9d29-4adb1a14577b\") " Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.385033 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/740cac4e-ecd7-4752-9d29-4adb1a14577b-config-data\") pod \"740cac4e-ecd7-4752-9d29-4adb1a14577b\" (UID: \"740cac4e-ecd7-4752-9d29-4adb1a14577b\") " Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.385050 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74wds\" (UniqueName: 
\"kubernetes.io/projected/d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6-kube-api-access-74wds\") pod \"d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6\" (UID: \"d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6\") " Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.385074 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8wc9\" (UniqueName: \"kubernetes.io/projected/45545422-414a-433a-9de9-fbfb6e03add3-kube-api-access-f8wc9\") pod \"45545422-414a-433a-9de9-fbfb6e03add3\" (UID: \"45545422-414a-433a-9de9-fbfb6e03add3\") " Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.385094 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45545422-414a-433a-9de9-fbfb6e03add3-logs\") pod \"45545422-414a-433a-9de9-fbfb6e03add3\" (UID: \"45545422-414a-433a-9de9-fbfb6e03add3\") " Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.385117 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/740cac4e-ecd7-4752-9d29-4adb1a14577b-logs\") pod \"740cac4e-ecd7-4752-9d29-4adb1a14577b\" (UID: \"740cac4e-ecd7-4752-9d29-4adb1a14577b\") " Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.385165 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/45545422-414a-433a-9de9-fbfb6e03add3-httpd-run\") pod \"45545422-414a-433a-9de9-fbfb6e03add3\" (UID: \"45545422-414a-433a-9de9-fbfb6e03add3\") " Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.385196 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63916786-c676-4695-84a1-3d3be685de16-config-data\") pod \"63916786-c676-4695-84a1-3d3be685de16\" (UID: \"63916786-c676-4695-84a1-3d3be685de16\") " Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.385223 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6vhr\" (UniqueName: \"kubernetes.io/projected/740cac4e-ecd7-4752-9d29-4adb1a14577b-kube-api-access-k6vhr\") pod \"740cac4e-ecd7-4752-9d29-4adb1a14577b\" (UID: \"740cac4e-ecd7-4752-9d29-4adb1a14577b\") " Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.385255 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67q4g\" (UniqueName: \"kubernetes.io/projected/f7722b5d-ba92-4332-93c7-bc3aa9bfdb33-kube-api-access-67q4g\") pod \"f7722b5d-ba92-4332-93c7-bc3aa9bfdb33\" (UID: \"f7722b5d-ba92-4332-93c7-bc3aa9bfdb33\") " Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.385270 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6-combined-ca-bundle\") pod \"d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6\" (UID: \"d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6\") " Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.385307 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45545422-414a-433a-9de9-fbfb6e03add3-scripts\") pod \"45545422-414a-433a-9de9-fbfb6e03add3\" (UID: \"45545422-414a-433a-9de9-fbfb6e03add3\") " Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.385331 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/740cac4e-ecd7-4752-9d29-4adb1a14577b-internal-tls-certs\") pod \"740cac4e-ecd7-4752-9d29-4adb1a14577b\" (UID: \"740cac4e-ecd7-4752-9d29-4adb1a14577b\") " Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.385352 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45545422-414a-433a-9de9-fbfb6e03add3-public-tls-certs\") pod \"45545422-414a-433a-9de9-fbfb6e03add3\" (UID: \"45545422-414a-433a-9de9-fbfb6e03add3\") " Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.385377 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6-nova-metadata-tls-certs\") pod \"d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6\" (UID: \"d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6\") " Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.385415 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45545422-414a-433a-9de9-fbfb6e03add3-combined-ca-bundle\") pod \"45545422-414a-433a-9de9-fbfb6e03add3\" (UID: \"45545422-414a-433a-9de9-fbfb6e03add3\") " Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.385448 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/63916786-c676-4695-84a1-3d3be685de16-internal-tls-certs\") pod \"63916786-c676-4695-84a1-3d3be685de16\" (UID: \"63916786-c676-4695-84a1-3d3be685de16\") " Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.385485 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45545422-414a-433a-9de9-fbfb6e03add3-config-data\") pod \"45545422-414a-433a-9de9-fbfb6e03add3\" (UID: \"45545422-414a-433a-9de9-fbfb6e03add3\") " Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.385518 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/740cac4e-ecd7-4752-9d29-4adb1a14577b-combined-ca-bundle\") pod \"740cac4e-ecd7-4752-9d29-4adb1a14577b\" (UID: \"740cac4e-ecd7-4752-9d29-4adb1a14577b\") " Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.385536 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63916786-c676-4695-84a1-3d3be685de16-logs\") pod \"63916786-c676-4695-84a1-3d3be685de16\" (UID: \"63916786-c676-4695-84a1-3d3be685de16\") " Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.385562 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"63916786-c676-4695-84a1-3d3be685de16\" (UID: \"63916786-c676-4695-84a1-3d3be685de16\") " Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.385556 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6-logs" (OuterVolumeSpecName: "logs") pod "d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6" (UID: "d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.385589 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hpx8\" (UniqueName: \"kubernetes.io/projected/63916786-c676-4695-84a1-3d3be685de16-kube-api-access-9hpx8\") pod \"63916786-c676-4695-84a1-3d3be685de16\" (UID: \"63916786-c676-4695-84a1-3d3be685de16\") " Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.385622 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7722b5d-ba92-4332-93c7-bc3aa9bfdb33-config-data\") pod \"f7722b5d-ba92-4332-93c7-bc3aa9bfdb33\" (UID: \"f7722b5d-ba92-4332-93c7-bc3aa9bfdb33\") " Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.385637 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6-config-data\") pod \"d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6\" (UID: \"d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6\") " Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.385660 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63916786-c676-4695-84a1-3d3be685de16-scripts\") pod \"63916786-c676-4695-84a1-3d3be685de16\" (UID: \"63916786-c676-4695-84a1-3d3be685de16\") " Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.386690 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63916786-c676-4695-84a1-3d3be685de16-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "63916786-c676-4695-84a1-3d3be685de16" (UID: "63916786-c676-4695-84a1-3d3be685de16"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.386907 4893 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b445f12-f3bf-41d9-91f9-56def2b2694b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.386924 4893 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/63916786-c676-4695-84a1-3d3be685de16-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.386935 4893 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6-logs\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.386948 4893 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b445f12-f3bf-41d9-91f9-56def2b2694b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.386959 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b445f12-f3bf-41d9-91f9-56def2b2694b-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.390465 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45545422-414a-433a-9de9-fbfb6e03add3-scripts" (OuterVolumeSpecName: "scripts") pod "45545422-414a-433a-9de9-fbfb6e03add3" (UID: "45545422-414a-433a-9de9-fbfb6e03add3"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.391852 4893 scope.go:117] "RemoveContainer" containerID="b6818068f64d6e4a2339a2f96e8276b4f2df53df750ab887db8f0527fe791e58" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.392581 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "45545422-414a-433a-9de9-fbfb6e03add3" (UID: "45545422-414a-433a-9de9-fbfb6e03add3"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.392755 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63916786-c676-4695-84a1-3d3be685de16-logs" (OuterVolumeSpecName: "logs") pod "63916786-c676-4695-84a1-3d3be685de16" (UID: "63916786-c676-4695-84a1-3d3be685de16"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.392996 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/740cac4e-ecd7-4752-9d29-4adb1a14577b-logs" (OuterVolumeSpecName: "logs") pod "740cac4e-ecd7-4752-9d29-4adb1a14577b" (UID: "740cac4e-ecd7-4752-9d29-4adb1a14577b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.395511 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63916786-c676-4695-84a1-3d3be685de16-kube-api-access-9hpx8" (OuterVolumeSpecName: "kube-api-access-9hpx8") pod "63916786-c676-4695-84a1-3d3be685de16" (UID: "63916786-c676-4695-84a1-3d3be685de16"). InnerVolumeSpecName "kube-api-access-9hpx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.396060 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6-kube-api-access-74wds" (OuterVolumeSpecName: "kube-api-access-74wds") pod "d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6" (UID: "d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6"). InnerVolumeSpecName "kube-api-access-74wds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.402602 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45545422-414a-433a-9de9-fbfb6e03add3-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "45545422-414a-433a-9de9-fbfb6e03add3" (UID: "45545422-414a-433a-9de9-fbfb6e03add3"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.402602 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45545422-414a-433a-9de9-fbfb6e03add3-logs" (OuterVolumeSpecName: "logs") pod "45545422-414a-433a-9de9-fbfb6e03add3" (UID: "45545422-414a-433a-9de9-fbfb6e03add3"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.407908 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/740cac4e-ecd7-4752-9d29-4adb1a14577b-kube-api-access-k6vhr" (OuterVolumeSpecName: "kube-api-access-k6vhr") pod "740cac4e-ecd7-4752-9d29-4adb1a14577b" (UID: "740cac4e-ecd7-4752-9d29-4adb1a14577b"). InnerVolumeSpecName "kube-api-access-k6vhr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.423722 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63916786-c676-4695-84a1-3d3be685de16-scripts" (OuterVolumeSpecName: "scripts") pod "63916786-c676-4695-84a1-3d3be685de16" (UID: "63916786-c676-4695-84a1-3d3be685de16"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.423872 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45545422-414a-433a-9de9-fbfb6e03add3-kube-api-access-f8wc9" (OuterVolumeSpecName: "kube-api-access-f8wc9") pod "45545422-414a-433a-9de9-fbfb6e03add3" (UID: "45545422-414a-433a-9de9-fbfb6e03add3"). InnerVolumeSpecName "kube-api-access-f8wc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.423751 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "63916786-c676-4695-84a1-3d3be685de16" (UID: "63916786-c676-4695-84a1-3d3be685de16"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.433058 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7722b5d-ba92-4332-93c7-bc3aa9bfdb33-kube-api-access-67q4g" (OuterVolumeSpecName: "kube-api-access-67q4g") pod "f7722b5d-ba92-4332-93c7-bc3aa9bfdb33" (UID: "f7722b5d-ba92-4332-93c7-bc3aa9bfdb33"). InnerVolumeSpecName "kube-api-access-67q4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.436831 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63916786-c676-4695-84a1-3d3be685de16-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "63916786-c676-4695-84a1-3d3be685de16" (UID: "63916786-c676-4695-84a1-3d3be685de16"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.448933 4893 scope.go:117] "RemoveContainer" containerID="e229a469d2a8cf4280c6427a369a9b0a149d127bff46b75596626d17591050a6" Jan 21 07:22:24 crc kubenswrapper[4893]: E0121 07:22:24.450836 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e229a469d2a8cf4280c6427a369a9b0a149d127bff46b75596626d17591050a6\": container with ID starting with e229a469d2a8cf4280c6427a369a9b0a149d127bff46b75596626d17591050a6 not found: ID does not exist" containerID="e229a469d2a8cf4280c6427a369a9b0a149d127bff46b75596626d17591050a6" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.450891 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e229a469d2a8cf4280c6427a369a9b0a149d127bff46b75596626d17591050a6"} err="failed to get container status \"e229a469d2a8cf4280c6427a369a9b0a149d127bff46b75596626d17591050a6\": rpc error: code = NotFound desc = could not find container \"e229a469d2a8cf4280c6427a369a9b0a149d127bff46b75596626d17591050a6\": container with ID starting with e229a469d2a8cf4280c6427a369a9b0a149d127bff46b75596626d17591050a6 not found: ID does not exist" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.450930 4893 scope.go:117] "RemoveContainer" containerID="b6818068f64d6e4a2339a2f96e8276b4f2df53df750ab887db8f0527fe791e58" Jan 21 07:22:24 crc kubenswrapper[4893]: E0121 07:22:24.451397 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6818068f64d6e4a2339a2f96e8276b4f2df53df750ab887db8f0527fe791e58\": container with ID starting with b6818068f64d6e4a2339a2f96e8276b4f2df53df750ab887db8f0527fe791e58 not found: ID does not exist" containerID="b6818068f64d6e4a2339a2f96e8276b4f2df53df750ab887db8f0527fe791e58" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.451435 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6818068f64d6e4a2339a2f96e8276b4f2df53df750ab887db8f0527fe791e58"} err="failed to get container status \"b6818068f64d6e4a2339a2f96e8276b4f2df53df750ab887db8f0527fe791e58\": rpc error: code = NotFound desc = could not find container \"b6818068f64d6e4a2339a2f96e8276b4f2df53df750ab887db8f0527fe791e58\": container with ID starting with b6818068f64d6e4a2339a2f96e8276b4f2df53df750ab887db8f0527fe791e58 not found: ID does not exist" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.464509 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/740cac4e-ecd7-4752-9d29-4adb1a14577b-config-data" (OuterVolumeSpecName: "config-data") pod "740cac4e-ecd7-4752-9d29-4adb1a14577b" (UID: "740cac4e-ecd7-4752-9d29-4adb1a14577b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.496230 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7722b5d-ba92-4332-93c7-bc3aa9bfdb33-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f7722b5d-ba92-4332-93c7-bc3aa9bfdb33" (UID: "f7722b5d-ba92-4332-93c7-bc3aa9bfdb33"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.497277 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/1dd69159-4b4b-4b13-aaa2-7b9edf7c468a-kube-state-metrics-tls-config\") pod \"1dd69159-4b4b-4b13-aaa2-7b9edf7c468a\" (UID: \"1dd69159-4b4b-4b13-aaa2-7b9edf7c468a\") " Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.497353 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dd69159-4b4b-4b13-aaa2-7b9edf7c468a-combined-ca-bundle\") pod \"1dd69159-4b4b-4b13-aaa2-7b9edf7c468a\" (UID: \"1dd69159-4b4b-4b13-aaa2-7b9edf7c468a\") " Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.497540 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnxrt\" (UniqueName: \"kubernetes.io/projected/1dd69159-4b4b-4b13-aaa2-7b9edf7c468a-kube-api-access-rnxrt\") pod \"1dd69159-4b4b-4b13-aaa2-7b9edf7c468a\" (UID: \"1dd69159-4b4b-4b13-aaa2-7b9edf7c468a\") " Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.497654 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/1dd69159-4b4b-4b13-aaa2-7b9edf7c468a-kube-state-metrics-tls-certs\") pod \"1dd69159-4b4b-4b13-aaa2-7b9edf7c468a\" (UID: \"1dd69159-4b4b-4b13-aaa2-7b9edf7c468a\") " Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.498733 4893 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63916786-c676-4695-84a1-3d3be685de16-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.498758 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63916786-c676-4695-84a1-3d3be685de16-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.498772 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7722b5d-ba92-4332-93c7-bc3aa9bfdb33-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.498802 4893 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.498818 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/740cac4e-ecd7-4752-9d29-4adb1a14577b-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.498830 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74wds\" (UniqueName: \"kubernetes.io/projected/d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6-kube-api-access-74wds\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.498842 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8wc9\" (UniqueName: \"kubernetes.io/projected/45545422-414a-433a-9de9-fbfb6e03add3-kube-api-access-f8wc9\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.498902 4893 reconciler_common.go:293] "Volume detached for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/45545422-414a-433a-9de9-fbfb6e03add3-logs\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.498915 4893 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/740cac4e-ecd7-4752-9d29-4adb1a14577b-logs\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.498925 4893 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/45545422-414a-433a-9de9-fbfb6e03add3-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.498973 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6vhr\" (UniqueName: \"kubernetes.io/projected/740cac4e-ecd7-4752-9d29-4adb1a14577b-kube-api-access-k6vhr\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.498986 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67q4g\" (UniqueName: \"kubernetes.io/projected/f7722b5d-ba92-4332-93c7-bc3aa9bfdb33-kube-api-access-67q4g\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.498998 4893 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45545422-414a-433a-9de9-fbfb6e03add3-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.499008 4893 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63916786-c676-4695-84a1-3d3be685de16-logs\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.499034 4893 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.499062 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hpx8\" (UniqueName: \"kubernetes.io/projected/63916786-c676-4695-84a1-3d3be685de16-kube-api-access-9hpx8\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.512266 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dd69159-4b4b-4b13-aaa2-7b9edf7c468a-kube-api-access-rnxrt" (OuterVolumeSpecName: "kube-api-access-rnxrt") pod "1dd69159-4b4b-4b13-aaa2-7b9edf7c468a" (UID: "1dd69159-4b4b-4b13-aaa2-7b9edf7c468a"). InnerVolumeSpecName "kube-api-access-rnxrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.514378 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6" (UID: "d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.545031 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6" (UID: "d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.560960 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/740cac4e-ecd7-4752-9d29-4adb1a14577b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "740cac4e-ecd7-4752-9d29-4adb1a14577b" (UID: "740cac4e-ecd7-4752-9d29-4adb1a14577b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.573868 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/740cac4e-ecd7-4752-9d29-4adb1a14577b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "740cac4e-ecd7-4752-9d29-4adb1a14577b" (UID: "740cac4e-ecd7-4752-9d29-4adb1a14577b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.576278 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6-config-data" (OuterVolumeSpecName: "config-data") pod "d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6" (UID: "d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.578897 4893 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.579014 4893 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.583688 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63916786-c676-4695-84a1-3d3be685de16-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "63916786-c676-4695-84a1-3d3be685de16" (UID: "63916786-c676-4695-84a1-3d3be685de16"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.583988 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dd69159-4b4b-4b13-aaa2-7b9edf7c468a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1dd69159-4b4b-4b13-aaa2-7b9edf7c468a" (UID: "1dd69159-4b4b-4b13-aaa2-7b9edf7c468a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.605367 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45545422-414a-433a-9de9-fbfb6e03add3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "45545422-414a-433a-9de9-fbfb6e03add3" (UID: "45545422-414a-433a-9de9-fbfb6e03add3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.605635 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.605694 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnxrt\" (UniqueName: \"kubernetes.io/projected/1dd69159-4b4b-4b13-aaa2-7b9edf7c468a-kube-api-access-rnxrt\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.605709 4893 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.605722 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.605734 4893 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/740cac4e-ecd7-4752-9d29-4adb1a14577b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.605746 4893 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.605762 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45545422-414a-433a-9de9-fbfb6e03add3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.605776 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dd69159-4b4b-4b13-aaa2-7b9edf7c468a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.605787 4893 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/63916786-c676-4695-84a1-3d3be685de16-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.605799 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/740cac4e-ecd7-4752-9d29-4adb1a14577b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.605810 4893 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.619927 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dd69159-4b4b-4b13-aaa2-7b9edf7c468a-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "1dd69159-4b4b-4b13-aaa2-7b9edf7c468a" (UID: "1dd69159-4b4b-4b13-aaa2-7b9edf7c468a"). InnerVolumeSpecName "kube-state-metrics-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.625212 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63916786-c676-4695-84a1-3d3be685de16-config-data" (OuterVolumeSpecName: "config-data") pod "63916786-c676-4695-84a1-3d3be685de16" (UID: "63916786-c676-4695-84a1-3d3be685de16"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.632575 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45545422-414a-433a-9de9-fbfb6e03add3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "45545422-414a-433a-9de9-fbfb6e03add3" (UID: "45545422-414a-433a-9de9-fbfb6e03add3"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.647181 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7722b5d-ba92-4332-93c7-bc3aa9bfdb33-config-data" (OuterVolumeSpecName: "config-data") pod "f7722b5d-ba92-4332-93c7-bc3aa9bfdb33" (UID: "f7722b5d-ba92-4332-93c7-bc3aa9bfdb33"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.648862 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/740cac4e-ecd7-4752-9d29-4adb1a14577b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "740cac4e-ecd7-4752-9d29-4adb1a14577b" (UID: "740cac4e-ecd7-4752-9d29-4adb1a14577b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.674865 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dd69159-4b4b-4b13-aaa2-7b9edf7c468a-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "1dd69159-4b4b-4b13-aaa2-7b9edf7c468a" (UID: "1dd69159-4b4b-4b13-aaa2-7b9edf7c468a"). InnerVolumeSpecName "kube-state-metrics-tls-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.682901 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45545422-414a-433a-9de9-fbfb6e03add3-config-data" (OuterVolumeSpecName: "config-data") pod "45545422-414a-433a-9de9-fbfb6e03add3" (UID: "45545422-414a-433a-9de9-fbfb6e03add3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:24 crc kubenswrapper[4893]: E0121 07:22:24.699256 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2d1941830623140962da37a4e0f34493c6b7e19795a6d3b41ff47906cfda9a51 is running failed: container process not found" containerID="2d1941830623140962da37a4e0f34493c6b7e19795a6d3b41ff47906cfda9a51" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 21 07:22:24 crc kubenswrapper[4893]: E0121 07:22:24.699546 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2d1941830623140962da37a4e0f34493c6b7e19795a6d3b41ff47906cfda9a51 is running failed: container process not found" containerID="2d1941830623140962da37a4e0f34493c6b7e19795a6d3b41ff47906cfda9a51" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 21 07:22:24 crc kubenswrapper[4893]: E0121 07:22:24.700041 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2d1941830623140962da37a4e0f34493c6b7e19795a6d3b41ff47906cfda9a51 is running failed: container process not found" containerID="2d1941830623140962da37a4e0f34493c6b7e19795a6d3b41ff47906cfda9a51" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 21 07:22:24 crc kubenswrapper[4893]: E0121 07:22:24.700104 4893 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2d1941830623140962da37a4e0f34493c6b7e19795a6d3b41ff47906cfda9a51 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="9c8d3670-41c0-4649-8a2f-38b090638cac" containerName="nova-scheduler-scheduler" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.707056 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45545422-414a-433a-9de9-fbfb6e03add3-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.707084 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7722b5d-ba92-4332-93c7-bc3aa9bfdb33-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.707094 4893 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/740cac4e-ecd7-4752-9d29-4adb1a14577b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.707106 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63916786-c676-4695-84a1-3d3be685de16-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.707115 4893 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/1dd69159-4b4b-4b13-aaa2-7b9edf7c468a-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.707125 4893 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45545422-414a-433a-9de9-fbfb6e03add3-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 
07:22:24.707135 4893 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/1dd69159-4b4b-4b13-aaa2-7b9edf7c468a-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.740904 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7fc4c6bb88-6pfmp" event={"ID":"4b445f12-f3bf-41d9-91f9-56def2b2694b","Type":"ContainerDied","Data":"b437088a8c6a7a0a63a89a293630939664c63c2de3c2fe1a4391b85beb796b1c"} Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.740957 4893 scope.go:117] "RemoveContainer" containerID="7f7d2aeb9b4cbaf2e08372f0fc88c8fdf81814a1c30309f7310a68b860cbf2b7" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.741059 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7fc4c6bb88-6pfmp" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.750146 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"f7722b5d-ba92-4332-93c7-bc3aa9bfdb33","Type":"ContainerDied","Data":"0bf95a6fdcc0ae3f81f550cc775fec3fcee4e15a83cc525de4f80754cc16c083"} Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.750229 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.753317 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-d4ab-account-create-update-6nbg8" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.753295 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d4ab-account-create-update-6nbg8" event={"ID":"187a5c9a-e642-4826-8543-f53fd0789757","Type":"ContainerDied","Data":"f1727d3dceafb15809e9cd2a8bdc7b267e08952db1a7092425588fb58ef1c698"} Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.759299 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pvg9n" event={"ID":"f6a58a1a-1345-46f4-bb93-7b748440724a","Type":"ContainerStarted","Data":"226351752bfb0f8672531fef671a9fed736a5468865ec4ffa5b69d1e2885c0d3"} Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.763650 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"63916786-c676-4695-84a1-3d3be685de16","Type":"ContainerDied","Data":"b36eeeb0948f38b40e09fa379e2cdbecc5a9f1128c7e16702611b396f1fd5337"} Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.763781 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.791728 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"41fc2d9b-17e4-42b0-bcee-065a237b513c","Type":"ContainerDied","Data":"09d9fe65a3c699386efeaee2ffed31652230db3bd9302407ca1b37af6576a719"} Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.792867 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.797448 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6","Type":"ContainerDied","Data":"c270105d5138cf1d34989ae06d55c397a17492ce79c47ca41c8b4386880d4996"} Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.797610 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.810472 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljn47\" (UniqueName: \"kubernetes.io/projected/aff46ce4-9e0d-4805-98fe-52b60b607877-kube-api-access-ljn47\") pod \"keystone-d2ca-account-create-update-wvmph\" (UID: \"aff46ce4-9e0d-4805-98fe-52b60b607877\") " pod="openstack/keystone-d2ca-account-create-update-wvmph" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.810560 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aff46ce4-9e0d-4805-98fe-52b60b607877-operator-scripts\") pod \"keystone-d2ca-account-create-update-wvmph\" (UID: \"aff46ce4-9e0d-4805-98fe-52b60b607877\") " pod="openstack/keystone-d2ca-account-create-update-wvmph" Jan 21 07:22:24 crc kubenswrapper[4893]: E0121 07:22:24.810927 4893 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 21 07:22:24 crc kubenswrapper[4893]: E0121 07:22:24.810998 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/aff46ce4-9e0d-4805-98fe-52b60b607877-operator-scripts podName:aff46ce4-9e0d-4805-98fe-52b60b607877 nodeName:}" failed. No retries permitted until 2026-01-21 07:22:26.810981143 +0000 UTC m=+1688.041327045 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/aff46ce4-9e0d-4805-98fe-52b60b607877-operator-scripts") pod "keystone-d2ca-account-create-update-wvmph" (UID: "aff46ce4-9e0d-4805-98fe-52b60b607877") : configmap "openstack-scripts" not found Jan 21 07:22:24 crc kubenswrapper[4893]: E0121 07:22:24.819112 4893 projected.go:194] Error preparing data for projected volume kube-api-access-ljn47 for pod openstack/keystone-d2ca-account-create-update-wvmph: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.820377 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.820129 4893 generic.go:334] "Generic (PLEG): container finished" podID="f891af55-ec46-4261-9f5e-01a1c181f194" containerID="ad34c1fe2a091616f40d2e25e67f46102e054ffaf965ca71fc5193bf96e1733d" exitCode=0 Jan 21 07:22:24 crc kubenswrapper[4893]: E0121 07:22:24.821437 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/aff46ce4-9e0d-4805-98fe-52b60b607877-kube-api-access-ljn47 podName:aff46ce4-9e0d-4805-98fe-52b60b607877 nodeName:}" failed. No retries permitted until 2026-01-21 07:22:26.820916851 +0000 UTC m=+1688.051262753 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ljn47" (UniqueName: "kubernetes.io/projected/aff46ce4-9e0d-4805-98fe-52b60b607877-kube-api-access-ljn47") pod "keystone-d2ca-account-create-update-wvmph" (UID: "aff46ce4-9e0d-4805-98fe-52b60b607877") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.821451 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f891af55-ec46-4261-9f5e-01a1c181f194","Type":"ContainerDied","Data":"ad34c1fe2a091616f40d2e25e67f46102e054ffaf965ca71fc5193bf96e1733d"} Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.826155 4893 generic.go:334] "Generic (PLEG): container finished" podID="9c8d3670-41c0-4649-8a2f-38b090638cac" containerID="2d1941830623140962da37a4e0f34493c6b7e19795a6d3b41ff47906cfda9a51" exitCode=0 Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.826281 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9c8d3670-41c0-4649-8a2f-38b090638cac","Type":"ContainerDied","Data":"2d1941830623140962da37a4e0f34493c6b7e19795a6d3b41ff47906cfda9a51"} Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.826321 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9c8d3670-41c0-4649-8a2f-38b090638cac","Type":"ContainerDied","Data":"26d1d605009c77e7c371db6aede1dd07fa093450374c37ac11b315edc0ce5473"} Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.826345 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26d1d605009c77e7c371db6aede1dd07fa093450374c37ac11b315edc0ce5473" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.832368 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.833296 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1dd69159-4b4b-4b13-aaa2-7b9edf7c468a","Type":"ContainerDied","Data":"463251b62b3d7b213feb1ef0dcb9d0aa66b72528b70d39aac0b56564c010df8f"} Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.840986 4893 scope.go:117] "RemoveContainer" containerID="af2cbd2416ff8e2a96ecf8094812868e567e247c82f334bac61e2985c9c7061b" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.845428 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"740cac4e-ecd7-4752-9d29-4adb1a14577b","Type":"ContainerDied","Data":"b611d728318d141b8d4e5de7bcb7f46174303f0ac1795abd1a6c6a1a4d220908"} Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.845617 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.929040 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/520610a0-97e8-45ed-8020-952d9d4501b1-kolla-config\") pod \"520610a0-97e8-45ed-8020-952d9d4501b1\" (UID: \"520610a0-97e8-45ed-8020-952d9d4501b1\") " Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.929121 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/520610a0-97e8-45ed-8020-952d9d4501b1-combined-ca-bundle\") pod \"520610a0-97e8-45ed-8020-952d9d4501b1\" (UID: \"520610a0-97e8-45ed-8020-952d9d4501b1\") " Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.929158 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/520610a0-97e8-45ed-8020-952d9d4501b1-memcached-tls-certs\") pod \"520610a0-97e8-45ed-8020-952d9d4501b1\" (UID: \"520610a0-97e8-45ed-8020-952d9d4501b1\") " Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.929180 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68pbl\" (UniqueName: \"kubernetes.io/projected/520610a0-97e8-45ed-8020-952d9d4501b1-kube-api-access-68pbl\") pod \"520610a0-97e8-45ed-8020-952d9d4501b1\" (UID: \"520610a0-97e8-45ed-8020-952d9d4501b1\") " Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.929227 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/520610a0-97e8-45ed-8020-952d9d4501b1-config-data\") pod \"520610a0-97e8-45ed-8020-952d9d4501b1\" (UID: \"520610a0-97e8-45ed-8020-952d9d4501b1\") " Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.929341 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"45545422-414a-433a-9de9-fbfb6e03add3","Type":"ContainerDied","Data":"7534b26bb4c70a70d3098f80c9972d4edc0df1425f39e144f1dba1662c2f2182"} Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.929507 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.931101 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/520610a0-97e8-45ed-8020-952d9d4501b1-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "520610a0-97e8-45ed-8020-952d9d4501b1" (UID: "520610a0-97e8-45ed-8020-952d9d4501b1"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.932658 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/520610a0-97e8-45ed-8020-952d9d4501b1-config-data" (OuterVolumeSpecName: "config-data") pod "520610a0-97e8-45ed-8020-952d9d4501b1" (UID: "520610a0-97e8-45ed-8020-952d9d4501b1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.938518 4893 generic.go:334] "Generic (PLEG): container finished" podID="520610a0-97e8-45ed-8020-952d9d4501b1" containerID="903c82795e7a99adb1f118a16af4579fab0871c6da09160b60ba62ce90ba5b7e" exitCode=0 Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.938721 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"520610a0-97e8-45ed-8020-952d9d4501b1","Type":"ContainerDied","Data":"903c82795e7a99adb1f118a16af4579fab0871c6da09160b60ba62ce90ba5b7e"} Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.938792 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"520610a0-97e8-45ed-8020-952d9d4501b1","Type":"ContainerDied","Data":"b23e82f87a93924d5586ecb18b1e0c8a8d70b2c3e3672f1e477dd3c3a082d93c"} Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.938835 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2hvzv" podUID="e544fa30-133c-4728-a8c5-99084bcb4367" containerName="registry-server" containerID="cri-o://8a4fce4f9b725e2628eecfe0130d0d88ca8af020d3290640c0a98afd2910bb5d" gracePeriod=2 Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.938887 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.939429 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d2ca-account-create-update-wvmph" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.948069 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/520610a0-97e8-45ed-8020-952d9d4501b1-kube-api-access-68pbl" (OuterVolumeSpecName: "kube-api-access-68pbl") pod "520610a0-97e8-45ed-8020-952d9d4501b1" (UID: "520610a0-97e8-45ed-8020-952d9d4501b1"). InnerVolumeSpecName "kube-api-access-68pbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.960839 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/520610a0-97e8-45ed-8020-952d9d4501b1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "520610a0-97e8-45ed-8020-952d9d4501b1" (UID: "520610a0-97e8-45ed-8020-952d9d4501b1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.980400 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/520610a0-97e8-45ed-8020-952d9d4501b1-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "520610a0-97e8-45ed-8020-952d9d4501b1" (UID: "520610a0-97e8-45ed-8020-952d9d4501b1"). InnerVolumeSpecName "memcached-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:24 crc kubenswrapper[4893]: I0121 07:22:24.992979 4893 scope.go:117] "RemoveContainer" containerID="805ea082486a9771af6cebd7498e3962947faff7e48ac3cc9a7f4ffadd851b1a" Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.031192 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/520610a0-97e8-45ed-8020-952d9d4501b1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.031573 4893 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/520610a0-97e8-45ed-8020-952d9d4501b1-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.031641 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68pbl\" (UniqueName: \"kubernetes.io/projected/520610a0-97e8-45ed-8020-952d9d4501b1-kube-api-access-68pbl\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.031714 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/520610a0-97e8-45ed-8020-952d9d4501b1-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.031849 4893 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/520610a0-97e8-45ed-8020-952d9d4501b1-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.122406 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.126483 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.132594 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c8d3670-41c0-4649-8a2f-38b090638cac-combined-ca-bundle\") pod \"9c8d3670-41c0-4649-8a2f-38b090638cac\" (UID: \"9c8d3670-41c0-4649-8a2f-38b090638cac\") " Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.132726 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8m2v\" (UniqueName: \"kubernetes.io/projected/9c8d3670-41c0-4649-8a2f-38b090638cac-kube-api-access-r8m2v\") pod \"9c8d3670-41c0-4649-8a2f-38b090638cac\" (UID: \"9c8d3670-41c0-4649-8a2f-38b090638cac\") " Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.132752 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c8d3670-41c0-4649-8a2f-38b090638cac-config-data\") pod \"9c8d3670-41c0-4649-8a2f-38b090638cac\" (UID: \"9c8d3670-41c0-4649-8a2f-38b090638cac\") " Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.139692 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c8d3670-41c0-4649-8a2f-38b090638cac-kube-api-access-r8m2v" (OuterVolumeSpecName: "kube-api-access-r8m2v") pod "9c8d3670-41c0-4649-8a2f-38b090638cac" (UID: "9c8d3670-41c0-4649-8a2f-38b090638cac"). InnerVolumeSpecName "kube-api-access-r8m2v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.145607 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.162455 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.168775 4893 scope.go:117] "RemoveContainer" containerID="2ed56fea6ed96fd765f43737ab0141951ab632e2d98acd1cb85189751d716818" Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.172787 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c8d3670-41c0-4649-8a2f-38b090638cac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c8d3670-41c0-4649-8a2f-38b090638cac" (UID: "9c8d3670-41c0-4649-8a2f-38b090638cac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.179768 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.188719 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c8d3670-41c0-4649-8a2f-38b090638cac-config-data" (OuterVolumeSpecName: "config-data") pod "9c8d3670-41c0-4649-8a2f-38b090638cac" (UID: "9c8d3670-41c0-4649-8a2f-38b090638cac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.188895 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7fc4c6bb88-6pfmp"] Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.199215 4893 scope.go:117] "RemoveContainer" containerID="cf025bb163e48ab531bc02302eeaab4063f97ae75eabc9949d6dec3d92a30857" Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.211518 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7fc4c6bb88-6pfmp"] Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.235115 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c8d3670-41c0-4649-8a2f-38b090638cac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.235152 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8m2v\" (UniqueName: \"kubernetes.io/projected/9c8d3670-41c0-4649-8a2f-38b090638cac-kube-api-access-r8m2v\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.235164 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c8d3670-41c0-4649-8a2f-38b090638cac-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.244120 4893 scope.go:117] "RemoveContainer" containerID="fb8af694018c30b6b38db1c567cc9a482101811cee291371c4cbd5248400b963" Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.278856 4893 scope.go:117] "RemoveContainer" containerID="b233f3f10881d6ab9bfb3f123d866143df46653ea77405b1477d41577b5b9d37" Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.297015 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-d4ab-account-create-update-6nbg8"] Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.306333 4893 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/nova-api-d4ab-account-create-update-6nbg8"] Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.328370 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.338131 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 07:22:25 crc kubenswrapper[4893]: E0121 07:22:25.339117 4893 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 21 07:22:25 crc kubenswrapper[4893]: E0121 07:22:25.340923 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/89f70f50-3d66-4917-bfe2-1084a55e4eb9-config-data podName:89f70f50-3d66-4917-bfe2-1084a55e4eb9 nodeName:}" failed. No retries permitted until 2026-01-21 07:22:33.340900747 +0000 UTC m=+1694.571246649 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/89f70f50-3d66-4917-bfe2-1084a55e4eb9-config-data") pod "rabbitmq-server-0" (UID: "89f70f50-3d66-4917-bfe2-1084a55e4eb9") : configmap "rabbitmq-config-data" not found Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.346451 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-pvg9n" Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.349086 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.355267 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.363736 4893 scope.go:117] "RemoveContainer" containerID="357413c2169766654a3f84ffb51b7dca2610fa69e2e67bc3239b3491d881ff66" Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.372137 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-d2ca-account-create-update-wvmph"] Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.382815 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-d2ca-account-create-update-wvmph"] Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.392574 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.399665 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.401749 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.406606 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.406871 4893 scope.go:117] "RemoveContainer" containerID="e5a2c7e9e8afbc7e00e8fde7ad874e7b56174cc0c7a9869b437318952fda7126" Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.411899 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.417679 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.424234 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.429044 4893 
scope.go:117] "RemoveContainer" containerID="ae932dd21754883ff82584e62cef856bfa6cbc6aee915c47053feb942b516a54" Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.429791 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Jan 21 07:22:25 crc kubenswrapper[4893]: E0121 07:22:25.434155 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ee2ac52ca03e9ba8604209edd0e24ede0af7849c83ac6195ee87a7943fa359b3 is running failed: container process not found" containerID="ee2ac52ca03e9ba8604209edd0e24ede0af7849c83ac6195ee87a7943fa359b3" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 07:22:25 crc kubenswrapper[4893]: E0121 07:22:25.434452 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ee2ac52ca03e9ba8604209edd0e24ede0af7849c83ac6195ee87a7943fa359b3 is running failed: container process not found" containerID="ee2ac52ca03e9ba8604209edd0e24ede0af7849c83ac6195ee87a7943fa359b3" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 07:22:25 crc kubenswrapper[4893]: E0121 07:22:25.434698 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ee2ac52ca03e9ba8604209edd0e24ede0af7849c83ac6195ee87a7943fa359b3 is running failed: container process not found" containerID="ee2ac52ca03e9ba8604209edd0e24ede0af7849c83ac6195ee87a7943fa359b3" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 07:22:25 crc kubenswrapper[4893]: E0121 07:22:25.434729 4893 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ee2ac52ca03e9ba8604209edd0e24ede0af7849c83ac6195ee87a7943fa359b3 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-zvt96" podUID="78d5f974-5570-4407-8dbe-7471ae98fd50" containerName="ovsdb-server" Jan 21 07:22:25 crc kubenswrapper[4893]: E0121 07:22:25.436035 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="641f45d881156d21fd8815cd5b5efbac82f8de33d1a526e07cb2065a85cb4351" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 07:22:25 crc kubenswrapper[4893]: E0121 07:22:25.437080 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="641f45d881156d21fd8815cd5b5efbac82f8de33d1a526e07cb2065a85cb4351" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 07:22:25 crc kubenswrapper[4893]: E0121 07:22:25.437908 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="641f45d881156d21fd8815cd5b5efbac82f8de33d1a526e07cb2065a85cb4351" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 07:22:25 crc kubenswrapper[4893]: E0121 07:22:25.437938 4893 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , 
stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-zvt96" podUID="78d5f974-5570-4407-8dbe-7471ae98fd50" containerName="ovs-vswitchd" Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.439379 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6a58a1a-1345-46f4-bb93-7b748440724a-operator-scripts\") pod \"f6a58a1a-1345-46f4-bb93-7b748440724a\" (UID: \"f6a58a1a-1345-46f4-bb93-7b748440724a\") " Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.439466 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-874pr\" (UniqueName: \"kubernetes.io/projected/f6a58a1a-1345-46f4-bb93-7b748440724a-kube-api-access-874pr\") pod \"f6a58a1a-1345-46f4-bb93-7b748440724a\" (UID: \"f6a58a1a-1345-46f4-bb93-7b748440724a\") " Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.439828 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6a58a1a-1345-46f4-bb93-7b748440724a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f6a58a1a-1345-46f4-bb93-7b748440724a" (UID: "f6a58a1a-1345-46f4-bb93-7b748440724a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.439940 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aff46ce4-9e0d-4805-98fe-52b60b607877-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.439960 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6a58a1a-1345-46f4-bb93-7b748440724a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.439996 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljn47\" (UniqueName: \"kubernetes.io/projected/aff46ce4-9e0d-4805-98fe-52b60b607877-kube-api-access-ljn47\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.452622 4893 scope.go:117] "RemoveContainer" containerID="31c581f13004a2f7815d44365eb034baed7c66ac483f7fa7c22317077d696c9a" Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.456391 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6a58a1a-1345-46f4-bb93-7b748440724a-kube-api-access-874pr" (OuterVolumeSpecName: "kube-api-access-874pr") pod "f6a58a1a-1345-46f4-bb93-7b748440724a" (UID: "f6a58a1a-1345-46f4-bb93-7b748440724a"). InnerVolumeSpecName "kube-api-access-874pr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.471188 4893 scope.go:117] "RemoveContainer" containerID="049e37b8b5a580dac1053dd24aa63d0528098d200b566e60ab78bd88f14de585" Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.493264 4893 scope.go:117] "RemoveContainer" containerID="9af6af2cf0b6fc56ff8fff6040414d4c6371bd930a27e4d908e26718f4910e2e" Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.519068 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2hvzv" Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.520225 4893 scope.go:117] "RemoveContainer" containerID="98a381bf3587dbe6a6decea70f6e5a06994af8d254a33bc9496fa0afb1283c8d" Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.565985 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-874pr\" (UniqueName: \"kubernetes.io/projected/f6a58a1a-1345-46f4-bb93-7b748440724a-kube-api-access-874pr\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.664831 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-dfvzw" podUID="80680178-a1d2-4135-8949-881dc7ac92ea" containerName="ovn-controller" probeResult="failure" output="command timed out" Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.672687 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e544fa30-133c-4728-a8c5-99084bcb4367-utilities\") pod \"e544fa30-133c-4728-a8c5-99084bcb4367\" (UID: \"e544fa30-133c-4728-a8c5-99084bcb4367\") " Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.672752 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggbqk\" (UniqueName: \"kubernetes.io/projected/e544fa30-133c-4728-a8c5-99084bcb4367-kube-api-access-ggbqk\") pod \"e544fa30-133c-4728-a8c5-99084bcb4367\" (UID: \"e544fa30-133c-4728-a8c5-99084bcb4367\") " Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.672919 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e544fa30-133c-4728-a8c5-99084bcb4367-catalog-content\") pod \"e544fa30-133c-4728-a8c5-99084bcb4367\" (UID: \"e544fa30-133c-4728-a8c5-99084bcb4367\") " Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.674838 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0622bcb2-e8ab-4e4b-a33f-64e48320b232" path="/var/lib/kubelet/pods/0622bcb2-e8ab-4e4b-a33f-64e48320b232/volumes" Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.678095 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="187a5c9a-e642-4826-8543-f53fd0789757" path="/var/lib/kubelet/pods/187a5c9a-e642-4826-8543-f53fd0789757/volumes" Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.679221 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e544fa30-133c-4728-a8c5-99084bcb4367-utilities" (OuterVolumeSpecName: "utilities") pod "e544fa30-133c-4728-a8c5-99084bcb4367" (UID: "e544fa30-133c-4728-a8c5-99084bcb4367"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.680574 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dd69159-4b4b-4b13-aaa2-7b9edf7c468a" path="/var/lib/kubelet/pods/1dd69159-4b4b-4b13-aaa2-7b9edf7c468a/volumes" Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.681174 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2caca0fd-0f3f-4725-a196-04463abed671" path="/var/lib/kubelet/pods/2caca0fd-0f3f-4725-a196-04463abed671/volumes" Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.681994 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41fc2d9b-17e4-42b0-bcee-065a237b513c" path="/var/lib/kubelet/pods/41fc2d9b-17e4-42b0-bcee-065a237b513c/volumes" Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.684626 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45545422-414a-433a-9de9-fbfb6e03add3" path="/var/lib/kubelet/pods/45545422-414a-433a-9de9-fbfb6e03add3/volumes" Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.686215 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b445f12-f3bf-41d9-91f9-56def2b2694b" path="/var/lib/kubelet/pods/4b445f12-f3bf-41d9-91f9-56def2b2694b/volumes" Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.686438 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e544fa30-133c-4728-a8c5-99084bcb4367-kube-api-access-ggbqk" (OuterVolumeSpecName: "kube-api-access-ggbqk") pod "e544fa30-133c-4728-a8c5-99084bcb4367" (UID: "e544fa30-133c-4728-a8c5-99084bcb4367"). InnerVolumeSpecName "kube-api-access-ggbqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.687654 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="520610a0-97e8-45ed-8020-952d9d4501b1" path="/var/lib/kubelet/pods/520610a0-97e8-45ed-8020-952d9d4501b1/volumes" Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.689961 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63916786-c676-4695-84a1-3d3be685de16" path="/var/lib/kubelet/pods/63916786-c676-4695-84a1-3d3be685de16/volumes" Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.690661 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="740cac4e-ecd7-4752-9d29-4adb1a14577b" path="/var/lib/kubelet/pods/740cac4e-ecd7-4752-9d29-4adb1a14577b/volumes" Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.692798 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aff46ce4-9e0d-4805-98fe-52b60b607877" path="/var/lib/kubelet/pods/aff46ce4-9e0d-4805-98fe-52b60b607877/volumes" Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.694017 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcd1e197-57ed-4f7c-8be7-b59d0d3e08dc" path="/var/lib/kubelet/pods/bcd1e197-57ed-4f7c-8be7-b59d0d3e08dc/volumes" Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.694595 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d547505a-34d0-4645-9690-74df58728a46" path="/var/lib/kubelet/pods/d547505a-34d0-4645-9690-74df58728a46/volumes" Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.695840 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6" path="/var/lib/kubelet/pods/d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6/volumes" Jan 21 07:22:25 crc 
kubenswrapper[4893]: I0121 07:22:25.696956 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7722b5d-ba92-4332-93c7-bc3aa9bfdb33" path="/var/lib/kubelet/pods/f7722b5d-ba92-4332-93c7-bc3aa9bfdb33/volumes" Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.700005 4893 scope.go:117] "RemoveContainer" containerID="903c82795e7a99adb1f118a16af4579fab0871c6da09160b60ba62ce90ba5b7e" Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.704199 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-dfvzw" podUID="80680178-a1d2-4135-8949-881dc7ac92ea" containerName="ovn-controller" probeResult="failure" output=< Jan 21 07:22:25 crc kubenswrapper[4893]: ERROR - Failed to get connection status from ovn-controller, ovn-appctl exit status: 0 Jan 21 07:22:25 crc kubenswrapper[4893]: > Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.733906 4893 scope.go:117] "RemoveContainer" containerID="903c82795e7a99adb1f118a16af4579fab0871c6da09160b60ba62ce90ba5b7e" Jan 21 07:22:25 crc kubenswrapper[4893]: E0121 07:22:25.734495 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"903c82795e7a99adb1f118a16af4579fab0871c6da09160b60ba62ce90ba5b7e\": container with ID starting with 903c82795e7a99adb1f118a16af4579fab0871c6da09160b60ba62ce90ba5b7e not found: ID does not exist" containerID="903c82795e7a99adb1f118a16af4579fab0871c6da09160b60ba62ce90ba5b7e" Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.734566 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"903c82795e7a99adb1f118a16af4579fab0871c6da09160b60ba62ce90ba5b7e"} err="failed to get container status \"903c82795e7a99adb1f118a16af4579fab0871c6da09160b60ba62ce90ba5b7e\": rpc error: code = NotFound desc = could not find container \"903c82795e7a99adb1f118a16af4579fab0871c6da09160b60ba62ce90ba5b7e\": container with ID starting with 903c82795e7a99adb1f118a16af4579fab0871c6da09160b60ba62ce90ba5b7e not found: ID does not exist" Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.742008 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e544fa30-133c-4728-a8c5-99084bcb4367-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e544fa30-133c-4728-a8c5-99084bcb4367" (UID: "e544fa30-133c-4728-a8c5-99084bcb4367"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.776398 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e544fa30-133c-4728-a8c5-99084bcb4367-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.776426 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggbqk\" (UniqueName: \"kubernetes.io/projected/e544fa30-133c-4728-a8c5-99084bcb4367-kube-api-access-ggbqk\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:25 crc kubenswrapper[4893]: I0121 07:22:25.776436 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e544fa30-133c-4728-a8c5-99084bcb4367-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:25 crc kubenswrapper[4893]: E0121 07:22:25.777919 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2c1520ddf2448545568bfad1712f1cbe491d42f3fe5bd60c6b96dce8d4a01c86" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 21 07:22:25 crc kubenswrapper[4893]: E0121 07:22:25.782806 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2c1520ddf2448545568bfad1712f1cbe491d42f3fe5bd60c6b96dce8d4a01c86" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 21 07:22:25 crc kubenswrapper[4893]: E0121 07:22:25.784005 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2c1520ddf2448545568bfad1712f1cbe491d42f3fe5bd60c6b96dce8d4a01c86" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 21 07:22:25 crc kubenswrapper[4893]: E0121 07:22:25.784039 4893 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="04e84192-2873-4f45-855d-d755d99e7946" containerName="nova-cell0-conductor-conductor" Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:25.974284 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pvg9n" event={"ID":"f6a58a1a-1345-46f4-bb93-7b748440724a","Type":"ContainerDied","Data":"226351752bfb0f8672531fef671a9fed736a5468865ec4ffa5b69d1e2885c0d3"} Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:25.974390 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-pvg9n" Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:25.983990 4893 generic.go:334] "Generic (PLEG): container finished" podID="e544fa30-133c-4728-a8c5-99084bcb4367" containerID="8a4fce4f9b725e2628eecfe0130d0d88ca8af020d3290640c0a98afd2910bb5d" exitCode=0 Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:25.984038 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2hvzv" event={"ID":"e544fa30-133c-4728-a8c5-99084bcb4367","Type":"ContainerDied","Data":"8a4fce4f9b725e2628eecfe0130d0d88ca8af020d3290640c0a98afd2910bb5d"} Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:25.984059 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2hvzv" event={"ID":"e544fa30-133c-4728-a8c5-99084bcb4367","Type":"ContainerDied","Data":"d4aba9439ec65e3f7bb6752d2af7b9ddeef8ba0ff178ef3932bb7a0d47cb3aa8"} Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:25.984077 4893 scope.go:117] "RemoveContainer" containerID="8a4fce4f9b725e2628eecfe0130d0d88ca8af020d3290640c0a98afd2910bb5d" Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:25.984166 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2hvzv" Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:25.990171 4893 generic.go:334] "Generic (PLEG): container finished" podID="5cc7c949-b993-484e-8e07-778a72743679" containerID="aee8a6ea9a77f904909aaaa7e5b406eb695daf2df6664ab2f71b0577e981db2c" exitCode=0 Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:25.990246 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"5cc7c949-b993-484e-8e07-778a72743679","Type":"ContainerDied","Data":"aee8a6ea9a77f904909aaaa7e5b406eb695daf2df6664ab2f71b0577e981db2c"} Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.000735 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.079899 4893 scope.go:117] "RemoveContainer" containerID="b9f1972dc1bbeb85518ca96f4065df10dd15a0205179796fa3033162eba4306f" Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.088131 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2hvzv"] Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.095585 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2hvzv"] Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.108074 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.114483 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.142187 4893 scope.go:117] "RemoveContainer" containerID="54c59f8b4fc31fd57e19b4cca37234c3a621a90067cbc6698ab49c99600e59f8" Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.144930 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-pvg9n"] Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.156969 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-pvg9n"] Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.183642 4893 scope.go:117] "RemoveContainer" containerID="8a4fce4f9b725e2628eecfe0130d0d88ca8af020d3290640c0a98afd2910bb5d" Jan 21 07:22:26 crc kubenswrapper[4893]: E0121 07:22:26.184242 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a4fce4f9b725e2628eecfe0130d0d88ca8af020d3290640c0a98afd2910bb5d\": container with ID starting with 8a4fce4f9b725e2628eecfe0130d0d88ca8af020d3290640c0a98afd2910bb5d not found: ID does not exist" containerID="8a4fce4f9b725e2628eecfe0130d0d88ca8af020d3290640c0a98afd2910bb5d" Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.184309 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a4fce4f9b725e2628eecfe0130d0d88ca8af020d3290640c0a98afd2910bb5d"} err="failed to get container status \"8a4fce4f9b725e2628eecfe0130d0d88ca8af020d3290640c0a98afd2910bb5d\": rpc error: code = NotFound desc = could not find container \"8a4fce4f9b725e2628eecfe0130d0d88ca8af020d3290640c0a98afd2910bb5d\": container with ID starting with 8a4fce4f9b725e2628eecfe0130d0d88ca8af020d3290640c0a98afd2910bb5d not found: ID does not exist" Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.184350 4893 scope.go:117] "RemoveContainer" containerID="b9f1972dc1bbeb85518ca96f4065df10dd15a0205179796fa3033162eba4306f" Jan 21 07:22:26 crc kubenswrapper[4893]: E0121 07:22:26.184647 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9f1972dc1bbeb85518ca96f4065df10dd15a0205179796fa3033162eba4306f\": container with ID starting with b9f1972dc1bbeb85518ca96f4065df10dd15a0205179796fa3033162eba4306f not found: ID does not exist" containerID="b9f1972dc1bbeb85518ca96f4065df10dd15a0205179796fa3033162eba4306f" Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.184696 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9f1972dc1bbeb85518ca96f4065df10dd15a0205179796fa3033162eba4306f"} err="failed to get container status 
\"b9f1972dc1bbeb85518ca96f4065df10dd15a0205179796fa3033162eba4306f\": rpc error: code = NotFound desc = could not find container \"b9f1972dc1bbeb85518ca96f4065df10dd15a0205179796fa3033162eba4306f\": container with ID starting with b9f1972dc1bbeb85518ca96f4065df10dd15a0205179796fa3033162eba4306f not found: ID does not exist" Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.184724 4893 scope.go:117] "RemoveContainer" containerID="54c59f8b4fc31fd57e19b4cca37234c3a621a90067cbc6698ab49c99600e59f8" Jan 21 07:22:26 crc kubenswrapper[4893]: E0121 07:22:26.184959 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54c59f8b4fc31fd57e19b4cca37234c3a621a90067cbc6698ab49c99600e59f8\": container with ID starting with 54c59f8b4fc31fd57e19b4cca37234c3a621a90067cbc6698ab49c99600e59f8 not found: ID does not exist" containerID="54c59f8b4fc31fd57e19b4cca37234c3a621a90067cbc6698ab49c99600e59f8" Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.184995 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54c59f8b4fc31fd57e19b4cca37234c3a621a90067cbc6698ab49c99600e59f8"} err="failed to get container status \"54c59f8b4fc31fd57e19b4cca37234c3a621a90067cbc6698ab49c99600e59f8\": rpc error: code = NotFound desc = could not find container \"54c59f8b4fc31fd57e19b4cca37234c3a621a90067cbc6698ab49c99600e59f8\": container with ID starting with 54c59f8b4fc31fd57e19b4cca37234c3a621a90067cbc6698ab49c99600e59f8 not found: ID does not exist" Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.194583 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.293792 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cc7c949-b993-484e-8e07-778a72743679-combined-ca-bundle\") pod \"5cc7c949-b993-484e-8e07-778a72743679\" (UID: \"5cc7c949-b993-484e-8e07-778a72743679\") " Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.293888 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cc7c949-b993-484e-8e07-778a72743679-galera-tls-certs\") pod \"5cc7c949-b993-484e-8e07-778a72743679\" (UID: \"5cc7c949-b993-484e-8e07-778a72743679\") " Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.293963 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"5cc7c949-b993-484e-8e07-778a72743679\" (UID: \"5cc7c949-b993-484e-8e07-778a72743679\") " Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.294122 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5cc7c949-b993-484e-8e07-778a72743679-kolla-config\") pod \"5cc7c949-b993-484e-8e07-778a72743679\" (UID: \"5cc7c949-b993-484e-8e07-778a72743679\") " Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.294242 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q96td\" (UniqueName: \"kubernetes.io/projected/5cc7c949-b993-484e-8e07-778a72743679-kube-api-access-q96td\") pod \"5cc7c949-b993-484e-8e07-778a72743679\" (UID: \"5cc7c949-b993-484e-8e07-778a72743679\") " Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 
07:22:26.294294 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5cc7c949-b993-484e-8e07-778a72743679-config-data-default\") pod \"5cc7c949-b993-484e-8e07-778a72743679\" (UID: \"5cc7c949-b993-484e-8e07-778a72743679\") " Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.294347 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5cc7c949-b993-484e-8e07-778a72743679-operator-scripts\") pod \"5cc7c949-b993-484e-8e07-778a72743679\" (UID: \"5cc7c949-b993-484e-8e07-778a72743679\") " Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.294459 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5cc7c949-b993-484e-8e07-778a72743679-config-data-generated\") pod \"5cc7c949-b993-484e-8e07-778a72743679\" (UID: \"5cc7c949-b993-484e-8e07-778a72743679\") " Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.296433 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cc7c949-b993-484e-8e07-778a72743679-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "5cc7c949-b993-484e-8e07-778a72743679" (UID: "5cc7c949-b993-484e-8e07-778a72743679"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.297003 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cc7c949-b993-484e-8e07-778a72743679-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "5cc7c949-b993-484e-8e07-778a72743679" (UID: "5cc7c949-b993-484e-8e07-778a72743679"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.297491 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cc7c949-b993-484e-8e07-778a72743679-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "5cc7c949-b993-484e-8e07-778a72743679" (UID: "5cc7c949-b993-484e-8e07-778a72743679"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.299328 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cc7c949-b993-484e-8e07-778a72743679-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5cc7c949-b993-484e-8e07-778a72743679" (UID: "5cc7c949-b993-484e-8e07-778a72743679"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.299719 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cc7c949-b993-484e-8e07-778a72743679-kube-api-access-q96td" (OuterVolumeSpecName: "kube-api-access-q96td") pod "5cc7c949-b993-484e-8e07-778a72743679" (UID: "5cc7c949-b993-484e-8e07-778a72743679"). InnerVolumeSpecName "kube-api-access-q96td". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.308937 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "mysql-db") pod "5cc7c949-b993-484e-8e07-778a72743679" (UID: "5cc7c949-b993-484e-8e07-778a72743679"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.317642 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cc7c949-b993-484e-8e07-778a72743679-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5cc7c949-b993-484e-8e07-778a72743679" (UID: "5cc7c949-b993-484e-8e07-778a72743679"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.334131 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cc7c949-b993-484e-8e07-778a72743679-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "5cc7c949-b993-484e-8e07-778a72743679" (UID: "5cc7c949-b993-484e-8e07-778a72743679"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.368549 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_ac0b6d79-4e8e-499d-afef-53b42511af46/ovn-northd/0.log" Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.368624 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.418102 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llfxj\" (UniqueName: \"kubernetes.io/projected/ac0b6d79-4e8e-499d-afef-53b42511af46-kube-api-access-llfxj\") pod \"ac0b6d79-4e8e-499d-afef-53b42511af46\" (UID: \"ac0b6d79-4e8e-499d-afef-53b42511af46\") " Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.418201 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac0b6d79-4e8e-499d-afef-53b42511af46-metrics-certs-tls-certs\") pod \"ac0b6d79-4e8e-499d-afef-53b42511af46\" (UID: \"ac0b6d79-4e8e-499d-afef-53b42511af46\") " Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.418231 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac0b6d79-4e8e-499d-afef-53b42511af46-ovn-northd-tls-certs\") pod \"ac0b6d79-4e8e-499d-afef-53b42511af46\" (UID: \"ac0b6d79-4e8e-499d-afef-53b42511af46\") " Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.418291 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac0b6d79-4e8e-499d-afef-53b42511af46-config\") pod \"ac0b6d79-4e8e-499d-afef-53b42511af46\" (UID: \"ac0b6d79-4e8e-499d-afef-53b42511af46\") " Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.418345 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac0b6d79-4e8e-499d-afef-53b42511af46-scripts\") pod \"ac0b6d79-4e8e-499d-afef-53b42511af46\" (UID: \"ac0b6d79-4e8e-499d-afef-53b42511af46\") " Jan 21 07:22:26 crc 
kubenswrapper[4893]: I0121 07:22:26.418405 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ac0b6d79-4e8e-499d-afef-53b42511af46-ovn-rundir\") pod \"ac0b6d79-4e8e-499d-afef-53b42511af46\" (UID: \"ac0b6d79-4e8e-499d-afef-53b42511af46\") " Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.418435 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac0b6d79-4e8e-499d-afef-53b42511af46-combined-ca-bundle\") pod \"ac0b6d79-4e8e-499d-afef-53b42511af46\" (UID: \"ac0b6d79-4e8e-499d-afef-53b42511af46\") " Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.418767 4893 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5cc7c949-b993-484e-8e07-778a72743679-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.418779 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cc7c949-b993-484e-8e07-778a72743679-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.418787 4893 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cc7c949-b993-484e-8e07-778a72743679-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.418807 4893 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.418817 4893 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5cc7c949-b993-484e-8e07-778a72743679-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.418826 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q96td\" (UniqueName: \"kubernetes.io/projected/5cc7c949-b993-484e-8e07-778a72743679-kube-api-access-q96td\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.418841 4893 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5cc7c949-b993-484e-8e07-778a72743679-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.418849 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5cc7c949-b993-484e-8e07-778a72743679-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.418925 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac0b6d79-4e8e-499d-afef-53b42511af46-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "ac0b6d79-4e8e-499d-afef-53b42511af46" (UID: "ac0b6d79-4e8e-499d-afef-53b42511af46"). InnerVolumeSpecName "ovn-rundir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.419210 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac0b6d79-4e8e-499d-afef-53b42511af46-config" (OuterVolumeSpecName: "config") pod "ac0b6d79-4e8e-499d-afef-53b42511af46" (UID: "ac0b6d79-4e8e-499d-afef-53b42511af46"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.419381 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac0b6d79-4e8e-499d-afef-53b42511af46-scripts" (OuterVolumeSpecName: "scripts") pod "ac0b6d79-4e8e-499d-afef-53b42511af46" (UID: "ac0b6d79-4e8e-499d-afef-53b42511af46"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.438120 4893 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.444938 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac0b6d79-4e8e-499d-afef-53b42511af46-kube-api-access-llfxj" (OuterVolumeSpecName: "kube-api-access-llfxj") pod "ac0b6d79-4e8e-499d-afef-53b42511af46" (UID: "ac0b6d79-4e8e-499d-afef-53b42511af46"). InnerVolumeSpecName "kube-api-access-llfxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.467409 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac0b6d79-4e8e-499d-afef-53b42511af46-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ac0b6d79-4e8e-499d-afef-53b42511af46" (UID: "ac0b6d79-4e8e-499d-afef-53b42511af46"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.508704 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac0b6d79-4e8e-499d-afef-53b42511af46-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "ac0b6d79-4e8e-499d-afef-53b42511af46" (UID: "ac0b6d79-4e8e-499d-afef-53b42511af46"). InnerVolumeSpecName "ovn-northd-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.512384 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac0b6d79-4e8e-499d-afef-53b42511af46-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "ac0b6d79-4e8e-499d-afef-53b42511af46" (UID: "ac0b6d79-4e8e-499d-afef-53b42511af46"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.521740 4893 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac0b6d79-4e8e-499d-afef-53b42511af46-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.521775 4893 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.521784 4893 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ac0b6d79-4e8e-499d-afef-53b42511af46-ovn-rundir\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.521793 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac0b6d79-4e8e-499d-afef-53b42511af46-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.521804 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llfxj\" (UniqueName: \"kubernetes.io/projected/ac0b6d79-4e8e-499d-afef-53b42511af46-kube-api-access-llfxj\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.521812 4893 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac0b6d79-4e8e-499d-afef-53b42511af46-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.521822 4893 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac0b6d79-4e8e-499d-afef-53b42511af46-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.521831 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac0b6d79-4e8e-499d-afef-53b42511af46-config\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.947445 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 07:22:26 crc kubenswrapper[4893]: E0121 07:22:26.957511 4893 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Jan 21 07:22:26 crc kubenswrapper[4893]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2026-01-21T07:22:19Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Jan 21 07:22:26 crc kubenswrapper[4893]: /etc/init.d/functions: line 589: 414 Alarm clock "$@" Jan 21 07:22:26 crc kubenswrapper[4893]: > execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-dfvzw" message=< Jan 21 07:22:26 crc kubenswrapper[4893]: Exiting ovn-controller (1) [FAILED] Jan 21 07:22:26 crc kubenswrapper[4893]: Killing ovn-controller (1) [ OK ] Jan 21 07:22:26 crc kubenswrapper[4893]: Killing ovn-controller (1) with SIGKILL [ OK ] Jan 21 07:22:26 crc kubenswrapper[4893]: 2026-01-21T07:22:19Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Jan 21 07:22:26 crc kubenswrapper[4893]: /etc/init.d/functions: line 589: 414 Alarm clock "$@" Jan 21 07:22:26 crc kubenswrapper[4893]: > Jan 21 07:22:26 crc kubenswrapper[4893]: E0121 07:22:26.957562 4893 kuberuntime_container.go:691] "PreStop hook failed" err=< Jan 21 07:22:26 crc kubenswrapper[4893]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2026-01-21T07:22:19Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Jan 21 07:22:26 crc kubenswrapper[4893]: /etc/init.d/functions: line 589: 414 Alarm clock "$@" Jan 21 07:22:26 crc kubenswrapper[4893]: > pod="openstack/ovn-controller-dfvzw" podUID="80680178-a1d2-4135-8949-881dc7ac92ea" containerName="ovn-controller" containerID="cri-o://1e8fba93ba68503252ec9f557809dd4ae5415e79adbfc9b32997fb9b75ac0b79" Jan 21 07:22:26 crc kubenswrapper[4893]: I0121 07:22:26.957626 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-dfvzw" podUID="80680178-a1d2-4135-8949-881dc7ac92ea" containerName="ovn-controller" containerID="cri-o://1e8fba93ba68503252ec9f557809dd4ae5415e79adbfc9b32997fb9b75ac0b79" gracePeriod=22 Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.012335 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"5cc7c949-b993-484e-8e07-778a72743679","Type":"ContainerDied","Data":"e15af96d6432f439ab2de49d8285b5ce0dd190b61240020f5e6f26b873a11a29"} Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.012395 4893 scope.go:117] "RemoveContainer" containerID="aee8a6ea9a77f904909aaaa7e5b406eb695daf2df6664ab2f71b0577e981db2c" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.012528 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.021320 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_ac0b6d79-4e8e-499d-afef-53b42511af46/ovn-northd/0.log" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.021369 4893 generic.go:334] "Generic (PLEG): container finished" podID="ac0b6d79-4e8e-499d-afef-53b42511af46" containerID="d39ecdf969ee720a73564c221653cc20ea5438c4130dd242b39b2895ba9d5477" exitCode=139 Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.021417 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"ac0b6d79-4e8e-499d-afef-53b42511af46","Type":"ContainerDied","Data":"d39ecdf969ee720a73564c221653cc20ea5438c4130dd242b39b2895ba9d5477"} Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.021444 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"ac0b6d79-4e8e-499d-afef-53b42511af46","Type":"ContainerDied","Data":"7fb47672401c812d658bd89f89fc38d3f266874a82b2c77f5e931c9c3efb910b"} Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.021497 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.028029 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-dfvzw_80680178-a1d2-4135-8949-881dc7ac92ea/ovn-controller/0.log" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.028073 4893 generic.go:334] "Generic (PLEG): container finished" podID="80680178-a1d2-4135-8949-881dc7ac92ea" containerID="1e8fba93ba68503252ec9f557809dd4ae5415e79adbfc9b32997fb9b75ac0b79" exitCode=137 Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.028132 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dfvzw" event={"ID":"80680178-a1d2-4135-8949-881dc7ac92ea","Type":"ContainerDied","Data":"1e8fba93ba68503252ec9f557809dd4ae5415e79adbfc9b32997fb9b75ac0b79"} Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.030217 4893 generic.go:334] "Generic (PLEG): container finished" podID="1cfa1d66-684f-43de-b751-1da2399d48ee" containerID="fb79006d33020516a4f0e2561b74cb58a9f9a5735dfedb4b98b82f935997165d" exitCode=0 Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.030286 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-9fd9c4957-2lblr" event={"ID":"1cfa1d66-684f-43de-b751-1da2399d48ee","Type":"ContainerDied","Data":"fb79006d33020516a4f0e2561b74cb58a9f9a5735dfedb4b98b82f935997165d"} Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.032653 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/89f70f50-3d66-4917-bfe2-1084a55e4eb9-server-conf\") pod \"89f70f50-3d66-4917-bfe2-1084a55e4eb9\" (UID: \"89f70f50-3d66-4917-bfe2-1084a55e4eb9\") " Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.032711 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsn4l\" (UniqueName: \"kubernetes.io/projected/89f70f50-3d66-4917-bfe2-1084a55e4eb9-kube-api-access-jsn4l\") pod \"89f70f50-3d66-4917-bfe2-1084a55e4eb9\" (UID: \"89f70f50-3d66-4917-bfe2-1084a55e4eb9\") " Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.032757 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/89f70f50-3d66-4917-bfe2-1084a55e4eb9-pod-info\") pod \"89f70f50-3d66-4917-bfe2-1084a55e4eb9\" (UID: \"89f70f50-3d66-4917-bfe2-1084a55e4eb9\") " Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.032820 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/89f70f50-3d66-4917-bfe2-1084a55e4eb9-rabbitmq-confd\") pod \"89f70f50-3d66-4917-bfe2-1084a55e4eb9\" (UID: \"89f70f50-3d66-4917-bfe2-1084a55e4eb9\") " Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.032877 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/89f70f50-3d66-4917-bfe2-1084a55e4eb9-rabbitmq-tls\") pod \"89f70f50-3d66-4917-bfe2-1084a55e4eb9\" (UID: \"89f70f50-3d66-4917-bfe2-1084a55e4eb9\") " Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.032917 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"89f70f50-3d66-4917-bfe2-1084a55e4eb9\" (UID: \"89f70f50-3d66-4917-bfe2-1084a55e4eb9\") " Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.032948 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/89f70f50-3d66-4917-bfe2-1084a55e4eb9-rabbitmq-erlang-cookie\") pod \"89f70f50-3d66-4917-bfe2-1084a55e4eb9\" (UID: \"89f70f50-3d66-4917-bfe2-1084a55e4eb9\") " Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.032984 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/89f70f50-3d66-4917-bfe2-1084a55e4eb9-plugins-conf\") pod \"89f70f50-3d66-4917-bfe2-1084a55e4eb9\" (UID: \"89f70f50-3d66-4917-bfe2-1084a55e4eb9\") " Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.033048 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/89f70f50-3d66-4917-bfe2-1084a55e4eb9-erlang-cookie-secret\") pod \"89f70f50-3d66-4917-bfe2-1084a55e4eb9\" (UID: \"89f70f50-3d66-4917-bfe2-1084a55e4eb9\") " Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.033094 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/89f70f50-3d66-4917-bfe2-1084a55e4eb9-rabbitmq-plugins\") pod \"89f70f50-3d66-4917-bfe2-1084a55e4eb9\" (UID: \"89f70f50-3d66-4917-bfe2-1084a55e4eb9\") " Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.033154 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/89f70f50-3d66-4917-bfe2-1084a55e4eb9-config-data\") pod \"89f70f50-3d66-4917-bfe2-1084a55e4eb9\" (UID: \"89f70f50-3d66-4917-bfe2-1084a55e4eb9\") " Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.034191 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89f70f50-3d66-4917-bfe2-1084a55e4eb9-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "89f70f50-3d66-4917-bfe2-1084a55e4eb9" (UID: "89f70f50-3d66-4917-bfe2-1084a55e4eb9"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.034579 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89f70f50-3d66-4917-bfe2-1084a55e4eb9-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "89f70f50-3d66-4917-bfe2-1084a55e4eb9" (UID: "89f70f50-3d66-4917-bfe2-1084a55e4eb9"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.034797 4893 generic.go:334] "Generic (PLEG): container finished" podID="89f70f50-3d66-4917-bfe2-1084a55e4eb9" containerID="f5932c3efdbfd6885c93667c024f13d35e7e8335300761d7cf6bcc9553b87aaa" exitCode=0 Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.034830 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"89f70f50-3d66-4917-bfe2-1084a55e4eb9","Type":"ContainerDied","Data":"f5932c3efdbfd6885c93667c024f13d35e7e8335300761d7cf6bcc9553b87aaa"} Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.034855 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"89f70f50-3d66-4917-bfe2-1084a55e4eb9","Type":"ContainerDied","Data":"4e6d5b4ed0150b0ebdcc26314171f7ee394430adfee148a08e44670d0b079434"} Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.034918 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.037166 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89f70f50-3d66-4917-bfe2-1084a55e4eb9-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "89f70f50-3d66-4917-bfe2-1084a55e4eb9" (UID: "89f70f50-3d66-4917-bfe2-1084a55e4eb9"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.041520 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89f70f50-3d66-4917-bfe2-1084a55e4eb9-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "89f70f50-3d66-4917-bfe2-1084a55e4eb9" (UID: "89f70f50-3d66-4917-bfe2-1084a55e4eb9"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.042841 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "persistence") pod "89f70f50-3d66-4917-bfe2-1084a55e4eb9" (UID: "89f70f50-3d66-4917-bfe2-1084a55e4eb9"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.044176 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89f70f50-3d66-4917-bfe2-1084a55e4eb9-kube-api-access-jsn4l" (OuterVolumeSpecName: "kube-api-access-jsn4l") pod "89f70f50-3d66-4917-bfe2-1084a55e4eb9" (UID: "89f70f50-3d66-4917-bfe2-1084a55e4eb9"). InnerVolumeSpecName "kube-api-access-jsn4l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.047326 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89f70f50-3d66-4917-bfe2-1084a55e4eb9-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "89f70f50-3d66-4917-bfe2-1084a55e4eb9" (UID: "89f70f50-3d66-4917-bfe2-1084a55e4eb9"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.049592 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/89f70f50-3d66-4917-bfe2-1084a55e4eb9-pod-info" (OuterVolumeSpecName: "pod-info") pod "89f70f50-3d66-4917-bfe2-1084a55e4eb9" (UID: "89f70f50-3d66-4917-bfe2-1084a55e4eb9"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.051917 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-9fd9c4957-2lblr" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.064165 4893 scope.go:117] "RemoveContainer" containerID="3c9c4b7ec23de6d4db312920908e7cffbafd4003f59ab08b55326b661892a4bc" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.070236 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.077090 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"] Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.174234 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1cfa1d66-684f-43de-b751-1da2399d48ee-fernet-keys\") pod \"1cfa1d66-684f-43de-b751-1da2399d48ee\" (UID: \"1cfa1d66-684f-43de-b751-1da2399d48ee\") " Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.174304 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cfa1d66-684f-43de-b751-1da2399d48ee-public-tls-certs\") pod \"1cfa1d66-684f-43de-b751-1da2399d48ee\" (UID: \"1cfa1d66-684f-43de-b751-1da2399d48ee\") " Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.174336 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cfa1d66-684f-43de-b751-1da2399d48ee-scripts\") pod \"1cfa1d66-684f-43de-b751-1da2399d48ee\" (UID: \"1cfa1d66-684f-43de-b751-1da2399d48ee\") " Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.174377 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfa1d66-684f-43de-b751-1da2399d48ee-combined-ca-bundle\") pod \"1cfa1d66-684f-43de-b751-1da2399d48ee\" (UID: \"1cfa1d66-684f-43de-b751-1da2399d48ee\") " Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.174421 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1cfa1d66-684f-43de-b751-1da2399d48ee-credential-keys\") pod \"1cfa1d66-684f-43de-b751-1da2399d48ee\" (UID: \"1cfa1d66-684f-43de-b751-1da2399d48ee\") " Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.174446 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1cfa1d66-684f-43de-b751-1da2399d48ee-config-data\") pod \"1cfa1d66-684f-43de-b751-1da2399d48ee\" (UID: \"1cfa1d66-684f-43de-b751-1da2399d48ee\") " Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.174516 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6d5c\" (UniqueName: \"kubernetes.io/projected/1cfa1d66-684f-43de-b751-1da2399d48ee-kube-api-access-n6d5c\") pod \"1cfa1d66-684f-43de-b751-1da2399d48ee\" (UID: \"1cfa1d66-684f-43de-b751-1da2399d48ee\") " Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.174566 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cfa1d66-684f-43de-b751-1da2399d48ee-internal-tls-certs\") pod \"1cfa1d66-684f-43de-b751-1da2399d48ee\" (UID: \"1cfa1d66-684f-43de-b751-1da2399d48ee\") " Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.174935 4893 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/89f70f50-3d66-4917-bfe2-1084a55e4eb9-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.174948 4893 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/89f70f50-3d66-4917-bfe2-1084a55e4eb9-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.174957 4893 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/89f70f50-3d66-4917-bfe2-1084a55e4eb9-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.174966 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jsn4l\" (UniqueName: \"kubernetes.io/projected/89f70f50-3d66-4917-bfe2-1084a55e4eb9-kube-api-access-jsn4l\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.174976 4893 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/89f70f50-3d66-4917-bfe2-1084a55e4eb9-pod-info\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.174985 4893 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/89f70f50-3d66-4917-bfe2-1084a55e4eb9-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.175007 4893 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.175017 4893 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/89f70f50-3d66-4917-bfe2-1084a55e4eb9-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.187054 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89f70f50-3d66-4917-bfe2-1084a55e4eb9-config-data" (OuterVolumeSpecName: "config-data") pod "89f70f50-3d66-4917-bfe2-1084a55e4eb9" (UID: "89f70f50-3d66-4917-bfe2-1084a55e4eb9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.193295 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cfa1d66-684f-43de-b751-1da2399d48ee-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "1cfa1d66-684f-43de-b751-1da2399d48ee" (UID: "1cfa1d66-684f-43de-b751-1da2399d48ee"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.194897 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cfa1d66-684f-43de-b751-1da2399d48ee-scripts" (OuterVolumeSpecName: "scripts") pod "1cfa1d66-684f-43de-b751-1da2399d48ee" (UID: "1cfa1d66-684f-43de-b751-1da2399d48ee"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.199899 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cfa1d66-684f-43de-b751-1da2399d48ee-kube-api-access-n6d5c" (OuterVolumeSpecName: "kube-api-access-n6d5c") pod "1cfa1d66-684f-43de-b751-1da2399d48ee" (UID: "1cfa1d66-684f-43de-b751-1da2399d48ee"). InnerVolumeSpecName "kube-api-access-n6d5c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.209480 4893 scope.go:117] "RemoveContainer" containerID="85db962aefff38c722556849ce7c8f650d56593c154442c19394da5686adb8c1" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.220087 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.221324 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cfa1d66-684f-43de-b751-1da2399d48ee-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "1cfa1d66-684f-43de-b751-1da2399d48ee" (UID: "1cfa1d66-684f-43de-b751-1da2399d48ee"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.227524 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89f70f50-3d66-4917-bfe2-1084a55e4eb9-server-conf" (OuterVolumeSpecName: "server-conf") pod "89f70f50-3d66-4917-bfe2-1084a55e4eb9" (UID: "89f70f50-3d66-4917-bfe2-1084a55e4eb9"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.229756 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.233226 4893 scope.go:117] "RemoveContainer" containerID="d39ecdf969ee720a73564c221653cc20ea5438c4130dd242b39b2895ba9d5477" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.238323 4893 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.241970 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cfa1d66-684f-43de-b751-1da2399d48ee-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1cfa1d66-684f-43de-b751-1da2399d48ee" (UID: "1cfa1d66-684f-43de-b751-1da2399d48ee"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.249416 4893 scope.go:117] "RemoveContainer" containerID="85db962aefff38c722556849ce7c8f650d56593c154442c19394da5686adb8c1" Jan 21 07:22:27 crc kubenswrapper[4893]: E0121 07:22:27.249992 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85db962aefff38c722556849ce7c8f650d56593c154442c19394da5686adb8c1\": container with ID starting with 85db962aefff38c722556849ce7c8f650d56593c154442c19394da5686adb8c1 not found: ID does not exist" containerID="85db962aefff38c722556849ce7c8f650d56593c154442c19394da5686adb8c1" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.250046 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85db962aefff38c722556849ce7c8f650d56593c154442c19394da5686adb8c1"} err="failed to get container status \"85db962aefff38c722556849ce7c8f650d56593c154442c19394da5686adb8c1\": rpc error: code = NotFound desc = could not find container \"85db962aefff38c722556849ce7c8f650d56593c154442c19394da5686adb8c1\": container with ID starting with 85db962aefff38c722556849ce7c8f650d56593c154442c19394da5686adb8c1 not found: ID does not exist" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.250086 4893 scope.go:117] "RemoveContainer" containerID="d39ecdf969ee720a73564c221653cc20ea5438c4130dd242b39b2895ba9d5477" Jan 21 07:22:27 crc kubenswrapper[4893]: E0121 07:22:27.250590 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d39ecdf969ee720a73564c221653cc20ea5438c4130dd242b39b2895ba9d5477\": container with ID starting with d39ecdf969ee720a73564c221653cc20ea5438c4130dd242b39b2895ba9d5477 not found: ID does not exist" containerID="d39ecdf969ee720a73564c221653cc20ea5438c4130dd242b39b2895ba9d5477" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.250629 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d39ecdf969ee720a73564c221653cc20ea5438c4130dd242b39b2895ba9d5477"} err="failed to get container status \"d39ecdf969ee720a73564c221653cc20ea5438c4130dd242b39b2895ba9d5477\": rpc error: code = NotFound desc = could not find container \"d39ecdf969ee720a73564c221653cc20ea5438c4130dd242b39b2895ba9d5477\": container with ID starting with d39ecdf969ee720a73564c221653cc20ea5438c4130dd242b39b2895ba9d5477 not found: ID does not exist" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.250653 4893 scope.go:117] "RemoveContainer" containerID="f5932c3efdbfd6885c93667c024f13d35e7e8335300761d7cf6bcc9553b87aaa" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.254552 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cfa1d66-684f-43de-b751-1da2399d48ee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1cfa1d66-684f-43de-b751-1da2399d48ee" (UID: "1cfa1d66-684f-43de-b751-1da2399d48ee"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.256286 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cfa1d66-684f-43de-b751-1da2399d48ee-config-data" (OuterVolumeSpecName: "config-data") pod "1cfa1d66-684f-43de-b751-1da2399d48ee" (UID: "1cfa1d66-684f-43de-b751-1da2399d48ee"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.257460 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cfa1d66-684f-43de-b751-1da2399d48ee-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1cfa1d66-684f-43de-b751-1da2399d48ee" (UID: "1cfa1d66-684f-43de-b751-1da2399d48ee"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.258794 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89f70f50-3d66-4917-bfe2-1084a55e4eb9-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "89f70f50-3d66-4917-bfe2-1084a55e4eb9" (UID: "89f70f50-3d66-4917-bfe2-1084a55e4eb9"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.272571 4893 scope.go:117] "RemoveContainer" containerID="8bc18f5c5b3a7199e36a32b48eb54e74da7f96ba56d7cedfcbbe95f361423f06" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.276235 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/89f70f50-3d66-4917-bfe2-1084a55e4eb9-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.276257 4893 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1cfa1d66-684f-43de-b751-1da2399d48ee-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.276269 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cfa1d66-684f-43de-b751-1da2399d48ee-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.276277 4893 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/89f70f50-3d66-4917-bfe2-1084a55e4eb9-server-conf\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.276286 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6d5c\" (UniqueName: \"kubernetes.io/projected/1cfa1d66-684f-43de-b751-1da2399d48ee-kube-api-access-n6d5c\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.276294 4893 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/89f70f50-3d66-4917-bfe2-1084a55e4eb9-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.276302 4893 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cfa1d66-684f-43de-b751-1da2399d48ee-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.276310 4893 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.276319 4893 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1cfa1d66-684f-43de-b751-1da2399d48ee-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 
07:22:27.276326 4893 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cfa1d66-684f-43de-b751-1da2399d48ee-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.276334 4893 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cfa1d66-684f-43de-b751-1da2399d48ee-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.276341 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfa1d66-684f-43de-b751-1da2399d48ee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.295730 4893 scope.go:117] "RemoveContainer" containerID="f5932c3efdbfd6885c93667c024f13d35e7e8335300761d7cf6bcc9553b87aaa" Jan 21 07:22:27 crc kubenswrapper[4893]: E0121 07:22:27.296254 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5932c3efdbfd6885c93667c024f13d35e7e8335300761d7cf6bcc9553b87aaa\": container with ID starting with f5932c3efdbfd6885c93667c024f13d35e7e8335300761d7cf6bcc9553b87aaa not found: ID does not exist" containerID="f5932c3efdbfd6885c93667c024f13d35e7e8335300761d7cf6bcc9553b87aaa" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.296298 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5932c3efdbfd6885c93667c024f13d35e7e8335300761d7cf6bcc9553b87aaa"} err="failed to get container status \"f5932c3efdbfd6885c93667c024f13d35e7e8335300761d7cf6bcc9553b87aaa\": rpc error: code = NotFound desc = could not find container \"f5932c3efdbfd6885c93667c024f13d35e7e8335300761d7cf6bcc9553b87aaa\": container with ID starting with f5932c3efdbfd6885c93667c024f13d35e7e8335300761d7cf6bcc9553b87aaa not found: ID does not exist" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.296325 4893 scope.go:117] "RemoveContainer" containerID="8bc18f5c5b3a7199e36a32b48eb54e74da7f96ba56d7cedfcbbe95f361423f06" Jan 21 07:22:27 crc kubenswrapper[4893]: E0121 07:22:27.296896 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bc18f5c5b3a7199e36a32b48eb54e74da7f96ba56d7cedfcbbe95f361423f06\": container with ID starting with 8bc18f5c5b3a7199e36a32b48eb54e74da7f96ba56d7cedfcbbe95f361423f06 not found: ID does not exist" containerID="8bc18f5c5b3a7199e36a32b48eb54e74da7f96ba56d7cedfcbbe95f361423f06" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.296920 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bc18f5c5b3a7199e36a32b48eb54e74da7f96ba56d7cedfcbbe95f361423f06"} err="failed to get container status \"8bc18f5c5b3a7199e36a32b48eb54e74da7f96ba56d7cedfcbbe95f361423f06\": rpc error: code = NotFound desc = could not find container \"8bc18f5c5b3a7199e36a32b48eb54e74da7f96ba56d7cedfcbbe95f361423f06\": container with ID starting with 8bc18f5c5b3a7199e36a32b48eb54e74da7f96ba56d7cedfcbbe95f361423f06 not found: ID does not exist" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.377471 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.383501 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 07:22:27 crc 
kubenswrapper[4893]: I0121 07:22:27.409995 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-dfvzw_80680178-a1d2-4135-8949-881dc7ac92ea/ovn-controller/0.log" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.410078 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-dfvzw" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.580847 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80680178-a1d2-4135-8949-881dc7ac92ea-combined-ca-bundle\") pod \"80680178-a1d2-4135-8949-881dc7ac92ea\" (UID: \"80680178-a1d2-4135-8949-881dc7ac92ea\") " Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.581055 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/80680178-a1d2-4135-8949-881dc7ac92ea-var-run\") pod \"80680178-a1d2-4135-8949-881dc7ac92ea\" (UID: \"80680178-a1d2-4135-8949-881dc7ac92ea\") " Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.581142 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rrk5\" (UniqueName: \"kubernetes.io/projected/80680178-a1d2-4135-8949-881dc7ac92ea-kube-api-access-6rrk5\") pod \"80680178-a1d2-4135-8949-881dc7ac92ea\" (UID: \"80680178-a1d2-4135-8949-881dc7ac92ea\") " Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.581209 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/80680178-a1d2-4135-8949-881dc7ac92ea-ovn-controller-tls-certs\") pod \"80680178-a1d2-4135-8949-881dc7ac92ea\" (UID: \"80680178-a1d2-4135-8949-881dc7ac92ea\") " Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.581240 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/80680178-a1d2-4135-8949-881dc7ac92ea-scripts\") pod \"80680178-a1d2-4135-8949-881dc7ac92ea\" (UID: \"80680178-a1d2-4135-8949-881dc7ac92ea\") " Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.581294 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/80680178-a1d2-4135-8949-881dc7ac92ea-var-log-ovn\") pod \"80680178-a1d2-4135-8949-881dc7ac92ea\" (UID: \"80680178-a1d2-4135-8949-881dc7ac92ea\") " Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.581356 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/80680178-a1d2-4135-8949-881dc7ac92ea-var-run-ovn\") pod \"80680178-a1d2-4135-8949-881dc7ac92ea\" (UID: \"80680178-a1d2-4135-8949-881dc7ac92ea\") " Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.581835 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80680178-a1d2-4135-8949-881dc7ac92ea-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "80680178-a1d2-4135-8949-881dc7ac92ea" (UID: "80680178-a1d2-4135-8949-881dc7ac92ea"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.582636 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80680178-a1d2-4135-8949-881dc7ac92ea-var-run" (OuterVolumeSpecName: "var-run") pod "80680178-a1d2-4135-8949-881dc7ac92ea" (UID: "80680178-a1d2-4135-8949-881dc7ac92ea"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.584544 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80680178-a1d2-4135-8949-881dc7ac92ea-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "80680178-a1d2-4135-8949-881dc7ac92ea" (UID: "80680178-a1d2-4135-8949-881dc7ac92ea"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.585645 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80680178-a1d2-4135-8949-881dc7ac92ea-scripts" (OuterVolumeSpecName: "scripts") pod "80680178-a1d2-4135-8949-881dc7ac92ea" (UID: "80680178-a1d2-4135-8949-881dc7ac92ea"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.599552 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80680178-a1d2-4135-8949-881dc7ac92ea-kube-api-access-6rrk5" (OuterVolumeSpecName: "kube-api-access-6rrk5") pod "80680178-a1d2-4135-8949-881dc7ac92ea" (UID: "80680178-a1d2-4135-8949-881dc7ac92ea"). InnerVolumeSpecName "kube-api-access-6rrk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.634065 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cc7c949-b993-484e-8e07-778a72743679" path="/var/lib/kubelet/pods/5cc7c949-b993-484e-8e07-778a72743679/volumes" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.635306 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89f70f50-3d66-4917-bfe2-1084a55e4eb9" path="/var/lib/kubelet/pods/89f70f50-3d66-4917-bfe2-1084a55e4eb9/volumes" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.636787 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c8d3670-41c0-4649-8a2f-38b090638cac" path="/var/lib/kubelet/pods/9c8d3670-41c0-4649-8a2f-38b090638cac/volumes" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.637795 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac0b6d79-4e8e-499d-afef-53b42511af46" path="/var/lib/kubelet/pods/ac0b6d79-4e8e-499d-afef-53b42511af46/volumes" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.638810 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e544fa30-133c-4728-a8c5-99084bcb4367" path="/var/lib/kubelet/pods/e544fa30-133c-4728-a8c5-99084bcb4367/volumes" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.640970 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6a58a1a-1345-46f4-bb93-7b748440724a" path="/var/lib/kubelet/pods/f6a58a1a-1345-46f4-bb93-7b748440724a/volumes" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.649531 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80680178-a1d2-4135-8949-881dc7ac92ea-combined-ca-bundle" (OuterVolumeSpecName: 
"combined-ca-bundle") pod "80680178-a1d2-4135-8949-881dc7ac92ea" (UID: "80680178-a1d2-4135-8949-881dc7ac92ea"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.683647 4893 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/80680178-a1d2-4135-8949-881dc7ac92ea-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.683699 4893 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/80680178-a1d2-4135-8949-881dc7ac92ea-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.683709 4893 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/80680178-a1d2-4135-8949-881dc7ac92ea-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.683717 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80680178-a1d2-4135-8949-881dc7ac92ea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.683726 4893 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/80680178-a1d2-4135-8949-881dc7ac92ea-var-run\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.683733 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rrk5\" (UniqueName: \"kubernetes.io/projected/80680178-a1d2-4135-8949-881dc7ac92ea-kube-api-access-6rrk5\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.701630 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80680178-a1d2-4135-8949-881dc7ac92ea-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "80680178-a1d2-4135-8949-881dc7ac92ea" (UID: "80680178-a1d2-4135-8949-881dc7ac92ea"). InnerVolumeSpecName "ovn-controller-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.785208 4893 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/80680178-a1d2-4135-8949-881dc7ac92ea-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:27 crc kubenswrapper[4893]: E0121 07:22:27.785316 4893 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 21 07:22:27 crc kubenswrapper[4893]: E0121 07:22:27.785374 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fdb40d40-7926-424a-810d-3b6f77e1022f-config-data podName:fdb40d40-7926-424a-810d-3b6f77e1022f nodeName:}" failed. No retries permitted until 2026-01-21 07:22:35.785358421 +0000 UTC m=+1697.015704323 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/fdb40d40-7926-424a-810d-3b6f77e1022f-config-data") pod "rabbitmq-cell1-server-0" (UID: "fdb40d40-7926-424a-810d-3b6f77e1022f") : configmap "rabbitmq-cell1-config-data" not found Jan 21 07:22:27 crc kubenswrapper[4893]: I0121 07:22:27.948022 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.038371 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.052449 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-9fd9c4957-2lblr" event={"ID":"1cfa1d66-684f-43de-b751-1da2399d48ee","Type":"ContainerDied","Data":"24ddeef2326940ce72c79871561487f16be27f8317b41bd533f41fba741bbc5b"} Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.052530 4893 scope.go:117] "RemoveContainer" containerID="fb79006d33020516a4f0e2561b74cb58a9f9a5735dfedb4b98b82f935997165d" Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.052657 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-9fd9c4957-2lblr" Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.070155 4893 generic.go:334] "Generic (PLEG): container finished" podID="fdb40d40-7926-424a-810d-3b6f77e1022f" containerID="fd6edf6018574b9a1c8c87a2c1c4f22c5ad783bb05f0b5bd5d6a157bcdf570ae" exitCode=0 Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.070249 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fdb40d40-7926-424a-810d-3b6f77e1022f","Type":"ContainerDied","Data":"fd6edf6018574b9a1c8c87a2c1c4f22c5ad783bb05f0b5bd5d6a157bcdf570ae"} Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.070276 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fdb40d40-7926-424a-810d-3b6f77e1022f","Type":"ContainerDied","Data":"767d15ff2a6bea44bf05d493a5b3ec1389e577bcc68f4aa1efa04d46a7167d21"} Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.070326 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.074254 4893 generic.go:334] "Generic (PLEG): container finished" podID="04e84192-2873-4f45-855d-d755d99e7946" containerID="2c1520ddf2448545568bfad1712f1cbe491d42f3fe5bd60c6b96dce8d4a01c86" exitCode=0 Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.074333 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"04e84192-2873-4f45-855d-d755d99e7946","Type":"ContainerDied","Data":"2c1520ddf2448545568bfad1712f1cbe491d42f3fe5bd60c6b96dce8d4a01c86"} Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.081116 4893 generic.go:334] "Generic (PLEG): container finished" podID="f891af55-ec46-4261-9f5e-01a1c181f194" containerID="3bb1f3a7d2d6b35737c02944ddbf53eb946eb7cc400a59439dbd01bed9d2650a" exitCode=0 Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.081203 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f891af55-ec46-4261-9f5e-01a1c181f194","Type":"ContainerDied","Data":"3bb1f3a7d2d6b35737c02944ddbf53eb946eb7cc400a59439dbd01bed9d2650a"} Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.081272 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f891af55-ec46-4261-9f5e-01a1c181f194","Type":"ContainerDied","Data":"857ec3e5a04525e198441dbcc5bc0eacf93e717d133a810829ca49fe04c84bc4"} Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.081358 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.097226 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f891af55-ec46-4261-9f5e-01a1c181f194-ceilometer-tls-certs\") pod \"f891af55-ec46-4261-9f5e-01a1c181f194\" (UID: \"f891af55-ec46-4261-9f5e-01a1c181f194\") " Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.097347 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f891af55-ec46-4261-9f5e-01a1c181f194-scripts\") pod \"f891af55-ec46-4261-9f5e-01a1c181f194\" (UID: \"f891af55-ec46-4261-9f5e-01a1c181f194\") " Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.097376 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f891af55-ec46-4261-9f5e-01a1c181f194-combined-ca-bundle\") pod \"f891af55-ec46-4261-9f5e-01a1c181f194\" (UID: \"f891af55-ec46-4261-9f5e-01a1c181f194\") " Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.097406 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f891af55-ec46-4261-9f5e-01a1c181f194-config-data\") pod \"f891af55-ec46-4261-9f5e-01a1c181f194\" (UID: \"f891af55-ec46-4261-9f5e-01a1c181f194\") " Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.097424 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f891af55-ec46-4261-9f5e-01a1c181f194-sg-core-conf-yaml\") pod \"f891af55-ec46-4261-9f5e-01a1c181f194\" (UID: \"f891af55-ec46-4261-9f5e-01a1c181f194\") " Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.097508 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f891af55-ec46-4261-9f5e-01a1c181f194-log-httpd\") pod \"f891af55-ec46-4261-9f5e-01a1c181f194\" (UID: \"f891af55-ec46-4261-9f5e-01a1c181f194\") " Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.097535 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nz5lg\" (UniqueName: \"kubernetes.io/projected/f891af55-ec46-4261-9f5e-01a1c181f194-kube-api-access-nz5lg\") pod \"f891af55-ec46-4261-9f5e-01a1c181f194\" (UID: \"f891af55-ec46-4261-9f5e-01a1c181f194\") " Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.097582 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f891af55-ec46-4261-9f5e-01a1c181f194-run-httpd\") pod \"f891af55-ec46-4261-9f5e-01a1c181f194\" (UID: \"f891af55-ec46-4261-9f5e-01a1c181f194\") " Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.098359 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-9fd9c4957-2lblr"] Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.098629 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f891af55-ec46-4261-9f5e-01a1c181f194-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f891af55-ec46-4261-9f5e-01a1c181f194" (UID: "f891af55-ec46-4261-9f5e-01a1c181f194"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.098976 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f891af55-ec46-4261-9f5e-01a1c181f194-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f891af55-ec46-4261-9f5e-01a1c181f194" (UID: "f891af55-ec46-4261-9f5e-01a1c181f194"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.099555 4893 scope.go:117] "RemoveContainer" containerID="fd6edf6018574b9a1c8c87a2c1c4f22c5ad783bb05f0b5bd5d6a157bcdf570ae" Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.100519 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-dfvzw_80680178-a1d2-4135-8949-881dc7ac92ea/ovn-controller/0.log" Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.100638 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-dfvzw" Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.101589 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dfvzw" event={"ID":"80680178-a1d2-4135-8949-881dc7ac92ea","Type":"ContainerDied","Data":"e2723075f52f2c3d1aca94260b3ed49da8105c5c53b5c3f888d2a7656cfe3ba0"} Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.109968 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f891af55-ec46-4261-9f5e-01a1c181f194-kube-api-access-nz5lg" (OuterVolumeSpecName: "kube-api-access-nz5lg") pod "f891af55-ec46-4261-9f5e-01a1c181f194" (UID: "f891af55-ec46-4261-9f5e-01a1c181f194"). InnerVolumeSpecName "kube-api-access-nz5lg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.112244 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f891af55-ec46-4261-9f5e-01a1c181f194-scripts" (OuterVolumeSpecName: "scripts") pod "f891af55-ec46-4261-9f5e-01a1c181f194" (UID: "f891af55-ec46-4261-9f5e-01a1c181f194"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.113620 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-9fd9c4957-2lblr"] Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.125458 4893 scope.go:117] "RemoveContainer" containerID="afb234ccf5a3b8f9be3af40af000806304b977b9072b8c61b82cc2c703dc8d0b" Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.144350 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f891af55-ec46-4261-9f5e-01a1c181f194-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f891af55-ec46-4261-9f5e-01a1c181f194" (UID: "f891af55-ec46-4261-9f5e-01a1c181f194"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.147438 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f891af55-ec46-4261-9f5e-01a1c181f194-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "f891af55-ec46-4261-9f5e-01a1c181f194" (UID: "f891af55-ec46-4261-9f5e-01a1c181f194"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.149836 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-dfvzw"] Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.153242 4893 scope.go:117] "RemoveContainer" containerID="fd6edf6018574b9a1c8c87a2c1c4f22c5ad783bb05f0b5bd5d6a157bcdf570ae" Jan 21 07:22:28 crc kubenswrapper[4893]: E0121 07:22:28.153906 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd6edf6018574b9a1c8c87a2c1c4f22c5ad783bb05f0b5bd5d6a157bcdf570ae\": container with ID starting with fd6edf6018574b9a1c8c87a2c1c4f22c5ad783bb05f0b5bd5d6a157bcdf570ae not found: ID does not exist" containerID="fd6edf6018574b9a1c8c87a2c1c4f22c5ad783bb05f0b5bd5d6a157bcdf570ae" Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.153978 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd6edf6018574b9a1c8c87a2c1c4f22c5ad783bb05f0b5bd5d6a157bcdf570ae"} err="failed to get container status \"fd6edf6018574b9a1c8c87a2c1c4f22c5ad783bb05f0b5bd5d6a157bcdf570ae\": rpc error: code = NotFound desc = could not find container \"fd6edf6018574b9a1c8c87a2c1c4f22c5ad783bb05f0b5bd5d6a157bcdf570ae\": container with ID starting with fd6edf6018574b9a1c8c87a2c1c4f22c5ad783bb05f0b5bd5d6a157bcdf570ae not found: ID does not exist" Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.154014 4893 scope.go:117] "RemoveContainer" containerID="afb234ccf5a3b8f9be3af40af000806304b977b9072b8c61b82cc2c703dc8d0b" Jan 21 07:22:28 crc kubenswrapper[4893]: E0121 07:22:28.154548 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afb234ccf5a3b8f9be3af40af000806304b977b9072b8c61b82cc2c703dc8d0b\": container with ID starting with afb234ccf5a3b8f9be3af40af000806304b977b9072b8c61b82cc2c703dc8d0b not found: ID does not exist" containerID="afb234ccf5a3b8f9be3af40af000806304b977b9072b8c61b82cc2c703dc8d0b" Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.154631 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afb234ccf5a3b8f9be3af40af000806304b977b9072b8c61b82cc2c703dc8d0b"} err="failed to get container status \"afb234ccf5a3b8f9be3af40af000806304b977b9072b8c61b82cc2c703dc8d0b\": rpc error: code = NotFound desc = could not find container \"afb234ccf5a3b8f9be3af40af000806304b977b9072b8c61b82cc2c703dc8d0b\": container with ID starting with afb234ccf5a3b8f9be3af40af000806304b977b9072b8c61b82cc2c703dc8d0b not found: ID does not exist" Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.154706 4893 scope.go:117] "RemoveContainer" containerID="3a24623a75e32ef570ca14893ccdb6089f419296939cd0c5276caec748921d6e" Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.157168 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-dfvzw"] Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.163853 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f891af55-ec46-4261-9f5e-01a1c181f194-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f891af55-ec46-4261-9f5e-01a1c181f194" (UID: "f891af55-ec46-4261-9f5e-01a1c181f194"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.168732 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.173484 4893 scope.go:117] "RemoveContainer" containerID="dd3970844ff87242006efef7f85a07b1307bc7fcf1c2b53f8a03f6f42dcb3a60" Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.198931 4893 scope.go:117] "RemoveContainer" containerID="3bb1f3a7d2d6b35737c02944ddbf53eb946eb7cc400a59439dbd01bed9d2650a" Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.203951 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5fqr\" (UniqueName: \"kubernetes.io/projected/fdb40d40-7926-424a-810d-3b6f77e1022f-kube-api-access-t5fqr\") pod \"fdb40d40-7926-424a-810d-3b6f77e1022f\" (UID: \"fdb40d40-7926-424a-810d-3b6f77e1022f\") " Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.204177 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fdb40d40-7926-424a-810d-3b6f77e1022f-rabbitmq-erlang-cookie\") pod \"fdb40d40-7926-424a-810d-3b6f77e1022f\" (UID: \"fdb40d40-7926-424a-810d-3b6f77e1022f\") " Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.204264 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fdb40d40-7926-424a-810d-3b6f77e1022f-erlang-cookie-secret\") pod \"fdb40d40-7926-424a-810d-3b6f77e1022f\" (UID: \"fdb40d40-7926-424a-810d-3b6f77e1022f\") " Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.204334 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fdb40d40-7926-424a-810d-3b6f77e1022f-plugins-conf\") pod \"fdb40d40-7926-424a-810d-3b6f77e1022f\" (UID: \"fdb40d40-7926-424a-810d-3b6f77e1022f\") " Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.204399 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fdb40d40-7926-424a-810d-3b6f77e1022f-rabbitmq-tls\") pod \"fdb40d40-7926-424a-810d-3b6f77e1022f\" (UID: \"fdb40d40-7926-424a-810d-3b6f77e1022f\") " Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.204495 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fdb40d40-7926-424a-810d-3b6f77e1022f-server-conf\") pod \"fdb40d40-7926-424a-810d-3b6f77e1022f\" (UID: \"fdb40d40-7926-424a-810d-3b6f77e1022f\") " Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.204644 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fdb40d40-7926-424a-810d-3b6f77e1022f-pod-info\") pod \"fdb40d40-7926-424a-810d-3b6f77e1022f\" (UID: \"fdb40d40-7926-424a-810d-3b6f77e1022f\") " Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.204709 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fdb40d40-7926-424a-810d-3b6f77e1022f-config-data\") pod \"fdb40d40-7926-424a-810d-3b6f77e1022f\" (UID: \"fdb40d40-7926-424a-810d-3b6f77e1022f\") " Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.204777 4893 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fdb40d40-7926-424a-810d-3b6f77e1022f-rabbitmq-confd\") pod \"fdb40d40-7926-424a-810d-3b6f77e1022f\" (UID: \"fdb40d40-7926-424a-810d-3b6f77e1022f\") " Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.204804 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"fdb40d40-7926-424a-810d-3b6f77e1022f\" (UID: \"fdb40d40-7926-424a-810d-3b6f77e1022f\") " Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.204877 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fdb40d40-7926-424a-810d-3b6f77e1022f-rabbitmq-plugins\") pod \"fdb40d40-7926-424a-810d-3b6f77e1022f\" (UID: \"fdb40d40-7926-424a-810d-3b6f77e1022f\") " Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.205246 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdb40d40-7926-424a-810d-3b6f77e1022f-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "fdb40d40-7926-424a-810d-3b6f77e1022f" (UID: "fdb40d40-7926-424a-810d-3b6f77e1022f"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.205807 4893 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fdb40d40-7926-424a-810d-3b6f77e1022f-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.205833 4893 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f891af55-ec46-4261-9f5e-01a1c181f194-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.205870 4893 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f891af55-ec46-4261-9f5e-01a1c181f194-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.205881 4893 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f891af55-ec46-4261-9f5e-01a1c181f194-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.205892 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f891af55-ec46-4261-9f5e-01a1c181f194-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.205905 4893 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f891af55-ec46-4261-9f5e-01a1c181f194-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.205915 4893 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f891af55-ec46-4261-9f5e-01a1c181f194-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.205944 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nz5lg\" (UniqueName: \"kubernetes.io/projected/f891af55-ec46-4261-9f5e-01a1c181f194-kube-api-access-nz5lg\") on node \"crc\" DevicePath 
\"\"" Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.206024 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdb40d40-7926-424a-810d-3b6f77e1022f-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "fdb40d40-7926-424a-810d-3b6f77e1022f" (UID: "fdb40d40-7926-424a-810d-3b6f77e1022f"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.206435 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdb40d40-7926-424a-810d-3b6f77e1022f-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "fdb40d40-7926-424a-810d-3b6f77e1022f" (UID: "fdb40d40-7926-424a-810d-3b6f77e1022f"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.207973 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdb40d40-7926-424a-810d-3b6f77e1022f-kube-api-access-t5fqr" (OuterVolumeSpecName: "kube-api-access-t5fqr") pod "fdb40d40-7926-424a-810d-3b6f77e1022f" (UID: "fdb40d40-7926-424a-810d-3b6f77e1022f"). InnerVolumeSpecName "kube-api-access-t5fqr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.208740 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f891af55-ec46-4261-9f5e-01a1c181f194-config-data" (OuterVolumeSpecName: "config-data") pod "f891af55-ec46-4261-9f5e-01a1c181f194" (UID: "f891af55-ec46-4261-9f5e-01a1c181f194"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.211196 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "persistence") pod "fdb40d40-7926-424a-810d-3b6f77e1022f" (UID: "fdb40d40-7926-424a-810d-3b6f77e1022f"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.211270 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdb40d40-7926-424a-810d-3b6f77e1022f-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "fdb40d40-7926-424a-810d-3b6f77e1022f" (UID: "fdb40d40-7926-424a-810d-3b6f77e1022f"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.214161 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/fdb40d40-7926-424a-810d-3b6f77e1022f-pod-info" (OuterVolumeSpecName: "pod-info") pod "fdb40d40-7926-424a-810d-3b6f77e1022f" (UID: "fdb40d40-7926-424a-810d-3b6f77e1022f"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.220891 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdb40d40-7926-424a-810d-3b6f77e1022f-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "fdb40d40-7926-424a-810d-3b6f77e1022f" (UID: "fdb40d40-7926-424a-810d-3b6f77e1022f"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.223939 4893 scope.go:117] "RemoveContainer" containerID="ad34c1fe2a091616f40d2e25e67f46102e054ffaf965ca71fc5193bf96e1733d" Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.229512 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdb40d40-7926-424a-810d-3b6f77e1022f-config-data" (OuterVolumeSpecName: "config-data") pod "fdb40d40-7926-424a-810d-3b6f77e1022f" (UID: "fdb40d40-7926-424a-810d-3b6f77e1022f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.242491 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdb40d40-7926-424a-810d-3b6f77e1022f-server-conf" (OuterVolumeSpecName: "server-conf") pod "fdb40d40-7926-424a-810d-3b6f77e1022f" (UID: "fdb40d40-7926-424a-810d-3b6f77e1022f"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.243632 4893 scope.go:117] "RemoveContainer" containerID="3a24623a75e32ef570ca14893ccdb6089f419296939cd0c5276caec748921d6e" Jan 21 07:22:28 crc kubenswrapper[4893]: E0121 07:22:28.244169 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a24623a75e32ef570ca14893ccdb6089f419296939cd0c5276caec748921d6e\": container with ID starting with 3a24623a75e32ef570ca14893ccdb6089f419296939cd0c5276caec748921d6e not found: ID does not exist" containerID="3a24623a75e32ef570ca14893ccdb6089f419296939cd0c5276caec748921d6e" Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.244223 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a24623a75e32ef570ca14893ccdb6089f419296939cd0c5276caec748921d6e"} err="failed to get container status \"3a24623a75e32ef570ca14893ccdb6089f419296939cd0c5276caec748921d6e\": rpc error: code = NotFound desc = could not find container \"3a24623a75e32ef570ca14893ccdb6089f419296939cd0c5276caec748921d6e\": container with ID starting with 3a24623a75e32ef570ca14893ccdb6089f419296939cd0c5276caec748921d6e not found: ID does not exist" Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.244265 4893 scope.go:117] "RemoveContainer" containerID="dd3970844ff87242006efef7f85a07b1307bc7fcf1c2b53f8a03f6f42dcb3a60" Jan 21 07:22:28 crc kubenswrapper[4893]: E0121 07:22:28.244858 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd3970844ff87242006efef7f85a07b1307bc7fcf1c2b53f8a03f6f42dcb3a60\": container with ID starting with dd3970844ff87242006efef7f85a07b1307bc7fcf1c2b53f8a03f6f42dcb3a60 not found: ID does not exist" containerID="dd3970844ff87242006efef7f85a07b1307bc7fcf1c2b53f8a03f6f42dcb3a60" Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.244898 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd3970844ff87242006efef7f85a07b1307bc7fcf1c2b53f8a03f6f42dcb3a60"} err="failed to get container status \"dd3970844ff87242006efef7f85a07b1307bc7fcf1c2b53f8a03f6f42dcb3a60\": rpc error: code = NotFound desc = could not find container \"dd3970844ff87242006efef7f85a07b1307bc7fcf1c2b53f8a03f6f42dcb3a60\": container with ID starting with dd3970844ff87242006efef7f85a07b1307bc7fcf1c2b53f8a03f6f42dcb3a60 not 
found: ID does not exist"
Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.244923 4893 scope.go:117] "RemoveContainer" containerID="3bb1f3a7d2d6b35737c02944ddbf53eb946eb7cc400a59439dbd01bed9d2650a"
Jan 21 07:22:28 crc kubenswrapper[4893]: E0121 07:22:28.245338 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bb1f3a7d2d6b35737c02944ddbf53eb946eb7cc400a59439dbd01bed9d2650a\": container with ID starting with 3bb1f3a7d2d6b35737c02944ddbf53eb946eb7cc400a59439dbd01bed9d2650a not found: ID does not exist" containerID="3bb1f3a7d2d6b35737c02944ddbf53eb946eb7cc400a59439dbd01bed9d2650a"
Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.245384 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bb1f3a7d2d6b35737c02944ddbf53eb946eb7cc400a59439dbd01bed9d2650a"} err="failed to get container status \"3bb1f3a7d2d6b35737c02944ddbf53eb946eb7cc400a59439dbd01bed9d2650a\": rpc error: code = NotFound desc = could not find container \"3bb1f3a7d2d6b35737c02944ddbf53eb946eb7cc400a59439dbd01bed9d2650a\": container with ID starting with 3bb1f3a7d2d6b35737c02944ddbf53eb946eb7cc400a59439dbd01bed9d2650a not found: ID does not exist"
Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.245415 4893 scope.go:117] "RemoveContainer" containerID="ad34c1fe2a091616f40d2e25e67f46102e054ffaf965ca71fc5193bf96e1733d"
Jan 21 07:22:28 crc kubenswrapper[4893]: E0121 07:22:28.245858 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad34c1fe2a091616f40d2e25e67f46102e054ffaf965ca71fc5193bf96e1733d\": container with ID starting with ad34c1fe2a091616f40d2e25e67f46102e054ffaf965ca71fc5193bf96e1733d not found: ID does not exist" containerID="ad34c1fe2a091616f40d2e25e67f46102e054ffaf965ca71fc5193bf96e1733d"
Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.245895 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad34c1fe2a091616f40d2e25e67f46102e054ffaf965ca71fc5193bf96e1733d"} err="failed to get container status \"ad34c1fe2a091616f40d2e25e67f46102e054ffaf965ca71fc5193bf96e1733d\": rpc error: code = NotFound desc = could not find container \"ad34c1fe2a091616f40d2e25e67f46102e054ffaf965ca71fc5193bf96e1733d\": container with ID starting with ad34c1fe2a091616f40d2e25e67f46102e054ffaf965ca71fc5193bf96e1733d not found: ID does not exist"
Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.245911 4893 scope.go:117] "RemoveContainer" containerID="1e8fba93ba68503252ec9f557809dd4ae5415e79adbfc9b32997fb9b75ac0b79"
Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.290595 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdb40d40-7926-424a-810d-3b6f77e1022f-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "fdb40d40-7926-424a-810d-3b6f77e1022f" (UID: "fdb40d40-7926-424a-810d-3b6f77e1022f"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.306904 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04e84192-2873-4f45-855d-d755d99e7946-config-data\") pod \"04e84192-2873-4f45-855d-d755d99e7946\" (UID: \"04e84192-2873-4f45-855d-d755d99e7946\") "
Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.307092 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04e84192-2873-4f45-855d-d755d99e7946-combined-ca-bundle\") pod \"04e84192-2873-4f45-855d-d755d99e7946\" (UID: \"04e84192-2873-4f45-855d-d755d99e7946\") "
Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.307187 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bl6cq\" (UniqueName: \"kubernetes.io/projected/04e84192-2873-4f45-855d-d755d99e7946-kube-api-access-bl6cq\") pod \"04e84192-2873-4f45-855d-d755d99e7946\" (UID: \"04e84192-2873-4f45-855d-d755d99e7946\") "
Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.307563 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5fqr\" (UniqueName: \"kubernetes.io/projected/fdb40d40-7926-424a-810d-3b6f77e1022f-kube-api-access-t5fqr\") on node \"crc\" DevicePath \"\""
Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.307589 4893 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fdb40d40-7926-424a-810d-3b6f77e1022f-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.307603 4893 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fdb40d40-7926-424a-810d-3b6f77e1022f-plugins-conf\") on node \"crc\" DevicePath \"\""
Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.307615 4893 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fdb40d40-7926-424a-810d-3b6f77e1022f-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.307627 4893 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fdb40d40-7926-424a-810d-3b6f77e1022f-server-conf\") on node \"crc\" DevicePath \"\""
Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.307642 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f891af55-ec46-4261-9f5e-01a1c181f194-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.307653 4893 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fdb40d40-7926-424a-810d-3b6f77e1022f-pod-info\") on node \"crc\" DevicePath \"\""
Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.307664 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fdb40d40-7926-424a-810d-3b6f77e1022f-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.307692 4893 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fdb40d40-7926-424a-810d-3b6f77e1022f-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.307719 4893 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" "
Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.307732 4893 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fdb40d40-7926-424a-810d-3b6f77e1022f-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.312082 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04e84192-2873-4f45-855d-d755d99e7946-kube-api-access-bl6cq" (OuterVolumeSpecName: "kube-api-access-bl6cq") pod "04e84192-2873-4f45-855d-d755d99e7946" (UID: "04e84192-2873-4f45-855d-d755d99e7946"). InnerVolumeSpecName "kube-api-access-bl6cq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.326496 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04e84192-2873-4f45-855d-d755d99e7946-config-data" (OuterVolumeSpecName: "config-data") pod "04e84192-2873-4f45-855d-d755d99e7946" (UID: "04e84192-2873-4f45-855d-d755d99e7946"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.329751 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04e84192-2873-4f45-855d-d755d99e7946-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04e84192-2873-4f45-855d-d755d99e7946" (UID: "04e84192-2873-4f45-855d-d755d99e7946"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.331533 4893 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc"
Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.410718 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04e84192-2873-4f45-855d-d755d99e7946-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.410760 4893 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\""
Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.410774 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bl6cq\" (UniqueName: \"kubernetes.io/projected/04e84192-2873-4f45-855d-d755d99e7946-kube-api-access-bl6cq\") on node \"crc\" DevicePath \"\""
Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.410789 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04e84192-2873-4f45-855d-d755d99e7946-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.433127 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.443926 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.454288 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 07:22:28 crc kubenswrapper[4893]: I0121 07:22:28.461991 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 07:22:29 crc kubenswrapper[4893]: I0121 07:22:29.224214 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"04e84192-2873-4f45-855d-d755d99e7946","Type":"ContainerDied","Data":"6dd06ee1e46536661d1fe85df7ddcff590bb2c5998939704f0bbc825bb78d209"}
Jan 21 07:22:29 crc kubenswrapper[4893]: I0121 07:22:29.224270 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 21 07:22:29 crc kubenswrapper[4893]: I0121 07:22:29.224288 4893 scope.go:117] "RemoveContainer" containerID="2c1520ddf2448545568bfad1712f1cbe491d42f3fe5bd60c6b96dce8d4a01c86"
Jan 21 07:22:29 crc kubenswrapper[4893]: I0121 07:22:29.262169 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 21 07:22:29 crc kubenswrapper[4893]: I0121 07:22:29.267181 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 21 07:22:29 crc kubenswrapper[4893]: I0121 07:22:29.628637 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04e84192-2873-4f45-855d-d755d99e7946" path="/var/lib/kubelet/pods/04e84192-2873-4f45-855d-d755d99e7946/volumes"
Jan 21 07:22:29 crc kubenswrapper[4893]: I0121 07:22:29.629320 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cfa1d66-684f-43de-b751-1da2399d48ee" path="/var/lib/kubelet/pods/1cfa1d66-684f-43de-b751-1da2399d48ee/volumes"
Jan 21 07:22:29 crc kubenswrapper[4893]: I0121 07:22:29.630041 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80680178-a1d2-4135-8949-881dc7ac92ea" path="/var/lib/kubelet/pods/80680178-a1d2-4135-8949-881dc7ac92ea/volumes"
Jan 21 07:22:29 crc kubenswrapper[4893]: I0121 07:22:29.631568 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f891af55-ec46-4261-9f5e-01a1c181f194" path="/var/lib/kubelet/pods/f891af55-ec46-4261-9f5e-01a1c181f194/volumes"
Jan 21 07:22:29 crc kubenswrapper[4893]: I0121 07:22:29.632763 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdb40d40-7926-424a-810d-3b6f77e1022f" path="/var/lib/kubelet/pods/fdb40d40-7926-424a-810d-3b6f77e1022f/volumes"
Jan 21 07:22:30 crc kubenswrapper[4893]: E0121 07:22:30.433988 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ee2ac52ca03e9ba8604209edd0e24ede0af7849c83ac6195ee87a7943fa359b3 is running failed: container process not found" containerID="ee2ac52ca03e9ba8604209edd0e24ede0af7849c83ac6195ee87a7943fa359b3" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 21 07:22:30 crc kubenswrapper[4893]: E0121 07:22:30.434920 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ee2ac52ca03e9ba8604209edd0e24ede0af7849c83ac6195ee87a7943fa359b3 is running failed: container process not found" containerID="ee2ac52ca03e9ba8604209edd0e24ede0af7849c83ac6195ee87a7943fa359b3" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 21 07:22:30 crc kubenswrapper[4893]: E0121 07:22:30.435664 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ee2ac52ca03e9ba8604209edd0e24ede0af7849c83ac6195ee87a7943fa359b3 is running failed: container process not found" containerID="ee2ac52ca03e9ba8604209edd0e24ede0af7849c83ac6195ee87a7943fa359b3" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 21 07:22:30 crc kubenswrapper[4893]: E0121 07:22:30.435757 4893 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ee2ac52ca03e9ba8604209edd0e24ede0af7849c83ac6195ee87a7943fa359b3 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-zvt96" podUID="78d5f974-5570-4407-8dbe-7471ae98fd50" containerName="ovsdb-server"
Jan 21 07:22:30 crc kubenswrapper[4893]: E0121 07:22:30.436383 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="641f45d881156d21fd8815cd5b5efbac82f8de33d1a526e07cb2065a85cb4351" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 21 07:22:30 crc kubenswrapper[4893]: E0121 07:22:30.439548 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="641f45d881156d21fd8815cd5b5efbac82f8de33d1a526e07cb2065a85cb4351" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 21 07:22:30 crc kubenswrapper[4893]: E0121 07:22:30.441905 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="641f45d881156d21fd8815cd5b5efbac82f8de33d1a526e07cb2065a85cb4351" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 21 07:22:30 crc kubenswrapper[4893]: E0121 07:22:30.441991 4893 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-zvt96" podUID="78d5f974-5570-4407-8dbe-7471ae98fd50" containerName="ovs-vswitchd"
Jan 21 07:22:35 crc kubenswrapper[4893]: E0121 07:22:35.434166 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ee2ac52ca03e9ba8604209edd0e24ede0af7849c83ac6195ee87a7943fa359b3 is running failed: container process not found" containerID="ee2ac52ca03e9ba8604209edd0e24ede0af7849c83ac6195ee87a7943fa359b3" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 21 07:22:35 crc kubenswrapper[4893]: E0121 07:22:35.435325 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ee2ac52ca03e9ba8604209edd0e24ede0af7849c83ac6195ee87a7943fa359b3 is running failed: container process not found" containerID="ee2ac52ca03e9ba8604209edd0e24ede0af7849c83ac6195ee87a7943fa359b3" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 21 07:22:35 crc kubenswrapper[4893]: E0121 07:22:35.435849 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ee2ac52ca03e9ba8604209edd0e24ede0af7849c83ac6195ee87a7943fa359b3 is running failed: container process not found" containerID="ee2ac52ca03e9ba8604209edd0e24ede0af7849c83ac6195ee87a7943fa359b3" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 21 07:22:35 crc kubenswrapper[4893]: E0121 07:22:35.435935 4893 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ee2ac52ca03e9ba8604209edd0e24ede0af7849c83ac6195ee87a7943fa359b3 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-zvt96" podUID="78d5f974-5570-4407-8dbe-7471ae98fd50" containerName="ovsdb-server"
Jan 21 07:22:35 crc kubenswrapper[4893]: E0121 07:22:35.436982 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="641f45d881156d21fd8815cd5b5efbac82f8de33d1a526e07cb2065a85cb4351" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 21 07:22:35 crc kubenswrapper[4893]: E0121 07:22:35.439024 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="641f45d881156d21fd8815cd5b5efbac82f8de33d1a526e07cb2065a85cb4351" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 21 07:22:35 crc kubenswrapper[4893]: E0121 07:22:35.441286 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="641f45d881156d21fd8815cd5b5efbac82f8de33d1a526e07cb2065a85cb4351" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 21 07:22:35 crc kubenswrapper[4893]: E0121 07:22:35.441375 4893 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-zvt96" podUID="78d5f974-5570-4407-8dbe-7471ae98fd50" containerName="ovs-vswitchd"
Jan 21 07:22:38 crc kubenswrapper[4893]: I0121 07:22:38.580996 4893 scope.go:117] "RemoveContainer" containerID="325a0207fb4c2ec1fa7e041a8980c7916a769a35b943f6d62d67be9f953dbe2f"
Jan 21 07:22:38 crc kubenswrapper[4893]: E0121 07:22:38.582164 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a"
Jan 21 07:22:40 crc kubenswrapper[4893]: E0121 07:22:40.434898 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ee2ac52ca03e9ba8604209edd0e24ede0af7849c83ac6195ee87a7943fa359b3 is running failed: container process not found" containerID="ee2ac52ca03e9ba8604209edd0e24ede0af7849c83ac6195ee87a7943fa359b3" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 21 07:22:40 crc kubenswrapper[4893]: E0121 07:22:40.436022 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ee2ac52ca03e9ba8604209edd0e24ede0af7849c83ac6195ee87a7943fa359b3 is running failed: container process not found" containerID="ee2ac52ca03e9ba8604209edd0e24ede0af7849c83ac6195ee87a7943fa359b3" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 21 07:22:40 crc kubenswrapper[4893]: E0121 07:22:40.436474 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ee2ac52ca03e9ba8604209edd0e24ede0af7849c83ac6195ee87a7943fa359b3 is running failed: container process not found" containerID="ee2ac52ca03e9ba8604209edd0e24ede0af7849c83ac6195ee87a7943fa359b3" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 21 07:22:40 crc kubenswrapper[4893]: E0121 07:22:40.436525 4893 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ee2ac52ca03e9ba8604209edd0e24ede0af7849c83ac6195ee87a7943fa359b3 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-zvt96" podUID="78d5f974-5570-4407-8dbe-7471ae98fd50" containerName="ovsdb-server"
Jan 21 07:22:40 crc kubenswrapper[4893]: E0121 07:22:40.436817 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="641f45d881156d21fd8815cd5b5efbac82f8de33d1a526e07cb2065a85cb4351" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 21 07:22:40 crc kubenswrapper[4893]: E0121 07:22:40.439333 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="641f45d881156d21fd8815cd5b5efbac82f8de33d1a526e07cb2065a85cb4351" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 21 07:22:40 crc kubenswrapper[4893]: E0121 07:22:40.441308 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="641f45d881156d21fd8815cd5b5efbac82f8de33d1a526e07cb2065a85cb4351" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 21 07:22:40 crc kubenswrapper[4893]: E0121 07:22:40.441380 4893 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-zvt96" podUID="78d5f974-5570-4407-8dbe-7471ae98fd50" containerName="ovs-vswitchd"
Jan 21 07:22:45 crc kubenswrapper[4893]: E0121 07:22:45.433870 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ee2ac52ca03e9ba8604209edd0e24ede0af7849c83ac6195ee87a7943fa359b3 is running failed: container process not found" containerID="ee2ac52ca03e9ba8604209edd0e24ede0af7849c83ac6195ee87a7943fa359b3" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 21 07:22:45 crc kubenswrapper[4893]: E0121 07:22:45.435619 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ee2ac52ca03e9ba8604209edd0e24ede0af7849c83ac6195ee87a7943fa359b3 is running failed: container process not found" containerID="ee2ac52ca03e9ba8604209edd0e24ede0af7849c83ac6195ee87a7943fa359b3" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 21 07:22:45 crc kubenswrapper[4893]: E0121 07:22:45.436418 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ee2ac52ca03e9ba8604209edd0e24ede0af7849c83ac6195ee87a7943fa359b3 is running failed: container process not found" containerID="ee2ac52ca03e9ba8604209edd0e24ede0af7849c83ac6195ee87a7943fa359b3" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 21 07:22:45 crc kubenswrapper[4893]: E0121 07:22:45.436505 4893 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ee2ac52ca03e9ba8604209edd0e24ede0af7849c83ac6195ee87a7943fa359b3 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-zvt96" podUID="78d5f974-5570-4407-8dbe-7471ae98fd50" containerName="ovsdb-server"
Jan 21 07:22:45 crc kubenswrapper[4893]: E0121 07:22:45.436972 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="641f45d881156d21fd8815cd5b5efbac82f8de33d1a526e07cb2065a85cb4351" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 21 07:22:45 crc kubenswrapper[4893]: E0121 07:22:45.438282 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="641f45d881156d21fd8815cd5b5efbac82f8de33d1a526e07cb2065a85cb4351" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 21 07:22:45 crc kubenswrapper[4893]: E0121 07:22:45.439915 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="641f45d881156d21fd8815cd5b5efbac82f8de33d1a526e07cb2065a85cb4351" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 21 07:22:45 crc kubenswrapper[4893]: E0121 07:22:45.439965 4893 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-zvt96" podUID="78d5f974-5570-4407-8dbe-7471ae98fd50" containerName="ovs-vswitchd"
Jan 21 07:22:49 crc kubenswrapper[4893]: I0121 07:22:49.283551 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zvt96_78d5f974-5570-4407-8dbe-7471ae98fd50/ovs-vswitchd/0.log"
Jan 21 07:22:49 crc kubenswrapper[4893]: I0121 07:22:49.285362 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-zvt96"
Jan 21 07:22:49 crc kubenswrapper[4893]: I0121 07:22:49.444012 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/78d5f974-5570-4407-8dbe-7471ae98fd50-etc-ovs\") pod \"78d5f974-5570-4407-8dbe-7471ae98fd50\" (UID: \"78d5f974-5570-4407-8dbe-7471ae98fd50\") "
Jan 21 07:22:49 crc kubenswrapper[4893]: I0121 07:22:49.444077 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/78d5f974-5570-4407-8dbe-7471ae98fd50-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "78d5f974-5570-4407-8dbe-7471ae98fd50" (UID: "78d5f974-5570-4407-8dbe-7471ae98fd50"). InnerVolumeSpecName "etc-ovs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 07:22:49 crc kubenswrapper[4893]: I0121 07:22:49.444146 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/78d5f974-5570-4407-8dbe-7471ae98fd50-scripts\") pod \"78d5f974-5570-4407-8dbe-7471ae98fd50\" (UID: \"78d5f974-5570-4407-8dbe-7471ae98fd50\") "
Jan 21 07:22:49 crc kubenswrapper[4893]: I0121 07:22:49.444183 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/78d5f974-5570-4407-8dbe-7471ae98fd50-var-run\") pod \"78d5f974-5570-4407-8dbe-7471ae98fd50\" (UID: \"78d5f974-5570-4407-8dbe-7471ae98fd50\") "
Jan 21 07:22:49 crc kubenswrapper[4893]: I0121 07:22:49.444282 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/78d5f974-5570-4407-8dbe-7471ae98fd50-var-lib\") pod \"78d5f974-5570-4407-8dbe-7471ae98fd50\" (UID: \"78d5f974-5570-4407-8dbe-7471ae98fd50\") "
Jan 21 07:22:49 crc kubenswrapper[4893]: I0121 07:22:49.444344 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rl7zx\" (UniqueName: \"kubernetes.io/projected/78d5f974-5570-4407-8dbe-7471ae98fd50-kube-api-access-rl7zx\") pod \"78d5f974-5570-4407-8dbe-7471ae98fd50\" (UID: \"78d5f974-5570-4407-8dbe-7471ae98fd50\") "
Jan 21 07:22:49 crc kubenswrapper[4893]: I0121 07:22:49.444399 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/78d5f974-5570-4407-8dbe-7471ae98fd50-var-log\") pod \"78d5f974-5570-4407-8dbe-7471ae98fd50\" (UID: \"78d5f974-5570-4407-8dbe-7471ae98fd50\") "
Jan 21 07:22:49 crc kubenswrapper[4893]: I0121 07:22:49.444449 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/78d5f974-5570-4407-8dbe-7471ae98fd50-var-run" (OuterVolumeSpecName: "var-run") pod "78d5f974-5570-4407-8dbe-7471ae98fd50" (UID: "78d5f974-5570-4407-8dbe-7471ae98fd50"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 07:22:49 crc kubenswrapper[4893]: I0121 07:22:49.444638 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/78d5f974-5570-4407-8dbe-7471ae98fd50-var-log" (OuterVolumeSpecName: "var-log") pod "78d5f974-5570-4407-8dbe-7471ae98fd50" (UID: "78d5f974-5570-4407-8dbe-7471ae98fd50"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 07:22:49 crc kubenswrapper[4893]: I0121 07:22:49.445136 4893 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/78d5f974-5570-4407-8dbe-7471ae98fd50-var-log\") on node \"crc\" DevicePath \"\""
Jan 21 07:22:49 crc kubenswrapper[4893]: I0121 07:22:49.445167 4893 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/78d5f974-5570-4407-8dbe-7471ae98fd50-etc-ovs\") on node \"crc\" DevicePath \"\""
Jan 21 07:22:49 crc kubenswrapper[4893]: I0121 07:22:49.445187 4893 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/78d5f974-5570-4407-8dbe-7471ae98fd50-var-run\") on node \"crc\" DevicePath \"\""
Jan 21 07:22:49 crc kubenswrapper[4893]: I0121 07:22:49.445335 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/78d5f974-5570-4407-8dbe-7471ae98fd50-var-lib" (OuterVolumeSpecName: "var-lib") pod "78d5f974-5570-4407-8dbe-7471ae98fd50" (UID: "78d5f974-5570-4407-8dbe-7471ae98fd50"). InnerVolumeSpecName "var-lib". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 07:22:49 crc kubenswrapper[4893]: I0121 07:22:49.445615 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78d5f974-5570-4407-8dbe-7471ae98fd50-scripts" (OuterVolumeSpecName: "scripts") pod "78d5f974-5570-4407-8dbe-7471ae98fd50" (UID: "78d5f974-5570-4407-8dbe-7471ae98fd50"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 07:22:49 crc kubenswrapper[4893]: I0121 07:22:49.454133 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78d5f974-5570-4407-8dbe-7471ae98fd50-kube-api-access-rl7zx" (OuterVolumeSpecName: "kube-api-access-rl7zx") pod "78d5f974-5570-4407-8dbe-7471ae98fd50" (UID: "78d5f974-5570-4407-8dbe-7471ae98fd50"). InnerVolumeSpecName "kube-api-access-rl7zx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 07:22:49 crc kubenswrapper[4893]: I0121 07:22:49.547163 4893 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/78d5f974-5570-4407-8dbe-7471ae98fd50-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 07:22:49 crc kubenswrapper[4893]: I0121 07:22:49.547229 4893 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/78d5f974-5570-4407-8dbe-7471ae98fd50-var-lib\") on node \"crc\" DevicePath \"\""
Jan 21 07:22:49 crc kubenswrapper[4893]: I0121 07:22:49.547249 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rl7zx\" (UniqueName: \"kubernetes.io/projected/78d5f974-5570-4407-8dbe-7471ae98fd50-kube-api-access-rl7zx\") on node \"crc\" DevicePath \"\""
Jan 21 07:22:49 crc kubenswrapper[4893]: I0121 07:22:49.766769 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zvt96_78d5f974-5570-4407-8dbe-7471ae98fd50/ovs-vswitchd/0.log"
Jan 21 07:22:49 crc kubenswrapper[4893]: I0121 07:22:49.768522 4893 generic.go:334] "Generic (PLEG): container finished" podID="78d5f974-5570-4407-8dbe-7471ae98fd50" containerID="641f45d881156d21fd8815cd5b5efbac82f8de33d1a526e07cb2065a85cb4351" exitCode=137
Jan 21 07:22:49 crc kubenswrapper[4893]: I0121 07:22:49.768823 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zvt96" event={"ID":"78d5f974-5570-4407-8dbe-7471ae98fd50","Type":"ContainerDied","Data":"641f45d881156d21fd8815cd5b5efbac82f8de33d1a526e07cb2065a85cb4351"}
Jan 21 07:22:49 crc kubenswrapper[4893]: I0121 07:22:49.768719 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-zvt96"
Jan 21 07:22:49 crc kubenswrapper[4893]: I0121 07:22:49.769107 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zvt96" event={"ID":"78d5f974-5570-4407-8dbe-7471ae98fd50","Type":"ContainerDied","Data":"c8e7d2ddaea979663df553c4b5e9392d4ab7f1a7c28eb6b598fb8c5772fbb88f"}
Jan 21 07:22:49 crc kubenswrapper[4893]: I0121 07:22:49.769339 4893 scope.go:117] "RemoveContainer" containerID="641f45d881156d21fd8815cd5b5efbac82f8de33d1a526e07cb2065a85cb4351"
Jan 21 07:22:49 crc kubenswrapper[4893]: I0121 07:22:49.790037 4893 generic.go:334] "Generic (PLEG): container finished" podID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerID="6a87281c1caeb6e4039eec07e768f22f9b309659361f188928eda3e3a1dbb21a" exitCode=137
Jan 21 07:22:49 crc kubenswrapper[4893]: I0121 07:22:49.790277 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b","Type":"ContainerDied","Data":"6a87281c1caeb6e4039eec07e768f22f9b309659361f188928eda3e3a1dbb21a"}
Jan 21 07:22:49 crc kubenswrapper[4893]: I0121 07:22:49.808247 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-zvt96"]
Jan 21 07:22:49 crc kubenswrapper[4893]: I0121 07:22:49.817006 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-zvt96"]
Jan 21 07:22:49 crc kubenswrapper[4893]: I0121 07:22:49.828608 4893 scope.go:117] "RemoveContainer" containerID="ee2ac52ca03e9ba8604209edd0e24ede0af7849c83ac6195ee87a7943fa359b3"
Jan 21 07:22:49 crc kubenswrapper[4893]: I0121 07:22:49.851013 4893 scope.go:117] "RemoveContainer" containerID="0902eba70db7f26491e33d8c0ee5f46c52338fff31166adc692390c1065669c6"
Jan 21 07:22:49 crc kubenswrapper[4893]: I0121 07:22:49.891637 4893 scope.go:117] "RemoveContainer" containerID="641f45d881156d21fd8815cd5b5efbac82f8de33d1a526e07cb2065a85cb4351"
Jan 21 07:22:49 crc kubenswrapper[4893]: E0121 07:22:49.892593 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"641f45d881156d21fd8815cd5b5efbac82f8de33d1a526e07cb2065a85cb4351\": container with ID starting with 641f45d881156d21fd8815cd5b5efbac82f8de33d1a526e07cb2065a85cb4351 not found: ID does not exist" containerID="641f45d881156d21fd8815cd5b5efbac82f8de33d1a526e07cb2065a85cb4351"
Jan 21 07:22:49 crc kubenswrapper[4893]: I0121 07:22:49.892731 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"641f45d881156d21fd8815cd5b5efbac82f8de33d1a526e07cb2065a85cb4351"} err="failed to get container status \"641f45d881156d21fd8815cd5b5efbac82f8de33d1a526e07cb2065a85cb4351\": rpc error: code = NotFound desc = could not find container \"641f45d881156d21fd8815cd5b5efbac82f8de33d1a526e07cb2065a85cb4351\": container with ID starting with 641f45d881156d21fd8815cd5b5efbac82f8de33d1a526e07cb2065a85cb4351 not found: ID does not exist"
Jan 21 07:22:49 crc kubenswrapper[4893]: I0121 07:22:49.892817 4893 scope.go:117] "RemoveContainer" containerID="ee2ac52ca03e9ba8604209edd0e24ede0af7849c83ac6195ee87a7943fa359b3"
Jan 21 07:22:49 crc kubenswrapper[4893]: E0121 07:22:49.893491 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee2ac52ca03e9ba8604209edd0e24ede0af7849c83ac6195ee87a7943fa359b3\": container with ID starting with ee2ac52ca03e9ba8604209edd0e24ede0af7849c83ac6195ee87a7943fa359b3 not found: ID does not exist" containerID="ee2ac52ca03e9ba8604209edd0e24ede0af7849c83ac6195ee87a7943fa359b3"
Jan 21 07:22:49 crc kubenswrapper[4893]: I0121 07:22:49.893595 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee2ac52ca03e9ba8604209edd0e24ede0af7849c83ac6195ee87a7943fa359b3"} err="failed to get container status \"ee2ac52ca03e9ba8604209edd0e24ede0af7849c83ac6195ee87a7943fa359b3\": rpc error: code = NotFound desc = could not find container \"ee2ac52ca03e9ba8604209edd0e24ede0af7849c83ac6195ee87a7943fa359b3\": container with ID starting with ee2ac52ca03e9ba8604209edd0e24ede0af7849c83ac6195ee87a7943fa359b3 not found: ID does not exist"
Jan 21 07:22:49 crc kubenswrapper[4893]: I0121 07:22:49.893698 4893 scope.go:117] "RemoveContainer" containerID="0902eba70db7f26491e33d8c0ee5f46c52338fff31166adc692390c1065669c6"
Jan 21 07:22:49 crc kubenswrapper[4893]: E0121 07:22:49.894358 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0902eba70db7f26491e33d8c0ee5f46c52338fff31166adc692390c1065669c6\": container with ID starting with 0902eba70db7f26491e33d8c0ee5f46c52338fff31166adc692390c1065669c6 not found: ID does not exist" containerID="0902eba70db7f26491e33d8c0ee5f46c52338fff31166adc692390c1065669c6"
Jan 21 07:22:49 crc kubenswrapper[4893]: I0121 07:22:49.894447 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0902eba70db7f26491e33d8c0ee5f46c52338fff31166adc692390c1065669c6"} err="failed to get container status \"0902eba70db7f26491e33d8c0ee5f46c52338fff31166adc692390c1065669c6\": rpc error: code = NotFound desc = could not find container \"0902eba70db7f26491e33d8c0ee5f46c52338fff31166adc692390c1065669c6\": container with ID starting with 0902eba70db7f26491e33d8c0ee5f46c52338fff31166adc692390c1065669c6 not found: ID does not exist"
Jan 21 07:22:50 crc kubenswrapper[4893]: I0121 07:22:50.313100 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Jan 21 07:22:50 crc kubenswrapper[4893]: I0121 07:22:50.463618 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b-lock\") pod \"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b\" (UID: \"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b\") "
Jan 21 07:22:50 crc kubenswrapper[4893]: I0121 07:22:50.463781 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b-cache\") pod \"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b\" (UID: \"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b\") "
Jan 21 07:22:50 crc kubenswrapper[4893]: I0121 07:22:50.463859 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b-etc-swift\") pod \"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b\" (UID: \"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b\") "
Jan 21 07:22:50 crc kubenswrapper[4893]: I0121 07:22:50.464009 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b\" (UID: \"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b\") "
Jan 21 07:22:50 crc kubenswrapper[4893]: I0121 07:22:50.464098 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fh72\" (UniqueName: \"kubernetes.io/projected/1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b-kube-api-access-9fh72\") pod \"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b\" (UID: \"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b\") "
Jan 21 07:22:50 crc kubenswrapper[4893]: I0121 07:22:50.464318 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b-lock" (OuterVolumeSpecName: "lock") pod "1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" (UID: "1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 07:22:50 crc kubenswrapper[4893]: I0121 07:22:50.464524 4893 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b-lock\") on node \"crc\" DevicePath \"\""
Jan 21 07:22:50 crc kubenswrapper[4893]: I0121 07:22:50.464873 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b-cache" (OuterVolumeSpecName: "cache") pod "1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" (UID: "1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 07:22:50 crc kubenswrapper[4893]: I0121 07:22:50.469917 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "swift") pod "1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" (UID: "1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 21 07:22:50 crc kubenswrapper[4893]: I0121 07:22:50.470021 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" (UID: "1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 07:22:50 crc kubenswrapper[4893]: I0121 07:22:50.470431 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b-kube-api-access-9fh72" (OuterVolumeSpecName: "kube-api-access-9fh72") pod "1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" (UID: "1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b"). InnerVolumeSpecName "kube-api-access-9fh72". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 07:22:50 crc kubenswrapper[4893]: I0121 07:22:50.565876 4893 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" "
Jan 21 07:22:50 crc kubenswrapper[4893]: I0121 07:22:50.565937 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fh72\" (UniqueName: \"kubernetes.io/projected/1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b-kube-api-access-9fh72\") on node \"crc\" DevicePath \"\""
Jan 21 07:22:50 crc kubenswrapper[4893]: I0121 07:22:50.565962 4893 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b-cache\") on node \"crc\" DevicePath \"\""
Jan 21 07:22:50 crc kubenswrapper[4893]: I0121 07:22:50.565980 4893 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b-etc-swift\") on node \"crc\" DevicePath \"\""
Jan 21 07:22:50 crc kubenswrapper[4893]: I0121 07:22:50.760805 4893 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc"
Jan 21 07:22:50 crc kubenswrapper[4893]: I0121 07:22:50.812060 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b","Type":"ContainerDied","Data":"be7235450e40f3c081de47d36e67875962a250359752df40f0788f5e6c402593"}
Jan 21 07:22:50 crc kubenswrapper[4893]: I0121 07:22:50.812125 4893 scope.go:117] "RemoveContainer" containerID="6a87281c1caeb6e4039eec07e768f22f9b309659361f188928eda3e3a1dbb21a"
Jan 21 07:22:50 crc kubenswrapper[4893]: I0121 07:22:50.812285 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Jan 21 07:22:50 crc kubenswrapper[4893]: I0121 07:22:50.846346 4893 scope.go:117] "RemoveContainer" containerID="7b5731c8b577d290be2d86e362e9bb9f2c16bff9031dd2e710aba07ac2ce04ed"
Jan 21 07:22:50 crc kubenswrapper[4893]: I0121 07:22:50.847696 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"]
Jan 21 07:22:50 crc kubenswrapper[4893]: I0121 07:22:50.857202 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"]
Jan 21 07:22:50 crc kubenswrapper[4893]: I0121 07:22:50.859016 4893 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\""
Jan 21 07:22:50 crc kubenswrapper[4893]: I0121 07:22:50.864260 4893 scope.go:117] "RemoveContainer" containerID="463d18ccd25d3b9dfd2bc47bf68e566d842db8f27cec0e30693b206ff7b49443"
Jan 21 07:22:50 crc kubenswrapper[4893]: I0121 07:22:50.888016 4893 scope.go:117] "RemoveContainer" containerID="99f8c2ddbd19e36b260905f52c96953335f374caada62eaa5e2f0f5d967d416d"
Jan 21 07:22:50 crc kubenswrapper[4893]: I0121 07:22:50.905637 4893 scope.go:117] "RemoveContainer" containerID="8d08cf04d81866f12ef2bd434ff7bf4f3ff11d56786d98d0d1fac803ddd360ca"
Jan 21 07:22:50 crc kubenswrapper[4893]: I0121 07:22:50.922996 4893 scope.go:117] "RemoveContainer" containerID="f63f20cab196ddeafd343f3658285555d44901b431007768efc93ae2a8129f02"
Jan 21 07:22:50 crc kubenswrapper[4893]: I0121 07:22:50.942980 4893 scope.go:117] "RemoveContainer" containerID="bce00b11d38795c86e6d149154dd8aac8c079d3c7fa177fce0f83ff6166a6875"
Jan 21 07:22:50 crc kubenswrapper[4893]: I0121 07:22:50.963401 4893 scope.go:117] "RemoveContainer" containerID="547e0cdd8689c56343dabedff5738e3f1d04a8d69d96acd746b136ec28002be6"
Jan 21 07:22:50 crc kubenswrapper[4893]: I0121 07:22:50.985055 4893 scope.go:117] "RemoveContainer" containerID="28f0c95878d18811a355a7d69ad8da527d18fc77022addfeaf2830eb6d3f6a58"
Jan 21 07:22:51 crc kubenswrapper[4893]: I0121 07:22:51.002212 4893 scope.go:117] "RemoveContainer" containerID="6c4ea7e3f7722a19ae4b7b9d432e39556b9257e17680f7182fa23f27573643bf"
Jan 21 07:22:51 crc kubenswrapper[4893]: I0121 07:22:51.018061 4893 scope.go:117] "RemoveContainer" containerID="651e96881878d275dfe2a4a1c62471fd4cf86d8d8127d90f3d7087add5021953"
Jan 21 07:22:51 crc kubenswrapper[4893]: I0121 07:22:51.034996 4893 scope.go:117] "RemoveContainer" containerID="10d753ac1428ba120d45a7811e9fca56f7ef4a1826bf444055d4ad6a929e369e"
Jan 21 07:22:51 crc kubenswrapper[4893]: I0121 07:22:51.051087 4893 scope.go:117] "RemoveContainer" containerID="6b3241ca824451ec282b6865606bf40f2795c9f27b6217a6c0357120a18a6e9b"
Jan 21 07:22:51 crc kubenswrapper[4893]: I0121 07:22:51.073991 4893 scope.go:117] "RemoveContainer" containerID="f9ccf20497fe8d76385ede790b0446d55927de6fa28eb3b5854f288b82fc7991"
Jan 21 07:22:51 crc kubenswrapper[4893]: I0121 07:22:51.094659 4893 scope.go:117] "RemoveContainer" containerID="de3d17ba39098b400e960c4859abe64ec8453a5b7438c807895a08f576cb1c61"
Jan 21 07:22:51 crc kubenswrapper[4893]: I0121 07:22:51.595095 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" path="/var/lib/kubelet/pods/1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b/volumes"
Jan 21 07:22:51 crc kubenswrapper[4893]: I0121 07:22:51.599115 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78d5f974-5570-4407-8dbe-7471ae98fd50" path="/var/lib/kubelet/pods/78d5f974-5570-4407-8dbe-7471ae98fd50/volumes"
Jan 21 07:22:53 crc kubenswrapper[4893]: I0121 07:22:53.581742 4893 scope.go:117] "RemoveContainer" containerID="325a0207fb4c2ec1fa7e041a8980c7916a769a35b943f6d62d67be9f953dbe2f"
Jan 21 07:22:53 crc kubenswrapper[4893]: E0121 07:22:53.582830 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a"
Jan 21 07:22:53 crc kubenswrapper[4893]: I0121 07:22:53.938041 4893 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod1236d7dc-6a98-4d59-8a88-f3101bd017ef"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod1236d7dc-6a98-4d59-8a88-f3101bd017ef] : Timed out while waiting for systemd to remove kubepods-besteffort-pod1236d7dc_6a98_4d59_8a88_f3101bd017ef.slice"
Jan 21 07:22:53 crc kubenswrapper[4893]: E0121 07:22:53.938125 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod1236d7dc-6a98-4d59-8a88-f3101bd017ef] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod1236d7dc-6a98-4d59-8a88-f3101bd017ef] : Timed out while waiting for systemd to remove kubepods-besteffort-pod1236d7dc_6a98_4d59_8a88_f3101bd017ef.slice" pod="openstack/cinder-2981-account-create-update-v76jn" podUID="1236d7dc-6a98-4d59-8a88-f3101bd017ef"
Jan 21 07:22:54 crc kubenswrapper[4893]: I0121 07:22:54.006037 4893 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod5b37865c-22cd-4288-b47b-ef9ef1f33646"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod5b37865c-22cd-4288-b47b-ef9ef1f33646] : Timed out while waiting for systemd to remove kubepods-besteffort-pod5b37865c_22cd_4288_b47b_ef9ef1f33646.slice"
Jan 21 07:22:54 crc kubenswrapper[4893]: E0121 07:22:54.006105 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod5b37865c-22cd-4288-b47b-ef9ef1f33646] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod5b37865c-22cd-4288-b47b-ef9ef1f33646] : Timed out while waiting for systemd to remove kubepods-besteffort-pod5b37865c_22cd_4288_b47b_ef9ef1f33646.slice" pod="openstack/openstack-cell1-galera-0" podUID="5b37865c-22cd-4288-b47b-ef9ef1f33646"
Jan 21 07:22:54 crc kubenswrapper[4893]: I0121 07:22:54.020793 4893 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod2caca0fd-0f3f-4725-a196-04463abed671"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod2caca0fd-0f3f-4725-a196-04463abed671] : Timed out while waiting for systemd to remove kubepods-besteffort-pod2caca0fd_0f3f_4725_a196_04463abed671.slice"
Jan 21 07:22:54 crc kubenswrapper[4893]: I0121 07:22:54.874379 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Jan 21 07:22:54 crc kubenswrapper[4893]: I0121 07:22:54.874379 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2981-account-create-update-v76jn"
Jan 21 07:22:54 crc kubenswrapper[4893]: I0121 07:22:54.969511 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-2981-account-create-update-v76jn"]
Jan 21 07:22:54 crc kubenswrapper[4893]: I0121 07:22:54.976849 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-2981-account-create-update-v76jn"]
Jan 21 07:22:54 crc kubenswrapper[4893]: I0121 07:22:54.983368 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 21 07:22:54 crc kubenswrapper[4893]: I0121 07:22:54.991663 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 21 07:22:55 crc kubenswrapper[4893]: I0121 07:22:55.592429 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1236d7dc-6a98-4d59-8a88-f3101bd017ef" path="/var/lib/kubelet/pods/1236d7dc-6a98-4d59-8a88-f3101bd017ef/volumes"
Jan 21 07:22:55 crc kubenswrapper[4893]: I0121 07:22:55.593850 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b37865c-22cd-4288-b47b-ef9ef1f33646" path="/var/lib/kubelet/pods/5b37865c-22cd-4288-b47b-ef9ef1f33646/volumes"
Jan 21 07:23:07 crc kubenswrapper[4893]: I0121 07:23:07.580810 4893 scope.go:117] "RemoveContainer" containerID="325a0207fb4c2ec1fa7e041a8980c7916a769a35b943f6d62d67be9f953dbe2f"
Jan 21 07:23:07 crc kubenswrapper[4893]: E0121 07:23:07.581515 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a"
Jan 21 07:23:22 crc kubenswrapper[4893]: I0121 07:23:22.581595 4893 scope.go:117] "RemoveContainer" containerID="325a0207fb4c2ec1fa7e041a8980c7916a769a35b943f6d62d67be9f953dbe2f"
Jan 21 07:23:22 crc kubenswrapper[4893]: E0121 07:23:22.582959 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a"
Jan 21 07:23:32 crc kubenswrapper[4893]: I0121 07:23:32.388867 4893 scope.go:117] "RemoveContainer" containerID="9ab02ce2de50211d9da3ffb737894cbb3f83a2978c45af1362cdfc15e9a83fe9"
Jan 21 07:23:32 crc kubenswrapper[4893]: I0121 07:23:32.447432 4893 scope.go:117] "RemoveContainer" containerID="9f91de278662270db29f59cfbb23346d22c9e3004cfc529d77b9771ee747b909"
Jan 21 07:23:32 crc kubenswrapper[4893]: I0121 07:23:32.471149 4893 scope.go:117] "RemoveContainer" containerID="e3cbe8a444dd54c0fcc9c3ee21ff05b32fc594b99ce96416c8c8c3e7e7c82a61"
Jan 21 07:23:32 crc kubenswrapper[4893]: I0121 07:23:32.499447 4893 scope.go:117] "RemoveContainer" containerID="f4d654e94f24ab9bf45d470f0983ee4b3b63d8394960bf3c3b6ac0999fa9107d"
Jan 21 07:23:32 crc kubenswrapper[4893]: I0121 07:23:32.531903 4893 scope.go:117] "RemoveContainer" containerID="5128b041afc1911bb176e952146e5a9ed2f71e3f76e31ea29d8460a37e12eae5"
Jan 21 07:23:32 crc kubenswrapper[4893]: I0121 07:23:32.555257 4893 scope.go:117] "RemoveContainer" containerID="be6daeff1c75c5fb82a0f9b3bd2408b907b68938d3de911977a48fe1748fdb71"
Jan 21 07:23:32 crc kubenswrapper[4893]: I0121 07:23:32.578699 4893 scope.go:117] "RemoveContainer" containerID="c4fe38b68118c34de33d63ec432bfb97337a100a6921fafdf5f9636ca503213d"
Jan 21 07:23:32 crc kubenswrapper[4893]: I0121 07:23:32.602741 4893 scope.go:117] "RemoveContainer" containerID="df0b7f029214d1da2427ff39dae72339bfb4533b9b1d5f03b3c2403dfaf05ae1"
Jan 21 07:23:32 crc kubenswrapper[4893]: I0121 07:23:32.624534 4893 scope.go:117] "RemoveContainer" containerID="891656a23e552f4191c271c0656f4e5f186283f5a0a5cbf39b3a5d9a84777610"
Jan 21 07:23:32 crc kubenswrapper[4893]: I0121 07:23:32.646696 4893 scope.go:117] "RemoveContainer" containerID="6e632adf6d95b5e1192d3f90676fdfd289e0608633172e8fb3a756deba8c6a46"
Jan 21 07:23:32 crc kubenswrapper[4893]: I0121 07:23:32.674552 4893 scope.go:117] "RemoveContainer" containerID="134c50e743d45e05c1508cfc12adfa144d43dde145e51fe7555d549c1ecc51ac"
Jan 21 07:23:36 crc kubenswrapper[4893]: I0121 07:23:36.581457 4893 scope.go:117] "RemoveContainer" containerID="325a0207fb4c2ec1fa7e041a8980c7916a769a35b943f6d62d67be9f953dbe2f"
Jan 21 07:23:36 crc kubenswrapper[4893]: E0121 07:23:36.582323 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a"
Jan 21 07:23:49 crc kubenswrapper[4893]: I0121 07:23:49.585480 4893 scope.go:117] "RemoveContainer" containerID="325a0207fb4c2ec1fa7e041a8980c7916a769a35b943f6d62d67be9f953dbe2f"
Jan 21 07:23:49 crc kubenswrapper[4893]: E0121 07:23:49.587201 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a"
Jan 21 07:24:03 crc kubenswrapper[4893]: I0121 07:24:03.580915 4893 scope.go:117] "RemoveContainer" containerID="325a0207fb4c2ec1fa7e041a8980c7916a769a35b943f6d62d67be9f953dbe2f"
Jan 21 07:24:03 crc kubenswrapper[4893]: E0121 07:24:03.582372 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a"
Jan 21 07:24:18 crc kubenswrapper[4893]: I0121 07:24:18.581001 4893 scope.go:117] "RemoveContainer" containerID="325a0207fb4c2ec1fa7e041a8980c7916a769a35b943f6d62d67be9f953dbe2f"
Jan 21 07:24:18 crc kubenswrapper[4893]: E0121 07:24:18.581998 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a"
Jan 21 07:24:30 crc kubenswrapper[4893]: I0121 07:24:30.581337 4893 scope.go:117] "RemoveContainer" containerID="325a0207fb4c2ec1fa7e041a8980c7916a769a35b943f6d62d67be9f953dbe2f"
Jan 21 07:24:30 crc kubenswrapper[4893]: E0121 07:24:30.582518 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a"
Jan 21 07:24:32 crc kubenswrapper[4893]: I0121 07:24:32.948346 4893 scope.go:117] "RemoveContainer" containerID="0caf1583fe45f9478ec6a759fa630eecb7221966a84a7d22d76435c4e7d1fba1"
Jan 21 07:24:33 crc kubenswrapper[4893]: I0121 07:24:33.025238 4893 scope.go:117] "RemoveContainer" containerID="4753b38a6732fb0ac9424a2d22e75d9b2eaa0d229b73057f9c895b0a284de4ba"
Jan 21 07:24:33 crc kubenswrapper[4893]: I0121 07:24:33.065988 4893 scope.go:117] "RemoveContainer" containerID="7a92121d556490617443ac88942399ee3d25ee6302e354bf063b9e489ec4c6fa"
Jan 21 07:24:33 crc kubenswrapper[4893]: I0121 07:24:33.089523 4893 scope.go:117] "RemoveContainer" containerID="312c17d7007a17dc75ca6f883b3966c3666f60753c3b5e166435d88ffac4eb4d"
Jan 21 07:24:33 crc kubenswrapper[4893]: I0121 07:24:33.106954 4893 scope.go:117] "RemoveContainer" containerID="97ddb3d225615594c9a605add1044c2cb99f516ef5f324d2b45eb06de4cb1505"
Jan 21 07:24:33 crc kubenswrapper[4893]: I0121 07:24:33.130253 4893 scope.go:117] "RemoveContainer" containerID="f215f8d498420359c64aca99b2de79273f19ec7f8b4b742bb3bb89b42bf73cc0"
Jan 21 07:24:33 crc kubenswrapper[4893]: I0121 07:24:33.168217 4893 scope.go:117] "RemoveContainer" containerID="fc3bc80e161dc6978eb1141fe2bc45dcd6639ddf7a96d78648df93508e6f8b96"
Jan 21 07:24:33 crc kubenswrapper[4893]: I0121 07:24:33.211098 4893 scope.go:117] "RemoveContainer" containerID="6e04b4fbee9b3e703fb5de3ea31e81e68cd2ff62d9d541b6c2ee1f9927f27fdf"
Jan 21 07:24:33 crc kubenswrapper[4893]: I0121 07:24:33.228263 4893 scope.go:117] "RemoveContainer" containerID="661a1510b0e926ed9d58cfc9a102f808fac860f1c22613744ef458127e547ca8"
Jan 21 07:24:33 crc kubenswrapper[4893]: I0121 07:24:33.244933 4893 scope.go:117] "RemoveContainer" containerID="3e583830442cadc155ced0fc3acf18ce7212107354521b1fadfeae0a106231bc"
Jan 21 07:24:33 crc kubenswrapper[4893]: I0121 07:24:33.276720 4893 scope.go:117] "RemoveContainer" containerID="7733667fd7c8cd7e55ff97256302b135ad30aee18a32d7e538c9ea12108ae3b1"
Jan 21 07:24:33 crc kubenswrapper[4893]: I0121 07:24:33.317064 4893 scope.go:117] "RemoveContainer" containerID="42fb31863a351beff5a671105b6d00013cbdcea51ca1264d353f781aeb836f3b"
Jan 21 07:24:33 crc kubenswrapper[4893]: I0121 07:24:33.347456 4893 scope.go:117] "RemoveContainer" containerID="98f97492814f86a56e1b1582cbf2a87660019e0cb9f8aa9789265d7d90cf2c62"
Jan 21 07:24:33 crc kubenswrapper[4893]: I0121 07:24:33.365116 4893 scope.go:117] "RemoveContainer" containerID="0130a18b27166f791112ff30f58560d319f90531ebbeae689609a6794f09b9c7"
Jan 21 07:24:33 crc kubenswrapper[4893]: I0121 07:24:33.395260 4893 scope.go:117] "RemoveContainer" containerID="5cf697339569b6905b4af0edeaed0b9a8480bee6dfdf516cd425bdcc946ee1f5"
Jan 21 07:24:44 crc kubenswrapper[4893]: I0121 07:24:44.656984 4893 scope.go:117] "RemoveContainer" containerID="325a0207fb4c2ec1fa7e041a8980c7916a769a35b943f6d62d67be9f953dbe2f"
Jan 21 07:24:44 crc kubenswrapper[4893]: E0121 07:24:44.657790 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a"
Jan 21 07:24:57 crc kubenswrapper[4893]: I0121 07:24:57.581639 4893 scope.go:117] "RemoveContainer" containerID="325a0207fb4c2ec1fa7e041a8980c7916a769a35b943f6d62d67be9f953dbe2f"
Jan 21 07:24:57 crc kubenswrapper[4893]: E0121 07:24:57.582618 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a"
Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.040005 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t6rbt"]
Jan 21 07:24:59 crc kubenswrapper[4893]: E0121 07:24:59.040552 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89f70f50-3d66-4917-bfe2-1084a55e4eb9" containerName="rabbitmq"
Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.040574 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="89f70f50-3d66-4917-bfe2-1084a55e4eb9" containerName="rabbitmq"
Jan 21 07:24:59 crc kubenswrapper[4893]: E0121 07:24:59.040605 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="520610a0-97e8-45ed-8020-952d9d4501b1" containerName="memcached"
Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.040620 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="520610a0-97e8-45ed-8020-952d9d4501b1" containerName="memcached"
Jan 21 07:24:59 crc kubenswrapper[4893]: E0121 07:24:59.040636 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="container-auditor"
Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.040650 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="container-auditor"
Jan 21 07:24:59 crc kubenswrapper[4893]: E0121 07:24:59.040696 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac0b6d79-4e8e-499d-afef-53b42511af46" containerName="ovn-northd"
Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.040709 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac0b6d79-4e8e-499d-afef-53b42511af46" containerName="ovn-northd"
Jan 21 07:24:59 crc kubenswrapper[4893]: E0121 07:24:59.040729 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04e84192-2873-4f45-855d-d755d99e7946" containerName="nova-cell0-conductor-conductor"
Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.040742 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="04e84192-2873-4f45-855d-d755d99e7946" containerName="nova-cell0-conductor-conductor"
Jan 21 07:24:59 crc kubenswrapper[4893]: E0121 07:24:59.040765 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41fc2d9b-17e4-42b0-bcee-065a237b513c" containerName="cinder-api-log"
Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.040777 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="41fc2d9b-17e4-42b0-bcee-065a237b513c" containerName="cinder-api-log"
Jan 21 07:24:59 crc kubenswrapper[4893]: E0121 07:24:59.040791 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78d5f974-5570-4407-8dbe-7471ae98fd50" containerName="ovs-vswitchd"
Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.040802 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="78d5f974-5570-4407-8dbe-7471ae98fd50" containerName="ovs-vswitchd"
Jan 21 07:24:59 crc kubenswrapper[4893]: E0121 07:24:59.040828 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f891af55-ec46-4261-9f5e-01a1c181f194" containerName="sg-core"
Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.040840 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="f891af55-ec46-4261-9f5e-01a1c181f194" containerName="sg-core"
Jan 21 07:24:59 crc kubenswrapper[4893]: E0121 07:24:59.040858 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f891af55-ec46-4261-9f5e-01a1c181f194" containerName="ceilometer-central-agent"
Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.040869 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="f891af55-ec46-4261-9f5e-01a1c181f194" containerName="ceilometer-central-agent"
Jan 21 07:24:59 crc kubenswrapper[4893]: E0121 07:24:59.040894 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80680178-a1d2-4135-8949-881dc7ac92ea" containerName="ovn-controller"
Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.040907 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="80680178-a1d2-4135-8949-881dc7ac92ea" containerName="ovn-controller"
Jan 21 07:24:59 crc kubenswrapper[4893]: E0121 07:24:59.040927 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="object-replicator"
Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.040941 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="object-replicator"
Jan 21 07:24:59 crc kubenswrapper[4893]: E0121 07:24:59.040973 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="container-updater"
Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.040985 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="container-updater"
Jan 21 07:24:59 crc kubenswrapper[4893]: E0121 07:24:59.041002 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="swift-recon-cron"
Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.041014 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="swift-recon-cron"
Jan 21 07:24:59 crc kubenswrapper[4893]: E0121 07:24:59.041034 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dd69159-4b4b-4b13-aaa2-7b9edf7c468a" containerName="kube-state-metrics"
Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.041046 4893 state_mem.go:107] "Deleted CPUSet assignment"
podUID="1dd69159-4b4b-4b13-aaa2-7b9edf7c468a" containerName="kube-state-metrics" Jan 21 07:24:59 crc kubenswrapper[4893]: E0121 07:24:59.041061 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdb40d40-7926-424a-810d-3b6f77e1022f" containerName="rabbitmq" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.041074 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdb40d40-7926-424a-810d-3b6f77e1022f" containerName="rabbitmq" Jan 21 07:24:59 crc kubenswrapper[4893]: E0121 07:24:59.041095 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6" containerName="nova-metadata-metadata" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.041107 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6" containerName="nova-metadata-metadata" Jan 21 07:24:59 crc kubenswrapper[4893]: E0121 07:24:59.041130 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45545422-414a-433a-9de9-fbfb6e03add3" containerName="glance-httpd" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.041160 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="45545422-414a-433a-9de9-fbfb6e03add3" containerName="glance-httpd" Jan 21 07:24:59 crc kubenswrapper[4893]: E0121 07:24:59.041175 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="740cac4e-ecd7-4752-9d29-4adb1a14577b" containerName="nova-api-log" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.041188 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="740cac4e-ecd7-4752-9d29-4adb1a14577b" containerName="nova-api-log" Jan 21 07:24:59 crc kubenswrapper[4893]: E0121 07:24:59.041215 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78d5f974-5570-4407-8dbe-7471ae98fd50" containerName="ovsdb-server-init" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.041227 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="78d5f974-5570-4407-8dbe-7471ae98fd50" containerName="ovsdb-server-init" Jan 21 07:24:59 crc kubenswrapper[4893]: E0121 07:24:59.041247 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac0b6d79-4e8e-499d-afef-53b42511af46" containerName="openstack-network-exporter" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.041259 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac0b6d79-4e8e-499d-afef-53b42511af46" containerName="openstack-network-exporter" Jan 21 07:24:59 crc kubenswrapper[4893]: E0121 07:24:59.041277 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="account-reaper" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.041289 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="account-reaper" Jan 21 07:24:59 crc kubenswrapper[4893]: E0121 07:24:59.041329 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f891af55-ec46-4261-9f5e-01a1c181f194" containerName="proxy-httpd" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.041341 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="f891af55-ec46-4261-9f5e-01a1c181f194" containerName="proxy-httpd" Jan 21 07:24:59 crc kubenswrapper[4893]: E0121 07:24:59.041362 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="account-replicator" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.041374 4893 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="account-replicator" Jan 21 07:24:59 crc kubenswrapper[4893]: E0121 07:24:59.041395 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d547505a-34d0-4645-9690-74df58728a46" containerName="placement-api" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.041421 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="d547505a-34d0-4645-9690-74df58728a46" containerName="placement-api" Jan 21 07:24:59 crc kubenswrapper[4893]: E0121 07:24:59.041440 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45545422-414a-433a-9de9-fbfb6e03add3" containerName="glance-log" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.041451 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="45545422-414a-433a-9de9-fbfb6e03add3" containerName="glance-log" Jan 21 07:24:59 crc kubenswrapper[4893]: E0121 07:24:59.041468 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c8d3670-41c0-4649-8a2f-38b090638cac" containerName="nova-scheduler-scheduler" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.041479 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c8d3670-41c0-4649-8a2f-38b090638cac" containerName="nova-scheduler-scheduler" Jan 21 07:24:59 crc kubenswrapper[4893]: E0121 07:24:59.041493 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="container-replicator" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.041504 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="container-replicator" Jan 21 07:24:59 crc kubenswrapper[4893]: E0121 07:24:59.041522 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7722b5d-ba92-4332-93c7-bc3aa9bfdb33" containerName="nova-cell1-conductor-conductor" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.041535 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7722b5d-ba92-4332-93c7-bc3aa9bfdb33" containerName="nova-cell1-conductor-conductor" Jan 21 07:24:59 crc kubenswrapper[4893]: E0121 07:24:59.041560 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e544fa30-133c-4728-a8c5-99084bcb4367" containerName="extract-utilities" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.041571 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="e544fa30-133c-4728-a8c5-99084bcb4367" containerName="extract-utilities" Jan 21 07:24:59 crc kubenswrapper[4893]: E0121 07:24:59.041589 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cc7c949-b993-484e-8e07-778a72743679" containerName="mysql-bootstrap" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.041601 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cc7c949-b993-484e-8e07-778a72743679" containerName="mysql-bootstrap" Jan 21 07:24:59 crc kubenswrapper[4893]: E0121 07:24:59.041621 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b445f12-f3bf-41d9-91f9-56def2b2694b" containerName="barbican-api-log" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.041632 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b445f12-f3bf-41d9-91f9-56def2b2694b" containerName="barbican-api-log" Jan 21 07:24:59 crc kubenswrapper[4893]: E0121 07:24:59.041648 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f891af55-ec46-4261-9f5e-01a1c181f194" containerName="ceilometer-notification-agent" 
Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.041660 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="f891af55-ec46-4261-9f5e-01a1c181f194" containerName="ceilometer-notification-agent" Jan 21 07:24:59 crc kubenswrapper[4893]: E0121 07:24:59.041797 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b37865c-22cd-4288-b47b-ef9ef1f33646" containerName="galera" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.041814 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b37865c-22cd-4288-b47b-ef9ef1f33646" containerName="galera" Jan 21 07:24:59 crc kubenswrapper[4893]: E0121 07:24:59.041837 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b37865c-22cd-4288-b47b-ef9ef1f33646" containerName="mysql-bootstrap" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.041851 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b37865c-22cd-4288-b47b-ef9ef1f33646" containerName="mysql-bootstrap" Jan 21 07:24:59 crc kubenswrapper[4893]: E0121 07:24:59.041868 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78d5f974-5570-4407-8dbe-7471ae98fd50" containerName="ovsdb-server" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.041879 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="78d5f974-5570-4407-8dbe-7471ae98fd50" containerName="ovsdb-server" Jan 21 07:24:59 crc kubenswrapper[4893]: E0121 07:24:59.041900 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="object-auditor" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.041912 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="object-auditor" Jan 21 07:24:59 crc kubenswrapper[4893]: E0121 07:24:59.041930 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="account-auditor" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.041942 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="account-auditor" Jan 21 07:24:59 crc kubenswrapper[4893]: E0121 07:24:59.041965 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="rsync" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.041977 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="rsync" Jan 21 07:24:59 crc kubenswrapper[4893]: E0121 07:24:59.042002 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2caca0fd-0f3f-4725-a196-04463abed671" containerName="proxy-httpd" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.042013 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="2caca0fd-0f3f-4725-a196-04463abed671" containerName="proxy-httpd" Jan 21 07:24:59 crc kubenswrapper[4893]: E0121 07:24:59.042028 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="container-server" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.042039 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="container-server" Jan 21 07:24:59 crc kubenswrapper[4893]: E0121 07:24:59.042057 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63916786-c676-4695-84a1-3d3be685de16" containerName="glance-log" Jan 21 07:24:59 crc 
kubenswrapper[4893]: I0121 07:24:59.042071 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="63916786-c676-4695-84a1-3d3be685de16" containerName="glance-log" Jan 21 07:24:59 crc kubenswrapper[4893]: E0121 07:24:59.042089 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b445f12-f3bf-41d9-91f9-56def2b2694b" containerName="barbican-api" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.042101 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b445f12-f3bf-41d9-91f9-56def2b2694b" containerName="barbican-api" Jan 21 07:24:59 crc kubenswrapper[4893]: E0121 07:24:59.042116 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63916786-c676-4695-84a1-3d3be685de16" containerName="glance-httpd" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.042128 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="63916786-c676-4695-84a1-3d3be685de16" containerName="glance-httpd" Jan 21 07:24:59 crc kubenswrapper[4893]: E0121 07:24:59.042170 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e544fa30-133c-4728-a8c5-99084bcb4367" containerName="registry-server" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.042183 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="e544fa30-133c-4728-a8c5-99084bcb4367" containerName="registry-server" Jan 21 07:24:59 crc kubenswrapper[4893]: E0121 07:24:59.042195 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41fc2d9b-17e4-42b0-bcee-065a237b513c" containerName="cinder-api" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.042206 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="41fc2d9b-17e4-42b0-bcee-065a237b513c" containerName="cinder-api" Jan 21 07:24:59 crc kubenswrapper[4893]: E0121 07:24:59.042220 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89f70f50-3d66-4917-bfe2-1084a55e4eb9" containerName="setup-container" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.042231 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="89f70f50-3d66-4917-bfe2-1084a55e4eb9" containerName="setup-container" Jan 21 07:24:59 crc kubenswrapper[4893]: E0121 07:24:59.042247 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdb40d40-7926-424a-810d-3b6f77e1022f" containerName="setup-container" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.042287 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdb40d40-7926-424a-810d-3b6f77e1022f" containerName="setup-container" Jan 21 07:24:59 crc kubenswrapper[4893]: E0121 07:24:59.042331 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cc7c949-b993-484e-8e07-778a72743679" containerName="galera" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.042344 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cc7c949-b993-484e-8e07-778a72743679" containerName="galera" Jan 21 07:24:59 crc kubenswrapper[4893]: E0121 07:24:59.042362 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="object-server" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.042375 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="object-server" Jan 21 07:24:59 crc kubenswrapper[4893]: E0121 07:24:59.042394 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2caca0fd-0f3f-4725-a196-04463abed671" containerName="proxy-server" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 
07:24:59.042407 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="2caca0fd-0f3f-4725-a196-04463abed671" containerName="proxy-server" Jan 21 07:24:59 crc kubenswrapper[4893]: E0121 07:24:59.042426 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="740cac4e-ecd7-4752-9d29-4adb1a14577b" containerName="nova-api-api" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.042437 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="740cac4e-ecd7-4752-9d29-4adb1a14577b" containerName="nova-api-api" Jan 21 07:24:59 crc kubenswrapper[4893]: E0121 07:24:59.042454 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d547505a-34d0-4645-9690-74df58728a46" containerName="placement-log" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.042466 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="d547505a-34d0-4645-9690-74df58728a46" containerName="placement-log" Jan 21 07:24:59 crc kubenswrapper[4893]: E0121 07:24:59.042485 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="object-expirer" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.042496 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="object-expirer" Jan 21 07:24:59 crc kubenswrapper[4893]: E0121 07:24:59.042509 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6" containerName="nova-metadata-log" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.042521 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6" containerName="nova-metadata-log" Jan 21 07:24:59 crc kubenswrapper[4893]: E0121 07:24:59.042537 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cfa1d66-684f-43de-b751-1da2399d48ee" containerName="keystone-api" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.042549 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cfa1d66-684f-43de-b751-1da2399d48ee" containerName="keystone-api" Jan 21 07:24:59 crc kubenswrapper[4893]: E0121 07:24:59.042564 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="account-server" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.042576 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="account-server" Jan 21 07:24:59 crc kubenswrapper[4893]: E0121 07:24:59.042592 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="object-updater" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.042604 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="object-updater" Jan 21 07:24:59 crc kubenswrapper[4893]: E0121 07:24:59.042622 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e544fa30-133c-4728-a8c5-99084bcb4367" containerName="extract-content" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.042634 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="e544fa30-133c-4728-a8c5-99084bcb4367" containerName="extract-content" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.042927 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac0b6d79-4e8e-499d-afef-53b42511af46" containerName="ovn-northd" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 
07:24:59.042945 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="520610a0-97e8-45ed-8020-952d9d4501b1" containerName="memcached" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.042967 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="account-server" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.042986 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="2caca0fd-0f3f-4725-a196-04463abed671" containerName="proxy-httpd" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.043009 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="78d5f974-5570-4407-8dbe-7471ae98fd50" containerName="ovsdb-server" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.043023 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="container-server" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.043037 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="78d5f974-5570-4407-8dbe-7471ae98fd50" containerName="ovs-vswitchd" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.043052 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="80680178-a1d2-4135-8949-881dc7ac92ea" containerName="ovn-controller" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.043074 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dd69159-4b4b-4b13-aaa2-7b9edf7c468a" containerName="kube-state-metrics" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.043092 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="f891af55-ec46-4261-9f5e-01a1c181f194" containerName="proxy-httpd" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.043114 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac0b6d79-4e8e-499d-afef-53b42511af46" containerName="openstack-network-exporter" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.043126 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c8d3670-41c0-4649-8a2f-38b090638cac" containerName="nova-scheduler-scheduler" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.043146 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdb40d40-7926-424a-810d-3b6f77e1022f" containerName="rabbitmq" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.043166 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="04e84192-2873-4f45-855d-d755d99e7946" containerName="nova-cell0-conductor-conductor" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.043189 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="account-reaper" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.043203 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7722b5d-ba92-4332-93c7-bc3aa9bfdb33" containerName="nova-cell1-conductor-conductor" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.043220 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="45545422-414a-433a-9de9-fbfb6e03add3" containerName="glance-log" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.043242 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cc7c949-b993-484e-8e07-778a72743679" containerName="galera" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.043260 4893 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="rsync" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.043281 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="object-expirer" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.043294 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6" containerName="nova-metadata-log" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.043306 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="f891af55-ec46-4261-9f5e-01a1c181f194" containerName="ceilometer-notification-agent" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.043322 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="account-replicator" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.043378 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cfa1d66-684f-43de-b751-1da2399d48ee" containerName="keystone-api" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.043400 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="f891af55-ec46-4261-9f5e-01a1c181f194" containerName="sg-core" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.043424 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="63916786-c676-4695-84a1-3d3be685de16" containerName="glance-httpd" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.043443 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="740cac4e-ecd7-4752-9d29-4adb1a14577b" containerName="nova-api-log" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.043459 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="f891af55-ec46-4261-9f5e-01a1c181f194" containerName="ceilometer-central-agent" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.043474 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="object-auditor" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.043489 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="41fc2d9b-17e4-42b0-bcee-065a237b513c" containerName="cinder-api-log" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.043508 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="container-updater" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.043527 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="account-auditor" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.043543 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="container-auditor" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.043564 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="89f70f50-3d66-4917-bfe2-1084a55e4eb9" containerName="rabbitmq" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.043582 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="swift-recon-cron" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.043601 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="d547505a-34d0-4645-9690-74df58728a46" containerName="placement-api" Jan 21 07:24:59 crc 
kubenswrapper[4893]: I0121 07:24:59.043619 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b445f12-f3bf-41d9-91f9-56def2b2694b" containerName="barbican-api-log" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.043639 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="45545422-414a-433a-9de9-fbfb6e03add3" containerName="glance-httpd" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.043651 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="object-updater" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.043694 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="container-replicator" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.043711 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b37865c-22cd-4288-b47b-ef9ef1f33646" containerName="galera" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.043730 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="d547505a-34d0-4645-9690-74df58728a46" containerName="placement-log" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.043766 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="object-server" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.043781 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9a8ed76-43f2-4997-9bd6-83f94fb3b7b6" containerName="nova-metadata-metadata" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.043800 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="41fc2d9b-17e4-42b0-bcee-065a237b513c" containerName="cinder-api" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.043833 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="63916786-c676-4695-84a1-3d3be685de16" containerName="glance-log" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.043853 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="2caca0fd-0f3f-4725-a196-04463abed671" containerName="proxy-server" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.043870 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b445f12-f3bf-41d9-91f9-56def2b2694b" containerName="barbican-api" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.043890 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="740cac4e-ecd7-4752-9d29-4adb1a14577b" containerName="nova-api-api" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.043909 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="e544fa30-133c-4728-a8c5-99084bcb4367" containerName="registry-server" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.043927 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dc34290-d23e-4d76-a6ea-dd2f4b1d9a0b" containerName="object-replicator" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.045908 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t6rbt" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.062069 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t6rbt"] Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.111977 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81119e5c-67b3-4aa2-a05b-4ee3c2949588-utilities\") pod \"redhat-operators-t6rbt\" (UID: \"81119e5c-67b3-4aa2-a05b-4ee3c2949588\") " pod="openshift-marketplace/redhat-operators-t6rbt" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.112311 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfkww\" (UniqueName: \"kubernetes.io/projected/81119e5c-67b3-4aa2-a05b-4ee3c2949588-kube-api-access-dfkww\") pod \"redhat-operators-t6rbt\" (UID: \"81119e5c-67b3-4aa2-a05b-4ee3c2949588\") " pod="openshift-marketplace/redhat-operators-t6rbt" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.112369 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81119e5c-67b3-4aa2-a05b-4ee3c2949588-catalog-content\") pod \"redhat-operators-t6rbt\" (UID: \"81119e5c-67b3-4aa2-a05b-4ee3c2949588\") " pod="openshift-marketplace/redhat-operators-t6rbt" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.213277 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81119e5c-67b3-4aa2-a05b-4ee3c2949588-catalog-content\") pod \"redhat-operators-t6rbt\" (UID: \"81119e5c-67b3-4aa2-a05b-4ee3c2949588\") " pod="openshift-marketplace/redhat-operators-t6rbt" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.213385 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81119e5c-67b3-4aa2-a05b-4ee3c2949588-utilities\") pod \"redhat-operators-t6rbt\" (UID: \"81119e5c-67b3-4aa2-a05b-4ee3c2949588\") " pod="openshift-marketplace/redhat-operators-t6rbt" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.213412 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfkww\" (UniqueName: \"kubernetes.io/projected/81119e5c-67b3-4aa2-a05b-4ee3c2949588-kube-api-access-dfkww\") pod \"redhat-operators-t6rbt\" (UID: \"81119e5c-67b3-4aa2-a05b-4ee3c2949588\") " pod="openshift-marketplace/redhat-operators-t6rbt" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.214064 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81119e5c-67b3-4aa2-a05b-4ee3c2949588-utilities\") pod \"redhat-operators-t6rbt\" (UID: \"81119e5c-67b3-4aa2-a05b-4ee3c2949588\") " pod="openshift-marketplace/redhat-operators-t6rbt" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.214264 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81119e5c-67b3-4aa2-a05b-4ee3c2949588-catalog-content\") pod \"redhat-operators-t6rbt\" (UID: \"81119e5c-67b3-4aa2-a05b-4ee3c2949588\") " pod="openshift-marketplace/redhat-operators-t6rbt" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.234881 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-dfkww\" (UniqueName: \"kubernetes.io/projected/81119e5c-67b3-4aa2-a05b-4ee3c2949588-kube-api-access-dfkww\") pod \"redhat-operators-t6rbt\" (UID: \"81119e5c-67b3-4aa2-a05b-4ee3c2949588\") " pod="openshift-marketplace/redhat-operators-t6rbt" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.411893 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t6rbt" Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.907849 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t6rbt"] Jan 21 07:24:59 crc kubenswrapper[4893]: I0121 07:24:59.953833 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t6rbt" event={"ID":"81119e5c-67b3-4aa2-a05b-4ee3c2949588","Type":"ContainerStarted","Data":"d2725140fafae28203568999350a01be793eff3b6bd4b0bd81e80d59d9985a85"} Jan 21 07:25:00 crc kubenswrapper[4893]: I0121 07:25:00.982133 4893 generic.go:334] "Generic (PLEG): container finished" podID="81119e5c-67b3-4aa2-a05b-4ee3c2949588" containerID="c99195787085cf284641d5c321283ec1611abfc602d1f97eaf57095dbdbbd792" exitCode=0 Jan 21 07:25:00 crc kubenswrapper[4893]: I0121 07:25:00.982257 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t6rbt" event={"ID":"81119e5c-67b3-4aa2-a05b-4ee3c2949588","Type":"ContainerDied","Data":"c99195787085cf284641d5c321283ec1611abfc602d1f97eaf57095dbdbbd792"} Jan 21 07:25:03 crc kubenswrapper[4893]: I0121 07:25:03.008329 4893 generic.go:334] "Generic (PLEG): container finished" podID="81119e5c-67b3-4aa2-a05b-4ee3c2949588" containerID="c61c1c3a8d44bf10e10dc7b495b1fd24e60ad1841beffab85b547ee46d2f708a" exitCode=0 Jan 21 07:25:03 crc kubenswrapper[4893]: I0121 07:25:03.008447 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t6rbt" event={"ID":"81119e5c-67b3-4aa2-a05b-4ee3c2949588","Type":"ContainerDied","Data":"c61c1c3a8d44bf10e10dc7b495b1fd24e60ad1841beffab85b547ee46d2f708a"} Jan 21 07:25:04 crc kubenswrapper[4893]: I0121 07:25:04.021511 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t6rbt" event={"ID":"81119e5c-67b3-4aa2-a05b-4ee3c2949588","Type":"ContainerStarted","Data":"cfb917a1cf01efe9486986647d24fc449d3cc5d6ce25a9bd1bbfd0c8cd44bdb2"} Jan 21 07:25:04 crc kubenswrapper[4893]: I0121 07:25:04.045958 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t6rbt" podStartSLOduration=2.564275396 podStartE2EDuration="5.04592595s" podCreationTimestamp="2026-01-21 07:24:59 +0000 UTC" firstStartedPulling="2026-01-21 07:25:00.983851144 +0000 UTC m=+1842.214197056" lastFinishedPulling="2026-01-21 07:25:03.465501668 +0000 UTC m=+1844.695847610" observedRunningTime="2026-01-21 07:25:04.040931116 +0000 UTC m=+1845.271277028" watchObservedRunningTime="2026-01-21 07:25:04.04592595 +0000 UTC m=+1845.276271872" Jan 21 07:25:09 crc kubenswrapper[4893]: I0121 07:25:09.412685 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t6rbt" Jan 21 07:25:09 crc kubenswrapper[4893]: I0121 07:25:09.413050 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t6rbt" Jan 21 07:25:10 crc kubenswrapper[4893]: I0121 07:25:10.458025 4893 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-t6rbt" podUID="81119e5c-67b3-4aa2-a05b-4ee3c2949588" containerName="registry-server" probeResult="failure" output=< Jan 21 07:25:10 crc kubenswrapper[4893]: timeout: failed to connect service ":50051" within 1s Jan 21 07:25:10 crc kubenswrapper[4893]: > Jan 21 07:25:10 crc kubenswrapper[4893]: I0121 07:25:10.581372 4893 scope.go:117] "RemoveContainer" containerID="325a0207fb4c2ec1fa7e041a8980c7916a769a35b943f6d62d67be9f953dbe2f" Jan 21 07:25:10 crc kubenswrapper[4893]: E0121 07:25:10.581821 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 07:25:19 crc kubenswrapper[4893]: I0121 07:25:19.480553 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t6rbt" Jan 21 07:25:19 crc kubenswrapper[4893]: I0121 07:25:19.566159 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t6rbt" Jan 21 07:25:20 crc kubenswrapper[4893]: I0121 07:25:20.947917 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-t29g6"] Jan 21 07:25:20 crc kubenswrapper[4893]: I0121 07:25:20.950098 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t29g6" Jan 21 07:25:20 crc kubenswrapper[4893]: I0121 07:25:20.980879 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t29g6"] Jan 21 07:25:21 crc kubenswrapper[4893]: I0121 07:25:21.047690 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db03fe3c-2ba6-451e-a2a0-0e6c9be9cda8-utilities\") pod \"community-operators-t29g6\" (UID: \"db03fe3c-2ba6-451e-a2a0-0e6c9be9cda8\") " pod="openshift-marketplace/community-operators-t29g6" Jan 21 07:25:21 crc kubenswrapper[4893]: I0121 07:25:21.047796 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf4kh\" (UniqueName: \"kubernetes.io/projected/db03fe3c-2ba6-451e-a2a0-0e6c9be9cda8-kube-api-access-xf4kh\") pod \"community-operators-t29g6\" (UID: \"db03fe3c-2ba6-451e-a2a0-0e6c9be9cda8\") " pod="openshift-marketplace/community-operators-t29g6" Jan 21 07:25:21 crc kubenswrapper[4893]: I0121 07:25:21.047875 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db03fe3c-2ba6-451e-a2a0-0e6c9be9cda8-catalog-content\") pod \"community-operators-t29g6\" (UID: \"db03fe3c-2ba6-451e-a2a0-0e6c9be9cda8\") " pod="openshift-marketplace/community-operators-t29g6" Jan 21 07:25:21 crc kubenswrapper[4893]: I0121 07:25:21.148956 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db03fe3c-2ba6-451e-a2a0-0e6c9be9cda8-utilities\") pod \"community-operators-t29g6\" (UID: \"db03fe3c-2ba6-451e-a2a0-0e6c9be9cda8\") " pod="openshift-marketplace/community-operators-t29g6" Jan 21 07:25:21 crc 
kubenswrapper[4893]: I0121 07:25:21.149030 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xf4kh\" (UniqueName: \"kubernetes.io/projected/db03fe3c-2ba6-451e-a2a0-0e6c9be9cda8-kube-api-access-xf4kh\") pod \"community-operators-t29g6\" (UID: \"db03fe3c-2ba6-451e-a2a0-0e6c9be9cda8\") " pod="openshift-marketplace/community-operators-t29g6" Jan 21 07:25:21 crc kubenswrapper[4893]: I0121 07:25:21.149093 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db03fe3c-2ba6-451e-a2a0-0e6c9be9cda8-catalog-content\") pod \"community-operators-t29g6\" (UID: \"db03fe3c-2ba6-451e-a2a0-0e6c9be9cda8\") " pod="openshift-marketplace/community-operators-t29g6" Jan 21 07:25:21 crc kubenswrapper[4893]: I0121 07:25:21.149574 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db03fe3c-2ba6-451e-a2a0-0e6c9be9cda8-catalog-content\") pod \"community-operators-t29g6\" (UID: \"db03fe3c-2ba6-451e-a2a0-0e6c9be9cda8\") " pod="openshift-marketplace/community-operators-t29g6" Jan 21 07:25:21 crc kubenswrapper[4893]: I0121 07:25:21.149876 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db03fe3c-2ba6-451e-a2a0-0e6c9be9cda8-utilities\") pod \"community-operators-t29g6\" (UID: \"db03fe3c-2ba6-451e-a2a0-0e6c9be9cda8\") " pod="openshift-marketplace/community-operators-t29g6" Jan 21 07:25:21 crc kubenswrapper[4893]: I0121 07:25:21.176217 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xf4kh\" (UniqueName: \"kubernetes.io/projected/db03fe3c-2ba6-451e-a2a0-0e6c9be9cda8-kube-api-access-xf4kh\") pod \"community-operators-t29g6\" (UID: \"db03fe3c-2ba6-451e-a2a0-0e6c9be9cda8\") " pod="openshift-marketplace/community-operators-t29g6" Jan 21 07:25:21 crc kubenswrapper[4893]: I0121 07:25:21.305353 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-t29g6" Jan 21 07:25:21 crc kubenswrapper[4893]: I0121 07:25:21.536070 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t6rbt"] Jan 21 07:25:21 crc kubenswrapper[4893]: I0121 07:25:21.536282 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t6rbt" podUID="81119e5c-67b3-4aa2-a05b-4ee3c2949588" containerName="registry-server" containerID="cri-o://cfb917a1cf01efe9486986647d24fc449d3cc5d6ce25a9bd1bbfd0c8cd44bdb2" gracePeriod=2 Jan 21 07:25:21 crc kubenswrapper[4893]: I0121 07:25:21.548244 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t29g6"] Jan 21 07:25:22 crc kubenswrapper[4893]: I0121 07:25:22.181979 4893 generic.go:334] "Generic (PLEG): container finished" podID="db03fe3c-2ba6-451e-a2a0-0e6c9be9cda8" containerID="57ccc2c7ea4dc4f11b24f82daede1f6656cc3f7ae0a826170290748d7c53e457" exitCode=0 Jan 21 07:25:22 crc kubenswrapper[4893]: I0121 07:25:22.182051 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t29g6" event={"ID":"db03fe3c-2ba6-451e-a2a0-0e6c9be9cda8","Type":"ContainerDied","Data":"57ccc2c7ea4dc4f11b24f82daede1f6656cc3f7ae0a826170290748d7c53e457"} Jan 21 07:25:22 crc kubenswrapper[4893]: I0121 07:25:22.182079 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t29g6" event={"ID":"db03fe3c-2ba6-451e-a2a0-0e6c9be9cda8","Type":"ContainerStarted","Data":"e10567387f2745e6f78ce0b60ed6f32938410edaa9c695d0ef286cf3c769b39d"} Jan 21 07:25:22 crc kubenswrapper[4893]: I0121 07:25:22.186133 4893 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 07:25:22 crc kubenswrapper[4893]: I0121 07:25:22.212387 4893 generic.go:334] "Generic (PLEG): container finished" podID="81119e5c-67b3-4aa2-a05b-4ee3c2949588" containerID="cfb917a1cf01efe9486986647d24fc449d3cc5d6ce25a9bd1bbfd0c8cd44bdb2" exitCode=0 Jan 21 07:25:22 crc kubenswrapper[4893]: I0121 07:25:22.212427 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t6rbt" event={"ID":"81119e5c-67b3-4aa2-a05b-4ee3c2949588","Type":"ContainerDied","Data":"cfb917a1cf01efe9486986647d24fc449d3cc5d6ce25a9bd1bbfd0c8cd44bdb2"} Jan 21 07:25:22 crc kubenswrapper[4893]: I0121 07:25:22.482763 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t6rbt" Jan 21 07:25:22 crc kubenswrapper[4893]: I0121 07:25:22.613709 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfkww\" (UniqueName: \"kubernetes.io/projected/81119e5c-67b3-4aa2-a05b-4ee3c2949588-kube-api-access-dfkww\") pod \"81119e5c-67b3-4aa2-a05b-4ee3c2949588\" (UID: \"81119e5c-67b3-4aa2-a05b-4ee3c2949588\") " Jan 21 07:25:22 crc kubenswrapper[4893]: I0121 07:25:22.613769 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81119e5c-67b3-4aa2-a05b-4ee3c2949588-catalog-content\") pod \"81119e5c-67b3-4aa2-a05b-4ee3c2949588\" (UID: \"81119e5c-67b3-4aa2-a05b-4ee3c2949588\") " Jan 21 07:25:22 crc kubenswrapper[4893]: I0121 07:25:22.613837 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81119e5c-67b3-4aa2-a05b-4ee3c2949588-utilities\") pod \"81119e5c-67b3-4aa2-a05b-4ee3c2949588\" (UID: \"81119e5c-67b3-4aa2-a05b-4ee3c2949588\") " Jan 21 07:25:22 crc kubenswrapper[4893]: I0121 07:25:22.614654 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81119e5c-67b3-4aa2-a05b-4ee3c2949588-utilities" (OuterVolumeSpecName: "utilities") pod "81119e5c-67b3-4aa2-a05b-4ee3c2949588" (UID: "81119e5c-67b3-4aa2-a05b-4ee3c2949588"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:25:22 crc kubenswrapper[4893]: I0121 07:25:22.624358 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81119e5c-67b3-4aa2-a05b-4ee3c2949588-kube-api-access-dfkww" (OuterVolumeSpecName: "kube-api-access-dfkww") pod "81119e5c-67b3-4aa2-a05b-4ee3c2949588" (UID: "81119e5c-67b3-4aa2-a05b-4ee3c2949588"). InnerVolumeSpecName "kube-api-access-dfkww". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:25:22 crc kubenswrapper[4893]: I0121 07:25:22.716085 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81119e5c-67b3-4aa2-a05b-4ee3c2949588-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 07:25:22 crc kubenswrapper[4893]: I0121 07:25:22.716154 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfkww\" (UniqueName: \"kubernetes.io/projected/81119e5c-67b3-4aa2-a05b-4ee3c2949588-kube-api-access-dfkww\") on node \"crc\" DevicePath \"\"" Jan 21 07:25:22 crc kubenswrapper[4893]: I0121 07:25:22.773929 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81119e5c-67b3-4aa2-a05b-4ee3c2949588-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "81119e5c-67b3-4aa2-a05b-4ee3c2949588" (UID: "81119e5c-67b3-4aa2-a05b-4ee3c2949588"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:25:22 crc kubenswrapper[4893]: I0121 07:25:22.818040 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81119e5c-67b3-4aa2-a05b-4ee3c2949588-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 07:25:23 crc kubenswrapper[4893]: I0121 07:25:23.223785 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t6rbt" event={"ID":"81119e5c-67b3-4aa2-a05b-4ee3c2949588","Type":"ContainerDied","Data":"d2725140fafae28203568999350a01be793eff3b6bd4b0bd81e80d59d9985a85"} Jan 21 07:25:23 crc kubenswrapper[4893]: I0121 07:25:23.223837 4893 scope.go:117] "RemoveContainer" containerID="cfb917a1cf01efe9486986647d24fc449d3cc5d6ce25a9bd1bbfd0c8cd44bdb2" Jan 21 07:25:23 crc kubenswrapper[4893]: I0121 07:25:23.223941 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t6rbt" Jan 21 07:25:23 crc kubenswrapper[4893]: I0121 07:25:23.265918 4893 scope.go:117] "RemoveContainer" containerID="c61c1c3a8d44bf10e10dc7b495b1fd24e60ad1841beffab85b547ee46d2f708a" Jan 21 07:25:23 crc kubenswrapper[4893]: I0121 07:25:23.271791 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t6rbt"] Jan 21 07:25:23 crc kubenswrapper[4893]: I0121 07:25:23.285378 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t6rbt"] Jan 21 07:25:23 crc kubenswrapper[4893]: I0121 07:25:23.303005 4893 scope.go:117] "RemoveContainer" containerID="c99195787085cf284641d5c321283ec1611abfc602d1f97eaf57095dbdbbd792" Jan 21 07:25:23 crc kubenswrapper[4893]: I0121 07:25:23.595036 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81119e5c-67b3-4aa2-a05b-4ee3c2949588" path="/var/lib/kubelet/pods/81119e5c-67b3-4aa2-a05b-4ee3c2949588/volumes" Jan 21 07:25:24 crc kubenswrapper[4893]: I0121 07:25:24.237749 4893 generic.go:334] "Generic (PLEG): container finished" podID="db03fe3c-2ba6-451e-a2a0-0e6c9be9cda8" containerID="e97180dcb645f6e37a11c49c2d370706bee939ab70b1fc192275a59bf6814223" exitCode=0 Jan 21 07:25:24 crc kubenswrapper[4893]: I0121 07:25:24.237816 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t29g6" event={"ID":"db03fe3c-2ba6-451e-a2a0-0e6c9be9cda8","Type":"ContainerDied","Data":"e97180dcb645f6e37a11c49c2d370706bee939ab70b1fc192275a59bf6814223"} Jan 21 07:25:24 crc kubenswrapper[4893]: I0121 07:25:24.582483 4893 scope.go:117] "RemoveContainer" containerID="325a0207fb4c2ec1fa7e041a8980c7916a769a35b943f6d62d67be9f953dbe2f" Jan 21 07:25:24 crc kubenswrapper[4893]: E0121 07:25:24.582970 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 07:25:25 crc kubenswrapper[4893]: I0121 07:25:25.252878 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t29g6" event={"ID":"db03fe3c-2ba6-451e-a2a0-0e6c9be9cda8","Type":"ContainerStarted","Data":"c8979694aacb8196dc64856e2565b0714b792ac75d3efe99e104a1847b469494"} Jan 21 
07:25:25 crc kubenswrapper[4893]: I0121 07:25:25.285251 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-t29g6" podStartSLOduration=2.600794341 podStartE2EDuration="5.285221087s" podCreationTimestamp="2026-01-21 07:25:20 +0000 UTC" firstStartedPulling="2026-01-21 07:25:22.185881426 +0000 UTC m=+1863.416227328" lastFinishedPulling="2026-01-21 07:25:24.870308162 +0000 UTC m=+1866.100654074" observedRunningTime="2026-01-21 07:25:25.276750323 +0000 UTC m=+1866.507096275" watchObservedRunningTime="2026-01-21 07:25:25.285221087 +0000 UTC m=+1866.515567009" Jan 21 07:25:31 crc kubenswrapper[4893]: I0121 07:25:31.306145 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-t29g6" Jan 21 07:25:31 crc kubenswrapper[4893]: I0121 07:25:31.306654 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-t29g6" Jan 21 07:25:31 crc kubenswrapper[4893]: I0121 07:25:31.347433 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-t29g6" Jan 21 07:25:32 crc kubenswrapper[4893]: I0121 07:25:32.427479 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-t29g6" Jan 21 07:25:32 crc kubenswrapper[4893]: I0121 07:25:32.472371 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-t29g6"] Jan 21 07:25:33 crc kubenswrapper[4893]: I0121 07:25:33.674713 4893 scope.go:117] "RemoveContainer" containerID="60f688166eda4d1cccb657d68359251422e6de06cde57e1ef44e96fed4628bcf" Jan 21 07:25:33 crc kubenswrapper[4893]: I0121 07:25:33.700590 4893 scope.go:117] "RemoveContainer" containerID="eb9bfa7e6d2c3f6e676c90c70098ed1b209a57c6c641ebe5b08bfb896a4460b3" Jan 21 07:25:33 crc kubenswrapper[4893]: I0121 07:25:33.760405 4893 scope.go:117] "RemoveContainer" containerID="38c42abbce1014081a8dc5529864400d15af55a1a974c937c6bed58207f6722d" Jan 21 07:25:34 crc kubenswrapper[4893]: I0121 07:25:34.336587 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-t29g6" podUID="db03fe3c-2ba6-451e-a2a0-0e6c9be9cda8" containerName="registry-server" containerID="cri-o://c8979694aacb8196dc64856e2565b0714b792ac75d3efe99e104a1847b469494" gracePeriod=2 Jan 21 07:25:35 crc kubenswrapper[4893]: I0121 07:25:35.350359 4893 generic.go:334] "Generic (PLEG): container finished" podID="db03fe3c-2ba6-451e-a2a0-0e6c9be9cda8" containerID="c8979694aacb8196dc64856e2565b0714b792ac75d3efe99e104a1847b469494" exitCode=0 Jan 21 07:25:35 crc kubenswrapper[4893]: I0121 07:25:35.350432 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t29g6" event={"ID":"db03fe3c-2ba6-451e-a2a0-0e6c9be9cda8","Type":"ContainerDied","Data":"c8979694aacb8196dc64856e2565b0714b792ac75d3efe99e104a1847b469494"} Jan 21 07:25:35 crc kubenswrapper[4893]: I0121 07:25:35.882478 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-t29g6" Jan 21 07:25:36 crc kubenswrapper[4893]: I0121 07:25:36.015259 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db03fe3c-2ba6-451e-a2a0-0e6c9be9cda8-utilities\") pod \"db03fe3c-2ba6-451e-a2a0-0e6c9be9cda8\" (UID: \"db03fe3c-2ba6-451e-a2a0-0e6c9be9cda8\") " Jan 21 07:25:36 crc kubenswrapper[4893]: I0121 07:25:36.015421 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xf4kh\" (UniqueName: \"kubernetes.io/projected/db03fe3c-2ba6-451e-a2a0-0e6c9be9cda8-kube-api-access-xf4kh\") pod \"db03fe3c-2ba6-451e-a2a0-0e6c9be9cda8\" (UID: \"db03fe3c-2ba6-451e-a2a0-0e6c9be9cda8\") " Jan 21 07:25:36 crc kubenswrapper[4893]: I0121 07:25:36.015639 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db03fe3c-2ba6-451e-a2a0-0e6c9be9cda8-catalog-content\") pod \"db03fe3c-2ba6-451e-a2a0-0e6c9be9cda8\" (UID: \"db03fe3c-2ba6-451e-a2a0-0e6c9be9cda8\") " Jan 21 07:25:36 crc kubenswrapper[4893]: I0121 07:25:36.016419 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db03fe3c-2ba6-451e-a2a0-0e6c9be9cda8-utilities" (OuterVolumeSpecName: "utilities") pod "db03fe3c-2ba6-451e-a2a0-0e6c9be9cda8" (UID: "db03fe3c-2ba6-451e-a2a0-0e6c9be9cda8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:25:36 crc kubenswrapper[4893]: I0121 07:25:36.023779 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db03fe3c-2ba6-451e-a2a0-0e6c9be9cda8-kube-api-access-xf4kh" (OuterVolumeSpecName: "kube-api-access-xf4kh") pod "db03fe3c-2ba6-451e-a2a0-0e6c9be9cda8" (UID: "db03fe3c-2ba6-451e-a2a0-0e6c9be9cda8"). InnerVolumeSpecName "kube-api-access-xf4kh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:25:36 crc kubenswrapper[4893]: I0121 07:25:36.089223 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db03fe3c-2ba6-451e-a2a0-0e6c9be9cda8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "db03fe3c-2ba6-451e-a2a0-0e6c9be9cda8" (UID: "db03fe3c-2ba6-451e-a2a0-0e6c9be9cda8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:25:36 crc kubenswrapper[4893]: I0121 07:25:36.118286 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db03fe3c-2ba6-451e-a2a0-0e6c9be9cda8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 07:25:36 crc kubenswrapper[4893]: I0121 07:25:36.118341 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db03fe3c-2ba6-451e-a2a0-0e6c9be9cda8-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 07:25:36 crc kubenswrapper[4893]: I0121 07:25:36.118356 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xf4kh\" (UniqueName: \"kubernetes.io/projected/db03fe3c-2ba6-451e-a2a0-0e6c9be9cda8-kube-api-access-xf4kh\") on node \"crc\" DevicePath \"\"" Jan 21 07:25:36 crc kubenswrapper[4893]: I0121 07:25:36.366161 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t29g6" event={"ID":"db03fe3c-2ba6-451e-a2a0-0e6c9be9cda8","Type":"ContainerDied","Data":"e10567387f2745e6f78ce0b60ed6f32938410edaa9c695d0ef286cf3c769b39d"} Jan 21 07:25:36 crc kubenswrapper[4893]: I0121 07:25:36.366386 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t29g6" Jan 21 07:25:36 crc kubenswrapper[4893]: I0121 07:25:36.366420 4893 scope.go:117] "RemoveContainer" containerID="c8979694aacb8196dc64856e2565b0714b792ac75d3efe99e104a1847b469494" Jan 21 07:25:36 crc kubenswrapper[4893]: I0121 07:25:36.422065 4893 scope.go:117] "RemoveContainer" containerID="e97180dcb645f6e37a11c49c2d370706bee939ab70b1fc192275a59bf6814223" Jan 21 07:25:36 crc kubenswrapper[4893]: I0121 07:25:36.427702 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-t29g6"] Jan 21 07:25:36 crc kubenswrapper[4893]: I0121 07:25:36.435756 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-t29g6"] Jan 21 07:25:36 crc kubenswrapper[4893]: I0121 07:25:36.452260 4893 scope.go:117] "RemoveContainer" containerID="57ccc2c7ea4dc4f11b24f82daede1f6656cc3f7ae0a826170290748d7c53e457" Jan 21 07:25:37 crc kubenswrapper[4893]: I0121 07:25:37.598562 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db03fe3c-2ba6-451e-a2a0-0e6c9be9cda8" path="/var/lib/kubelet/pods/db03fe3c-2ba6-451e-a2a0-0e6c9be9cda8/volumes" Jan 21 07:25:39 crc kubenswrapper[4893]: I0121 07:25:39.585510 4893 scope.go:117] "RemoveContainer" containerID="325a0207fb4c2ec1fa7e041a8980c7916a769a35b943f6d62d67be9f953dbe2f" Jan 21 07:25:39 crc kubenswrapper[4893]: E0121 07:25:39.585928 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 07:25:52 crc kubenswrapper[4893]: I0121 07:25:52.580871 4893 scope.go:117] "RemoveContainer" containerID="325a0207fb4c2ec1fa7e041a8980c7916a769a35b943f6d62d67be9f953dbe2f" Jan 21 07:25:52 crc kubenswrapper[4893]: E0121 07:25:52.581561 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 07:26:07 crc kubenswrapper[4893]: I0121 07:26:07.581739 4893 scope.go:117] "RemoveContainer" containerID="325a0207fb4c2ec1fa7e041a8980c7916a769a35b943f6d62d67be9f953dbe2f" Jan 21 07:26:07 crc kubenswrapper[4893]: E0121 07:26:07.582559 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 07:26:19 crc kubenswrapper[4893]: I0121 07:26:19.595181 4893 scope.go:117] "RemoveContainer" containerID="325a0207fb4c2ec1fa7e041a8980c7916a769a35b943f6d62d67be9f953dbe2f" Jan 21 07:26:19 crc kubenswrapper[4893]: E0121 07:26:19.596593 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 07:26:33 crc kubenswrapper[4893]: I0121 07:26:33.835992 4893 scope.go:117] "RemoveContainer" containerID="eab6c0154d8e24cc89497e5f14cf2a2d8e6fbe8bbe8e3773a8b5ef15313adcf2" Jan 21 07:26:33 crc kubenswrapper[4893]: I0121 07:26:33.887851 4893 scope.go:117] "RemoveContainer" containerID="1d916db55c52fdd79903dfcda989284aff74e0a176d6b57776961e238500c55c" Jan 21 07:26:33 crc kubenswrapper[4893]: I0121 07:26:33.944554 4893 scope.go:117] "RemoveContainer" containerID="df3f218713a1a3e87462926b70814e835ae16e43f96d8ac34ef178ec92248ed4" Jan 21 07:26:33 crc kubenswrapper[4893]: I0121 07:26:33.991745 4893 scope.go:117] "RemoveContainer" containerID="2cc9725ab12cd661bd2f547af67d3fedddf8c443202b9a6f9a62d9b0fcde6149" Jan 21 07:26:34 crc kubenswrapper[4893]: I0121 07:26:34.019090 4893 scope.go:117] "RemoveContainer" containerID="42348ea69c139f7e2e81e332a59dfd8cad34e7569e58a9e0d74abfe81e742780" Jan 21 07:26:34 crc kubenswrapper[4893]: I0121 07:26:34.084703 4893 scope.go:117] "RemoveContainer" containerID="34ec56016ccbe6bf1d879400f1a064708e3bd82739bb36206e2a0d44a5e8618c" Jan 21 07:26:34 crc kubenswrapper[4893]: I0121 07:26:34.155216 4893 scope.go:117] "RemoveContainer" containerID="a5f5e6d134a5ef20549d287142a86ef1c8bc06298be5d15521ca276897db55d7" Jan 21 07:26:34 crc kubenswrapper[4893]: I0121 07:26:34.210257 4893 scope.go:117] "RemoveContainer" containerID="e18a345419c29dccee9610df29465e9c2c633cf97f34213f1d0002603db57da4" Jan 21 07:26:34 crc kubenswrapper[4893]: I0121 07:26:34.243134 4893 scope.go:117] "RemoveContainer" containerID="3cf08623b3d4ea248b91b7f0ba462c043519e8b8478a0fd43d26a24fe8b82d50" Jan 21 07:26:34 crc kubenswrapper[4893]: I0121 07:26:34.270935 4893 scope.go:117] "RemoveContainer" containerID="66ea28ad52999a6b6a22c95a9e03aa8010e494c9ef28ed1353dec5ea9b2e0e67" Jan 21 07:26:34 crc kubenswrapper[4893]: I0121 07:26:34.581153 4893 scope.go:117] "RemoveContainer" 
containerID="325a0207fb4c2ec1fa7e041a8980c7916a769a35b943f6d62d67be9f953dbe2f" Jan 21 07:26:34 crc kubenswrapper[4893]: E0121 07:26:34.581538 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 07:26:49 crc kubenswrapper[4893]: I0121 07:26:49.591772 4893 scope.go:117] "RemoveContainer" containerID="325a0207fb4c2ec1fa7e041a8980c7916a769a35b943f6d62d67be9f953dbe2f" Jan 21 07:26:49 crc kubenswrapper[4893]: E0121 07:26:49.593128 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 07:27:00 crc kubenswrapper[4893]: I0121 07:27:00.581081 4893 scope.go:117] "RemoveContainer" containerID="325a0207fb4c2ec1fa7e041a8980c7916a769a35b943f6d62d67be9f953dbe2f" Jan 21 07:27:01 crc kubenswrapper[4893]: I0121 07:27:01.196747 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" event={"ID":"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a","Type":"ContainerStarted","Data":"c2d89987c6c9a018b5c0e038101aa69c24ba2324a0e108803064a8a93468c6be"} Jan 21 07:27:34 crc kubenswrapper[4893]: I0121 07:27:34.525160 4893 scope.go:117] "RemoveContainer" containerID="88db4e29765c55f1a72698cb0b6f216ca1cb80f3ab802906082f781183257c89" Jan 21 07:27:34 crc kubenswrapper[4893]: I0121 07:27:34.608999 4893 scope.go:117] "RemoveContainer" containerID="2d1941830623140962da37a4e0f34493c6b7e19795a6d3b41ff47906cfda9a51" Jan 21 07:27:34 crc kubenswrapper[4893]: I0121 07:27:34.638951 4893 scope.go:117] "RemoveContainer" containerID="464e83f2ee6ddc904cb0c2f30a2e9e9ff5e51b54b0dc10182aa679870dcce8ad" Jan 21 07:27:34 crc kubenswrapper[4893]: I0121 07:27:34.664503 4893 scope.go:117] "RemoveContainer" containerID="2c2c02392de0b0a37af88f2c56d0d651f8087c548af9e04ca796d743db6bb733" Jan 21 07:29:28 crc kubenswrapper[4893]: I0121 07:29:28.657367 4893 patch_prober.go:28] interesting pod/machine-config-daemon-hg78p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 07:29:28 crc kubenswrapper[4893]: I0121 07:29:28.658098 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 07:29:58 crc kubenswrapper[4893]: I0121 07:29:58.657115 4893 patch_prober.go:28] interesting pod/machine-config-daemon-hg78p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Jan 21 07:29:58 crc kubenswrapper[4893]: I0121 07:29:58.658097 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 07:30:00 crc kubenswrapper[4893]: I0121 07:30:00.282666 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483010-fcwzz"] Jan 21 07:30:00 crc kubenswrapper[4893]: E0121 07:30:00.283315 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81119e5c-67b3-4aa2-a05b-4ee3c2949588" containerName="registry-server" Jan 21 07:30:00 crc kubenswrapper[4893]: I0121 07:30:00.283330 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="81119e5c-67b3-4aa2-a05b-4ee3c2949588" containerName="registry-server" Jan 21 07:30:00 crc kubenswrapper[4893]: E0121 07:30:00.283341 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81119e5c-67b3-4aa2-a05b-4ee3c2949588" containerName="extract-utilities" Jan 21 07:30:00 crc kubenswrapper[4893]: I0121 07:30:00.283348 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="81119e5c-67b3-4aa2-a05b-4ee3c2949588" containerName="extract-utilities" Jan 21 07:30:00 crc kubenswrapper[4893]: E0121 07:30:00.283360 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db03fe3c-2ba6-451e-a2a0-0e6c9be9cda8" containerName="registry-server" Jan 21 07:30:00 crc kubenswrapper[4893]: I0121 07:30:00.283366 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="db03fe3c-2ba6-451e-a2a0-0e6c9be9cda8" containerName="registry-server" Jan 21 07:30:00 crc kubenswrapper[4893]: E0121 07:30:00.283388 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db03fe3c-2ba6-451e-a2a0-0e6c9be9cda8" containerName="extract-utilities" Jan 21 07:30:00 crc kubenswrapper[4893]: I0121 07:30:00.283395 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="db03fe3c-2ba6-451e-a2a0-0e6c9be9cda8" containerName="extract-utilities" Jan 21 07:30:00 crc kubenswrapper[4893]: E0121 07:30:00.283419 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db03fe3c-2ba6-451e-a2a0-0e6c9be9cda8" containerName="extract-content" Jan 21 07:30:00 crc kubenswrapper[4893]: I0121 07:30:00.283425 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="db03fe3c-2ba6-451e-a2a0-0e6c9be9cda8" containerName="extract-content" Jan 21 07:30:00 crc kubenswrapper[4893]: E0121 07:30:00.283440 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81119e5c-67b3-4aa2-a05b-4ee3c2949588" containerName="extract-content" Jan 21 07:30:00 crc kubenswrapper[4893]: I0121 07:30:00.283446 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="81119e5c-67b3-4aa2-a05b-4ee3c2949588" containerName="extract-content" Jan 21 07:30:00 crc kubenswrapper[4893]: I0121 07:30:00.283585 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="db03fe3c-2ba6-451e-a2a0-0e6c9be9cda8" containerName="registry-server" Jan 21 07:30:00 crc kubenswrapper[4893]: I0121 07:30:00.283599 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="81119e5c-67b3-4aa2-a05b-4ee3c2949588" containerName="registry-server" Jan 21 07:30:00 crc kubenswrapper[4893]: I0121 07:30:00.284187 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483010-fcwzz" Jan 21 07:30:00 crc kubenswrapper[4893]: I0121 07:30:00.289454 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 07:30:00 crc kubenswrapper[4893]: I0121 07:30:00.295245 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 07:30:00 crc kubenswrapper[4893]: I0121 07:30:00.300266 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483010-fcwzz"] Jan 21 07:30:00 crc kubenswrapper[4893]: I0121 07:30:00.462293 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0eb41897-eaa1-496a-b279-580efc5f77c3-config-volume\") pod \"collect-profiles-29483010-fcwzz\" (UID: \"0eb41897-eaa1-496a-b279-580efc5f77c3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483010-fcwzz" Jan 21 07:30:00 crc kubenswrapper[4893]: I0121 07:30:00.463174 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxg28\" (UniqueName: \"kubernetes.io/projected/0eb41897-eaa1-496a-b279-580efc5f77c3-kube-api-access-dxg28\") pod \"collect-profiles-29483010-fcwzz\" (UID: \"0eb41897-eaa1-496a-b279-580efc5f77c3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483010-fcwzz" Jan 21 07:30:00 crc kubenswrapper[4893]: I0121 07:30:00.463342 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0eb41897-eaa1-496a-b279-580efc5f77c3-secret-volume\") pod \"collect-profiles-29483010-fcwzz\" (UID: \"0eb41897-eaa1-496a-b279-580efc5f77c3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483010-fcwzz" Jan 21 07:30:00 crc kubenswrapper[4893]: I0121 07:30:00.565844 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0eb41897-eaa1-496a-b279-580efc5f77c3-config-volume\") pod \"collect-profiles-29483010-fcwzz\" (UID: \"0eb41897-eaa1-496a-b279-580efc5f77c3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483010-fcwzz" Jan 21 07:30:00 crc kubenswrapper[4893]: I0121 07:30:00.566034 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxg28\" (UniqueName: \"kubernetes.io/projected/0eb41897-eaa1-496a-b279-580efc5f77c3-kube-api-access-dxg28\") pod \"collect-profiles-29483010-fcwzz\" (UID: \"0eb41897-eaa1-496a-b279-580efc5f77c3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483010-fcwzz" Jan 21 07:30:00 crc kubenswrapper[4893]: I0121 07:30:00.566096 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0eb41897-eaa1-496a-b279-580efc5f77c3-secret-volume\") pod \"collect-profiles-29483010-fcwzz\" (UID: \"0eb41897-eaa1-496a-b279-580efc5f77c3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483010-fcwzz" Jan 21 07:30:00 crc kubenswrapper[4893]: I0121 07:30:00.568196 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0eb41897-eaa1-496a-b279-580efc5f77c3-config-volume\") pod 
\"collect-profiles-29483010-fcwzz\" (UID: \"0eb41897-eaa1-496a-b279-580efc5f77c3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483010-fcwzz" Jan 21 07:30:00 crc kubenswrapper[4893]: I0121 07:30:00.593702 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0eb41897-eaa1-496a-b279-580efc5f77c3-secret-volume\") pod \"collect-profiles-29483010-fcwzz\" (UID: \"0eb41897-eaa1-496a-b279-580efc5f77c3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483010-fcwzz" Jan 21 07:30:00 crc kubenswrapper[4893]: I0121 07:30:00.599371 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxg28\" (UniqueName: \"kubernetes.io/projected/0eb41897-eaa1-496a-b279-580efc5f77c3-kube-api-access-dxg28\") pod \"collect-profiles-29483010-fcwzz\" (UID: \"0eb41897-eaa1-496a-b279-580efc5f77c3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483010-fcwzz" Jan 21 07:30:00 crc kubenswrapper[4893]: I0121 07:30:00.609612 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483010-fcwzz" Jan 21 07:30:01 crc kubenswrapper[4893]: I0121 07:30:01.121730 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483010-fcwzz"] Jan 21 07:30:01 crc kubenswrapper[4893]: I0121 07:30:01.386441 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483010-fcwzz" event={"ID":"0eb41897-eaa1-496a-b279-580efc5f77c3","Type":"ContainerStarted","Data":"71318b9203c1a6d1de137fd82f3cc93394e06fe92783fca133533b72677f87d4"} Jan 21 07:30:01 crc kubenswrapper[4893]: I0121 07:30:01.386498 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483010-fcwzz" event={"ID":"0eb41897-eaa1-496a-b279-580efc5f77c3","Type":"ContainerStarted","Data":"e56c722bb898f5e5cf3c964d4f0dfdad90f89521086cd3401bcaa8104fc6762e"} Jan 21 07:30:02 crc kubenswrapper[4893]: I0121 07:30:02.395418 4893 generic.go:334] "Generic (PLEG): container finished" podID="0eb41897-eaa1-496a-b279-580efc5f77c3" containerID="71318b9203c1a6d1de137fd82f3cc93394e06fe92783fca133533b72677f87d4" exitCode=0 Jan 21 07:30:02 crc kubenswrapper[4893]: I0121 07:30:02.395512 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483010-fcwzz" event={"ID":"0eb41897-eaa1-496a-b279-580efc5f77c3","Type":"ContainerDied","Data":"71318b9203c1a6d1de137fd82f3cc93394e06fe92783fca133533b72677f87d4"} Jan 21 07:30:03 crc kubenswrapper[4893]: I0121 07:30:03.773208 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483010-fcwzz" Jan 21 07:30:03 crc kubenswrapper[4893]: I0121 07:30:03.887372 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0eb41897-eaa1-496a-b279-580efc5f77c3-secret-volume\") pod \"0eb41897-eaa1-496a-b279-580efc5f77c3\" (UID: \"0eb41897-eaa1-496a-b279-580efc5f77c3\") " Jan 21 07:30:03 crc kubenswrapper[4893]: I0121 07:30:03.887469 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0eb41897-eaa1-496a-b279-580efc5f77c3-config-volume\") pod \"0eb41897-eaa1-496a-b279-580efc5f77c3\" (UID: \"0eb41897-eaa1-496a-b279-580efc5f77c3\") " Jan 21 07:30:03 crc kubenswrapper[4893]: I0121 07:30:03.887569 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxg28\" (UniqueName: \"kubernetes.io/projected/0eb41897-eaa1-496a-b279-580efc5f77c3-kube-api-access-dxg28\") pod \"0eb41897-eaa1-496a-b279-580efc5f77c3\" (UID: \"0eb41897-eaa1-496a-b279-580efc5f77c3\") " Jan 21 07:30:03 crc kubenswrapper[4893]: I0121 07:30:03.889039 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0eb41897-eaa1-496a-b279-580efc5f77c3-config-volume" (OuterVolumeSpecName: "config-volume") pod "0eb41897-eaa1-496a-b279-580efc5f77c3" (UID: "0eb41897-eaa1-496a-b279-580efc5f77c3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:30:03 crc kubenswrapper[4893]: I0121 07:30:03.894530 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0eb41897-eaa1-496a-b279-580efc5f77c3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0eb41897-eaa1-496a-b279-580efc5f77c3" (UID: "0eb41897-eaa1-496a-b279-580efc5f77c3"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:30:03 crc kubenswrapper[4893]: I0121 07:30:03.894660 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0eb41897-eaa1-496a-b279-580efc5f77c3-kube-api-access-dxg28" (OuterVolumeSpecName: "kube-api-access-dxg28") pod "0eb41897-eaa1-496a-b279-580efc5f77c3" (UID: "0eb41897-eaa1-496a-b279-580efc5f77c3"). InnerVolumeSpecName "kube-api-access-dxg28". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:30:03 crc kubenswrapper[4893]: I0121 07:30:03.989554 4893 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0eb41897-eaa1-496a-b279-580efc5f77c3-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 07:30:03 crc kubenswrapper[4893]: I0121 07:30:03.989607 4893 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0eb41897-eaa1-496a-b279-580efc5f77c3-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 07:30:03 crc kubenswrapper[4893]: I0121 07:30:03.989623 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxg28\" (UniqueName: \"kubernetes.io/projected/0eb41897-eaa1-496a-b279-580efc5f77c3-kube-api-access-dxg28\") on node \"crc\" DevicePath \"\"" Jan 21 07:30:04 crc kubenswrapper[4893]: I0121 07:30:04.418908 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483010-fcwzz" event={"ID":"0eb41897-eaa1-496a-b279-580efc5f77c3","Type":"ContainerDied","Data":"e56c722bb898f5e5cf3c964d4f0dfdad90f89521086cd3401bcaa8104fc6762e"} Jan 21 07:30:04 crc kubenswrapper[4893]: I0121 07:30:04.418971 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e56c722bb898f5e5cf3c964d4f0dfdad90f89521086cd3401bcaa8104fc6762e" Jan 21 07:30:04 crc kubenswrapper[4893]: I0121 07:30:04.418979 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483010-fcwzz" Jan 21 07:30:04 crc kubenswrapper[4893]: I0121 07:30:04.516945 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482965-hgsf2"] Jan 21 07:30:04 crc kubenswrapper[4893]: I0121 07:30:04.524561 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482965-hgsf2"] Jan 21 07:30:05 crc kubenswrapper[4893]: I0121 07:30:05.599648 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff046ea4-caba-480a-8242-eb099a1f136e" path="/var/lib/kubelet/pods/ff046ea4-caba-480a-8242-eb099a1f136e/volumes" Jan 21 07:30:28 crc kubenswrapper[4893]: I0121 07:30:28.657656 4893 patch_prober.go:28] interesting pod/machine-config-daemon-hg78p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 07:30:28 crc kubenswrapper[4893]: I0121 07:30:28.658619 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 07:30:28 crc kubenswrapper[4893]: I0121 07:30:28.658748 4893 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" Jan 21 07:30:28 crc kubenswrapper[4893]: I0121 07:30:28.659901 4893 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c2d89987c6c9a018b5c0e038101aa69c24ba2324a0e108803064a8a93468c6be"} 
pod="openshift-machine-config-operator/machine-config-daemon-hg78p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 07:30:28 crc kubenswrapper[4893]: I0121 07:30:28.659981 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" containerID="cri-o://c2d89987c6c9a018b5c0e038101aa69c24ba2324a0e108803064a8a93468c6be" gracePeriod=600 Jan 21 07:30:29 crc kubenswrapper[4893]: I0121 07:30:29.809049 4893 generic.go:334] "Generic (PLEG): container finished" podID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerID="c2d89987c6c9a018b5c0e038101aa69c24ba2324a0e108803064a8a93468c6be" exitCode=0 Jan 21 07:30:29 crc kubenswrapper[4893]: I0121 07:30:29.809202 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" event={"ID":"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a","Type":"ContainerDied","Data":"c2d89987c6c9a018b5c0e038101aa69c24ba2324a0e108803064a8a93468c6be"} Jan 21 07:30:29 crc kubenswrapper[4893]: I0121 07:30:29.809509 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" event={"ID":"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a","Type":"ContainerStarted","Data":"3f32a27c3e69b19ffbfae3599d698be427f1aa295f9c95dd19a4b2ab997261ed"} Jan 21 07:30:29 crc kubenswrapper[4893]: I0121 07:30:29.809549 4893 scope.go:117] "RemoveContainer" containerID="325a0207fb4c2ec1fa7e041a8980c7916a769a35b943f6d62d67be9f953dbe2f" Jan 21 07:30:34 crc kubenswrapper[4893]: I0121 07:30:34.817200 4893 scope.go:117] "RemoveContainer" containerID="80ee5b060c65bd0ed034f8bd385b55c48a441360bde3e6494a12853c1a275ff2" Jan 21 07:31:37 crc kubenswrapper[4893]: I0121 07:31:37.415623 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zwzq8"] Jan 21 07:31:37 crc kubenswrapper[4893]: E0121 07:31:37.419151 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eb41897-eaa1-496a-b279-580efc5f77c3" containerName="collect-profiles" Jan 21 07:31:37 crc kubenswrapper[4893]: I0121 07:31:37.419202 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eb41897-eaa1-496a-b279-580efc5f77c3" containerName="collect-profiles" Jan 21 07:31:37 crc kubenswrapper[4893]: I0121 07:31:37.419505 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="0eb41897-eaa1-496a-b279-580efc5f77c3" containerName="collect-profiles" Jan 21 07:31:37 crc kubenswrapper[4893]: I0121 07:31:37.421608 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zwzq8" Jan 21 07:31:37 crc kubenswrapper[4893]: I0121 07:31:37.446140 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zwzq8"] Jan 21 07:31:37 crc kubenswrapper[4893]: I0121 07:31:37.536058 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08b103e1-e85c-462b-9539-9a88c1a542e0-utilities\") pod \"redhat-marketplace-zwzq8\" (UID: \"08b103e1-e85c-462b-9539-9a88c1a542e0\") " pod="openshift-marketplace/redhat-marketplace-zwzq8" Jan 21 07:31:37 crc kubenswrapper[4893]: I0121 07:31:37.536131 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08b103e1-e85c-462b-9539-9a88c1a542e0-catalog-content\") pod \"redhat-marketplace-zwzq8\" (UID: \"08b103e1-e85c-462b-9539-9a88c1a542e0\") " pod="openshift-marketplace/redhat-marketplace-zwzq8" Jan 21 07:31:37 crc kubenswrapper[4893]: I0121 07:31:37.536182 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nglt4\" (UniqueName: \"kubernetes.io/projected/08b103e1-e85c-462b-9539-9a88c1a542e0-kube-api-access-nglt4\") pod \"redhat-marketplace-zwzq8\" (UID: \"08b103e1-e85c-462b-9539-9a88c1a542e0\") " pod="openshift-marketplace/redhat-marketplace-zwzq8" Jan 21 07:31:37 crc kubenswrapper[4893]: I0121 07:31:37.637322 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08b103e1-e85c-462b-9539-9a88c1a542e0-utilities\") pod \"redhat-marketplace-zwzq8\" (UID: \"08b103e1-e85c-462b-9539-9a88c1a542e0\") " pod="openshift-marketplace/redhat-marketplace-zwzq8" Jan 21 07:31:37 crc kubenswrapper[4893]: I0121 07:31:37.637603 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08b103e1-e85c-462b-9539-9a88c1a542e0-catalog-content\") pod \"redhat-marketplace-zwzq8\" (UID: \"08b103e1-e85c-462b-9539-9a88c1a542e0\") " pod="openshift-marketplace/redhat-marketplace-zwzq8" Jan 21 07:31:37 crc kubenswrapper[4893]: I0121 07:31:37.637683 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nglt4\" (UniqueName: \"kubernetes.io/projected/08b103e1-e85c-462b-9539-9a88c1a542e0-kube-api-access-nglt4\") pod \"redhat-marketplace-zwzq8\" (UID: \"08b103e1-e85c-462b-9539-9a88c1a542e0\") " pod="openshift-marketplace/redhat-marketplace-zwzq8" Jan 21 07:31:37 crc kubenswrapper[4893]: I0121 07:31:37.638072 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08b103e1-e85c-462b-9539-9a88c1a542e0-catalog-content\") pod \"redhat-marketplace-zwzq8\" (UID: \"08b103e1-e85c-462b-9539-9a88c1a542e0\") " pod="openshift-marketplace/redhat-marketplace-zwzq8" Jan 21 07:31:37 crc kubenswrapper[4893]: I0121 07:31:37.638077 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08b103e1-e85c-462b-9539-9a88c1a542e0-utilities\") pod \"redhat-marketplace-zwzq8\" (UID: \"08b103e1-e85c-462b-9539-9a88c1a542e0\") " pod="openshift-marketplace/redhat-marketplace-zwzq8" Jan 21 07:31:37 crc kubenswrapper[4893]: I0121 07:31:37.657237 4893 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-nglt4\" (UniqueName: \"kubernetes.io/projected/08b103e1-e85c-462b-9539-9a88c1a542e0-kube-api-access-nglt4\") pod \"redhat-marketplace-zwzq8\" (UID: \"08b103e1-e85c-462b-9539-9a88c1a542e0\") " pod="openshift-marketplace/redhat-marketplace-zwzq8" Jan 21 07:31:37 crc kubenswrapper[4893]: I0121 07:31:37.763864 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zwzq8" Jan 21 07:31:38 crc kubenswrapper[4893]: I0121 07:31:38.226575 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zwzq8"] Jan 21 07:31:38 crc kubenswrapper[4893]: I0121 07:31:38.552300 4893 generic.go:334] "Generic (PLEG): container finished" podID="08b103e1-e85c-462b-9539-9a88c1a542e0" containerID="86d96e565e96370afb6cfcec560b65fadd013bc59e1d02499a0d7d7ba3e8160d" exitCode=0 Jan 21 07:31:38 crc kubenswrapper[4893]: I0121 07:31:38.552362 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwzq8" event={"ID":"08b103e1-e85c-462b-9539-9a88c1a542e0","Type":"ContainerDied","Data":"86d96e565e96370afb6cfcec560b65fadd013bc59e1d02499a0d7d7ba3e8160d"} Jan 21 07:31:38 crc kubenswrapper[4893]: I0121 07:31:38.552391 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwzq8" event={"ID":"08b103e1-e85c-462b-9539-9a88c1a542e0","Type":"ContainerStarted","Data":"28e672e08bdf5d923e874c713f643a2986b21e7134cc02edf449c1421799baa4"} Jan 21 07:31:38 crc kubenswrapper[4893]: I0121 07:31:38.555050 4893 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 07:31:39 crc kubenswrapper[4893]: I0121 07:31:39.562537 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwzq8" event={"ID":"08b103e1-e85c-462b-9539-9a88c1a542e0","Type":"ContainerStarted","Data":"17efca45f449854c936b60f07c938e6e0227b526d7add5d61f999c1bfda2cfc5"} Jan 21 07:31:40 crc kubenswrapper[4893]: I0121 07:31:40.575031 4893 generic.go:334] "Generic (PLEG): container finished" podID="08b103e1-e85c-462b-9539-9a88c1a542e0" containerID="17efca45f449854c936b60f07c938e6e0227b526d7add5d61f999c1bfda2cfc5" exitCode=0 Jan 21 07:31:40 crc kubenswrapper[4893]: I0121 07:31:40.575108 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwzq8" event={"ID":"08b103e1-e85c-462b-9539-9a88c1a542e0","Type":"ContainerDied","Data":"17efca45f449854c936b60f07c938e6e0227b526d7add5d61f999c1bfda2cfc5"} Jan 21 07:31:41 crc kubenswrapper[4893]: I0121 07:31:41.591323 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwzq8" event={"ID":"08b103e1-e85c-462b-9539-9a88c1a542e0","Type":"ContainerStarted","Data":"8b5d434f73e13de49277e66d7220f2face3d915d230d6de6f0a0e56ad39c3588"} Jan 21 07:31:41 crc kubenswrapper[4893]: I0121 07:31:41.621244 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zwzq8" podStartSLOduration=2.152824898 podStartE2EDuration="4.621196828s" podCreationTimestamp="2026-01-21 07:31:37 +0000 UTC" firstStartedPulling="2026-01-21 07:31:38.554704767 +0000 UTC m=+2239.785050669" lastFinishedPulling="2026-01-21 07:31:41.023076667 +0000 UTC m=+2242.253422599" observedRunningTime="2026-01-21 07:31:41.611351365 +0000 UTC m=+2242.841697297" watchObservedRunningTime="2026-01-21 07:31:41.621196828 +0000 UTC 
m=+2242.851542750" Jan 21 07:31:47 crc kubenswrapper[4893]: I0121 07:31:47.765003 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zwzq8" Jan 21 07:31:47 crc kubenswrapper[4893]: I0121 07:31:47.766245 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zwzq8" Jan 21 07:31:47 crc kubenswrapper[4893]: I0121 07:31:47.849758 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zwzq8" Jan 21 07:31:48 crc kubenswrapper[4893]: I0121 07:31:48.712850 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zwzq8" Jan 21 07:31:48 crc kubenswrapper[4893]: I0121 07:31:48.813373 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zwzq8"] Jan 21 07:31:50 crc kubenswrapper[4893]: I0121 07:31:50.685152 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zwzq8" podUID="08b103e1-e85c-462b-9539-9a88c1a542e0" containerName="registry-server" containerID="cri-o://8b5d434f73e13de49277e66d7220f2face3d915d230d6de6f0a0e56ad39c3588" gracePeriod=2 Jan 21 07:31:51 crc kubenswrapper[4893]: I0121 07:31:51.850417 4893 generic.go:334] "Generic (PLEG): container finished" podID="08b103e1-e85c-462b-9539-9a88c1a542e0" containerID="8b5d434f73e13de49277e66d7220f2face3d915d230d6de6f0a0e56ad39c3588" exitCode=0 Jan 21 07:31:51 crc kubenswrapper[4893]: I0121 07:31:51.850766 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwzq8" event={"ID":"08b103e1-e85c-462b-9539-9a88c1a542e0","Type":"ContainerDied","Data":"8b5d434f73e13de49277e66d7220f2face3d915d230d6de6f0a0e56ad39c3588"} Jan 21 07:31:51 crc kubenswrapper[4893]: I0121 07:31:51.943809 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zwzq8" Jan 21 07:31:52 crc kubenswrapper[4893]: I0121 07:31:52.143125 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nglt4\" (UniqueName: \"kubernetes.io/projected/08b103e1-e85c-462b-9539-9a88c1a542e0-kube-api-access-nglt4\") pod \"08b103e1-e85c-462b-9539-9a88c1a542e0\" (UID: \"08b103e1-e85c-462b-9539-9a88c1a542e0\") " Jan 21 07:31:52 crc kubenswrapper[4893]: I0121 07:31:52.143180 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08b103e1-e85c-462b-9539-9a88c1a542e0-utilities\") pod \"08b103e1-e85c-462b-9539-9a88c1a542e0\" (UID: \"08b103e1-e85c-462b-9539-9a88c1a542e0\") " Jan 21 07:31:52 crc kubenswrapper[4893]: I0121 07:31:52.143253 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08b103e1-e85c-462b-9539-9a88c1a542e0-catalog-content\") pod \"08b103e1-e85c-462b-9539-9a88c1a542e0\" (UID: \"08b103e1-e85c-462b-9539-9a88c1a542e0\") " Jan 21 07:31:52 crc kubenswrapper[4893]: I0121 07:31:52.144403 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08b103e1-e85c-462b-9539-9a88c1a542e0-utilities" (OuterVolumeSpecName: "utilities") pod "08b103e1-e85c-462b-9539-9a88c1a542e0" (UID: "08b103e1-e85c-462b-9539-9a88c1a542e0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:31:52 crc kubenswrapper[4893]: I0121 07:31:52.145605 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08b103e1-e85c-462b-9539-9a88c1a542e0-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 07:31:52 crc kubenswrapper[4893]: I0121 07:31:52.154516 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08b103e1-e85c-462b-9539-9a88c1a542e0-kube-api-access-nglt4" (OuterVolumeSpecName: "kube-api-access-nglt4") pod "08b103e1-e85c-462b-9539-9a88c1a542e0" (UID: "08b103e1-e85c-462b-9539-9a88c1a542e0"). InnerVolumeSpecName "kube-api-access-nglt4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:31:52 crc kubenswrapper[4893]: I0121 07:31:52.170372 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08b103e1-e85c-462b-9539-9a88c1a542e0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "08b103e1-e85c-462b-9539-9a88c1a542e0" (UID: "08b103e1-e85c-462b-9539-9a88c1a542e0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:31:52 crc kubenswrapper[4893]: I0121 07:31:52.246230 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08b103e1-e85c-462b-9539-9a88c1a542e0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 07:31:52 crc kubenswrapper[4893]: I0121 07:31:52.246276 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nglt4\" (UniqueName: \"kubernetes.io/projected/08b103e1-e85c-462b-9539-9a88c1a542e0-kube-api-access-nglt4\") on node \"crc\" DevicePath \"\"" Jan 21 07:31:52 crc kubenswrapper[4893]: I0121 07:31:52.867918 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwzq8" event={"ID":"08b103e1-e85c-462b-9539-9a88c1a542e0","Type":"ContainerDied","Data":"28e672e08bdf5d923e874c713f643a2986b21e7134cc02edf449c1421799baa4"} Jan 21 07:31:52 crc kubenswrapper[4893]: I0121 07:31:52.868007 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zwzq8" Jan 21 07:31:52 crc kubenswrapper[4893]: I0121 07:31:52.868018 4893 scope.go:117] "RemoveContainer" containerID="8b5d434f73e13de49277e66d7220f2face3d915d230d6de6f0a0e56ad39c3588" Jan 21 07:31:52 crc kubenswrapper[4893]: I0121 07:31:52.914342 4893 scope.go:117] "RemoveContainer" containerID="17efca45f449854c936b60f07c938e6e0227b526d7add5d61f999c1bfda2cfc5" Jan 21 07:31:52 crc kubenswrapper[4893]: I0121 07:31:52.915786 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zwzq8"] Jan 21 07:31:52 crc kubenswrapper[4893]: I0121 07:31:52.923119 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zwzq8"] Jan 21 07:31:52 crc kubenswrapper[4893]: I0121 07:31:52.933880 4893 scope.go:117] "RemoveContainer" containerID="86d96e565e96370afb6cfcec560b65fadd013bc59e1d02499a0d7d7ba3e8160d" Jan 21 07:31:53 crc kubenswrapper[4893]: I0121 07:31:53.596394 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08b103e1-e85c-462b-9539-9a88c1a542e0" path="/var/lib/kubelet/pods/08b103e1-e85c-462b-9539-9a88c1a542e0/volumes" Jan 21 07:32:23 crc kubenswrapper[4893]: I0121 07:32:23.363840 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4pcvm"] Jan 21 07:32:23 crc kubenswrapper[4893]: E0121 07:32:23.364845 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08b103e1-e85c-462b-9539-9a88c1a542e0" containerName="registry-server" Jan 21 07:32:23 crc kubenswrapper[4893]: I0121 07:32:23.364863 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="08b103e1-e85c-462b-9539-9a88c1a542e0" containerName="registry-server" Jan 21 07:32:23 crc kubenswrapper[4893]: E0121 07:32:23.364883 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08b103e1-e85c-462b-9539-9a88c1a542e0" containerName="extract-content" Jan 21 07:32:23 crc kubenswrapper[4893]: I0121 07:32:23.364891 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="08b103e1-e85c-462b-9539-9a88c1a542e0" containerName="extract-content" Jan 21 07:32:23 crc kubenswrapper[4893]: E0121 07:32:23.364903 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08b103e1-e85c-462b-9539-9a88c1a542e0" containerName="extract-utilities" Jan 21 07:32:23 crc kubenswrapper[4893]: I0121 07:32:23.364911 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="08b103e1-e85c-462b-9539-9a88c1a542e0" containerName="extract-utilities" Jan 21 07:32:23 crc kubenswrapper[4893]: I0121 07:32:23.365091 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="08b103e1-e85c-462b-9539-9a88c1a542e0" containerName="registry-server" Jan 21 07:32:23 crc kubenswrapper[4893]: I0121 07:32:23.366405 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4pcvm" Jan 21 07:32:23 crc kubenswrapper[4893]: I0121 07:32:23.380099 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4pcvm"] Jan 21 07:32:23 crc kubenswrapper[4893]: I0121 07:32:23.491367 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f7e0779-df8a-4102-b429-92be7b796d53-utilities\") pod \"certified-operators-4pcvm\" (UID: \"3f7e0779-df8a-4102-b429-92be7b796d53\") " pod="openshift-marketplace/certified-operators-4pcvm" Jan 21 07:32:23 crc kubenswrapper[4893]: I0121 07:32:23.491458 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92vk5\" (UniqueName: \"kubernetes.io/projected/3f7e0779-df8a-4102-b429-92be7b796d53-kube-api-access-92vk5\") pod \"certified-operators-4pcvm\" (UID: \"3f7e0779-df8a-4102-b429-92be7b796d53\") " pod="openshift-marketplace/certified-operators-4pcvm" Jan 21 07:32:23 crc kubenswrapper[4893]: I0121 07:32:23.491497 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f7e0779-df8a-4102-b429-92be7b796d53-catalog-content\") pod \"certified-operators-4pcvm\" (UID: \"3f7e0779-df8a-4102-b429-92be7b796d53\") " pod="openshift-marketplace/certified-operators-4pcvm" Jan 21 07:32:23 crc kubenswrapper[4893]: I0121 07:32:23.593630 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f7e0779-df8a-4102-b429-92be7b796d53-utilities\") pod \"certified-operators-4pcvm\" (UID: \"3f7e0779-df8a-4102-b429-92be7b796d53\") " pod="openshift-marketplace/certified-operators-4pcvm" Jan 21 07:32:23 crc kubenswrapper[4893]: I0121 07:32:23.593733 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92vk5\" (UniqueName: \"kubernetes.io/projected/3f7e0779-df8a-4102-b429-92be7b796d53-kube-api-access-92vk5\") pod \"certified-operators-4pcvm\" (UID: \"3f7e0779-df8a-4102-b429-92be7b796d53\") " pod="openshift-marketplace/certified-operators-4pcvm" Jan 21 07:32:23 crc kubenswrapper[4893]: I0121 07:32:23.593803 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f7e0779-df8a-4102-b429-92be7b796d53-catalog-content\") pod \"certified-operators-4pcvm\" (UID: \"3f7e0779-df8a-4102-b429-92be7b796d53\") " pod="openshift-marketplace/certified-operators-4pcvm" Jan 21 07:32:23 crc kubenswrapper[4893]: I0121 07:32:23.594486 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f7e0779-df8a-4102-b429-92be7b796d53-catalog-content\") pod \"certified-operators-4pcvm\" (UID: \"3f7e0779-df8a-4102-b429-92be7b796d53\") " pod="openshift-marketplace/certified-operators-4pcvm" Jan 21 07:32:23 crc kubenswrapper[4893]: I0121 07:32:23.594537 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f7e0779-df8a-4102-b429-92be7b796d53-utilities\") pod \"certified-operators-4pcvm\" (UID: \"3f7e0779-df8a-4102-b429-92be7b796d53\") " pod="openshift-marketplace/certified-operators-4pcvm" Jan 21 07:32:23 crc kubenswrapper[4893]: I0121 07:32:23.614135 4893 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-92vk5\" (UniqueName: \"kubernetes.io/projected/3f7e0779-df8a-4102-b429-92be7b796d53-kube-api-access-92vk5\") pod \"certified-operators-4pcvm\" (UID: \"3f7e0779-df8a-4102-b429-92be7b796d53\") " pod="openshift-marketplace/certified-operators-4pcvm" Jan 21 07:32:23 crc kubenswrapper[4893]: I0121 07:32:23.703696 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4pcvm" Jan 21 07:32:24 crc kubenswrapper[4893]: I0121 07:32:24.190031 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4pcvm"] Jan 21 07:32:25 crc kubenswrapper[4893]: I0121 07:32:25.179409 4893 generic.go:334] "Generic (PLEG): container finished" podID="3f7e0779-df8a-4102-b429-92be7b796d53" containerID="0eb07b763bcc367a8630cd69254e5c154401379757d02863940bf7f10caf0072" exitCode=0 Jan 21 07:32:25 crc kubenswrapper[4893]: I0121 07:32:25.179525 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4pcvm" event={"ID":"3f7e0779-df8a-4102-b429-92be7b796d53","Type":"ContainerDied","Data":"0eb07b763bcc367a8630cd69254e5c154401379757d02863940bf7f10caf0072"} Jan 21 07:32:25 crc kubenswrapper[4893]: I0121 07:32:25.179884 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4pcvm" event={"ID":"3f7e0779-df8a-4102-b429-92be7b796d53","Type":"ContainerStarted","Data":"6647a4a90b3992de98010d1d46600873b3681a04e8399373aacbd0e23532e65d"} Jan 21 07:32:27 crc kubenswrapper[4893]: I0121 07:32:27.203133 4893 generic.go:334] "Generic (PLEG): container finished" podID="3f7e0779-df8a-4102-b429-92be7b796d53" containerID="a9dfd4920131dffe1c194c1231ee7db0bb77965a58973b213cb0cb91682a1905" exitCode=0 Jan 21 07:32:27 crc kubenswrapper[4893]: I0121 07:32:27.203210 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4pcvm" event={"ID":"3f7e0779-df8a-4102-b429-92be7b796d53","Type":"ContainerDied","Data":"a9dfd4920131dffe1c194c1231ee7db0bb77965a58973b213cb0cb91682a1905"} Jan 21 07:32:28 crc kubenswrapper[4893]: I0121 07:32:28.218563 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4pcvm" event={"ID":"3f7e0779-df8a-4102-b429-92be7b796d53","Type":"ContainerStarted","Data":"065da3f659b074c78a6032bf7cc66c94ae14f721d50cff806dabb56876ee9be0"} Jan 21 07:32:33 crc kubenswrapper[4893]: I0121 07:32:33.704606 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4pcvm" Jan 21 07:32:33 crc kubenswrapper[4893]: I0121 07:32:33.705240 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4pcvm" Jan 21 07:32:33 crc kubenswrapper[4893]: I0121 07:32:33.752436 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4pcvm" Jan 21 07:32:33 crc kubenswrapper[4893]: I0121 07:32:33.775202 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4pcvm" podStartSLOduration=8.323146396 podStartE2EDuration="10.775181704s" podCreationTimestamp="2026-01-21 07:32:23 +0000 UTC" firstStartedPulling="2026-01-21 07:32:25.182078248 +0000 UTC m=+2286.412424140" lastFinishedPulling="2026-01-21 07:32:27.634113546 +0000 UTC m=+2288.864459448" observedRunningTime="2026-01-21 
07:32:28.264868206 +0000 UTC m=+2289.495214148" watchObservedRunningTime="2026-01-21 07:32:33.775181704 +0000 UTC m=+2295.005527606" Jan 21 07:32:34 crc kubenswrapper[4893]: I0121 07:32:34.418170 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4pcvm" Jan 21 07:32:34 crc kubenswrapper[4893]: I0121 07:32:34.466802 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4pcvm"] Jan 21 07:32:36 crc kubenswrapper[4893]: I0121 07:32:36.295591 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4pcvm" podUID="3f7e0779-df8a-4102-b429-92be7b796d53" containerName="registry-server" containerID="cri-o://065da3f659b074c78a6032bf7cc66c94ae14f721d50cff806dabb56876ee9be0" gracePeriod=2 Jan 21 07:32:36 crc kubenswrapper[4893]: I0121 07:32:36.739956 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4pcvm" Jan 21 07:32:36 crc kubenswrapper[4893]: I0121 07:32:36.865016 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f7e0779-df8a-4102-b429-92be7b796d53-utilities\") pod \"3f7e0779-df8a-4102-b429-92be7b796d53\" (UID: \"3f7e0779-df8a-4102-b429-92be7b796d53\") " Jan 21 07:32:36 crc kubenswrapper[4893]: I0121 07:32:36.865121 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92vk5\" (UniqueName: \"kubernetes.io/projected/3f7e0779-df8a-4102-b429-92be7b796d53-kube-api-access-92vk5\") pod \"3f7e0779-df8a-4102-b429-92be7b796d53\" (UID: \"3f7e0779-df8a-4102-b429-92be7b796d53\") " Jan 21 07:32:36 crc kubenswrapper[4893]: I0121 07:32:36.865310 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f7e0779-df8a-4102-b429-92be7b796d53-catalog-content\") pod \"3f7e0779-df8a-4102-b429-92be7b796d53\" (UID: \"3f7e0779-df8a-4102-b429-92be7b796d53\") " Jan 21 07:32:36 crc kubenswrapper[4893]: I0121 07:32:36.866382 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f7e0779-df8a-4102-b429-92be7b796d53-utilities" (OuterVolumeSpecName: "utilities") pod "3f7e0779-df8a-4102-b429-92be7b796d53" (UID: "3f7e0779-df8a-4102-b429-92be7b796d53"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:32:36 crc kubenswrapper[4893]: I0121 07:32:36.873944 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f7e0779-df8a-4102-b429-92be7b796d53-kube-api-access-92vk5" (OuterVolumeSpecName: "kube-api-access-92vk5") pod "3f7e0779-df8a-4102-b429-92be7b796d53" (UID: "3f7e0779-df8a-4102-b429-92be7b796d53"). InnerVolumeSpecName "kube-api-access-92vk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:32:36 crc kubenswrapper[4893]: I0121 07:32:36.927953 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f7e0779-df8a-4102-b429-92be7b796d53-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f7e0779-df8a-4102-b429-92be7b796d53" (UID: "3f7e0779-df8a-4102-b429-92be7b796d53"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:32:36 crc kubenswrapper[4893]: I0121 07:32:36.966838 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92vk5\" (UniqueName: \"kubernetes.io/projected/3f7e0779-df8a-4102-b429-92be7b796d53-kube-api-access-92vk5\") on node \"crc\" DevicePath \"\"" Jan 21 07:32:36 crc kubenswrapper[4893]: I0121 07:32:36.966881 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f7e0779-df8a-4102-b429-92be7b796d53-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 07:32:36 crc kubenswrapper[4893]: I0121 07:32:36.966894 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f7e0779-df8a-4102-b429-92be7b796d53-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 07:32:37 crc kubenswrapper[4893]: I0121 07:32:37.316530 4893 generic.go:334] "Generic (PLEG): container finished" podID="3f7e0779-df8a-4102-b429-92be7b796d53" containerID="065da3f659b074c78a6032bf7cc66c94ae14f721d50cff806dabb56876ee9be0" exitCode=0 Jan 21 07:32:37 crc kubenswrapper[4893]: I0121 07:32:37.316600 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4pcvm" Jan 21 07:32:37 crc kubenswrapper[4893]: I0121 07:32:37.316622 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4pcvm" event={"ID":"3f7e0779-df8a-4102-b429-92be7b796d53","Type":"ContainerDied","Data":"065da3f659b074c78a6032bf7cc66c94ae14f721d50cff806dabb56876ee9be0"} Jan 21 07:32:37 crc kubenswrapper[4893]: I0121 07:32:37.317082 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4pcvm" event={"ID":"3f7e0779-df8a-4102-b429-92be7b796d53","Type":"ContainerDied","Data":"6647a4a90b3992de98010d1d46600873b3681a04e8399373aacbd0e23532e65d"} Jan 21 07:32:37 crc kubenswrapper[4893]: I0121 07:32:37.317106 4893 scope.go:117] "RemoveContainer" containerID="065da3f659b074c78a6032bf7cc66c94ae14f721d50cff806dabb56876ee9be0" Jan 21 07:32:37 crc kubenswrapper[4893]: I0121 07:32:37.341693 4893 scope.go:117] "RemoveContainer" containerID="a9dfd4920131dffe1c194c1231ee7db0bb77965a58973b213cb0cb91682a1905" Jan 21 07:32:37 crc kubenswrapper[4893]: I0121 07:32:37.364228 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4pcvm"] Jan 21 07:32:37 crc kubenswrapper[4893]: I0121 07:32:37.367886 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4pcvm"] Jan 21 07:32:37 crc kubenswrapper[4893]: I0121 07:32:37.371649 4893 scope.go:117] "RemoveContainer" containerID="0eb07b763bcc367a8630cd69254e5c154401379757d02863940bf7f10caf0072" Jan 21 07:32:37 crc kubenswrapper[4893]: I0121 07:32:37.394726 4893 scope.go:117] "RemoveContainer" containerID="065da3f659b074c78a6032bf7cc66c94ae14f721d50cff806dabb56876ee9be0" Jan 21 07:32:37 crc kubenswrapper[4893]: E0121 07:32:37.395552 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"065da3f659b074c78a6032bf7cc66c94ae14f721d50cff806dabb56876ee9be0\": container with ID starting with 065da3f659b074c78a6032bf7cc66c94ae14f721d50cff806dabb56876ee9be0 not found: ID does not exist" containerID="065da3f659b074c78a6032bf7cc66c94ae14f721d50cff806dabb56876ee9be0" Jan 21 07:32:37 crc kubenswrapper[4893]: I0121 07:32:37.395629 
4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"065da3f659b074c78a6032bf7cc66c94ae14f721d50cff806dabb56876ee9be0"} err="failed to get container status \"065da3f659b074c78a6032bf7cc66c94ae14f721d50cff806dabb56876ee9be0\": rpc error: code = NotFound desc = could not find container \"065da3f659b074c78a6032bf7cc66c94ae14f721d50cff806dabb56876ee9be0\": container with ID starting with 065da3f659b074c78a6032bf7cc66c94ae14f721d50cff806dabb56876ee9be0 not found: ID does not exist" Jan 21 07:32:37 crc kubenswrapper[4893]: I0121 07:32:37.395685 4893 scope.go:117] "RemoveContainer" containerID="a9dfd4920131dffe1c194c1231ee7db0bb77965a58973b213cb0cb91682a1905" Jan 21 07:32:37 crc kubenswrapper[4893]: E0121 07:32:37.396122 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9dfd4920131dffe1c194c1231ee7db0bb77965a58973b213cb0cb91682a1905\": container with ID starting with a9dfd4920131dffe1c194c1231ee7db0bb77965a58973b213cb0cb91682a1905 not found: ID does not exist" containerID="a9dfd4920131dffe1c194c1231ee7db0bb77965a58973b213cb0cb91682a1905" Jan 21 07:32:37 crc kubenswrapper[4893]: I0121 07:32:37.396183 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9dfd4920131dffe1c194c1231ee7db0bb77965a58973b213cb0cb91682a1905"} err="failed to get container status \"a9dfd4920131dffe1c194c1231ee7db0bb77965a58973b213cb0cb91682a1905\": rpc error: code = NotFound desc = could not find container \"a9dfd4920131dffe1c194c1231ee7db0bb77965a58973b213cb0cb91682a1905\": container with ID starting with a9dfd4920131dffe1c194c1231ee7db0bb77965a58973b213cb0cb91682a1905 not found: ID does not exist" Jan 21 07:32:37 crc kubenswrapper[4893]: I0121 07:32:37.396224 4893 scope.go:117] "RemoveContainer" containerID="0eb07b763bcc367a8630cd69254e5c154401379757d02863940bf7f10caf0072" Jan 21 07:32:37 crc kubenswrapper[4893]: E0121 07:32:37.396623 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0eb07b763bcc367a8630cd69254e5c154401379757d02863940bf7f10caf0072\": container with ID starting with 0eb07b763bcc367a8630cd69254e5c154401379757d02863940bf7f10caf0072 not found: ID does not exist" containerID="0eb07b763bcc367a8630cd69254e5c154401379757d02863940bf7f10caf0072" Jan 21 07:32:37 crc kubenswrapper[4893]: I0121 07:32:37.396692 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0eb07b763bcc367a8630cd69254e5c154401379757d02863940bf7f10caf0072"} err="failed to get container status \"0eb07b763bcc367a8630cd69254e5c154401379757d02863940bf7f10caf0072\": rpc error: code = NotFound desc = could not find container \"0eb07b763bcc367a8630cd69254e5c154401379757d02863940bf7f10caf0072\": container with ID starting with 0eb07b763bcc367a8630cd69254e5c154401379757d02863940bf7f10caf0072 not found: ID does not exist" Jan 21 07:32:37 crc kubenswrapper[4893]: I0121 07:32:37.782172 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f7e0779-df8a-4102-b429-92be7b796d53" path="/var/lib/kubelet/pods/3f7e0779-df8a-4102-b429-92be7b796d53/volumes" Jan 21 07:32:58 crc kubenswrapper[4893]: I0121 07:32:58.657127 4893 patch_prober.go:28] interesting pod/machine-config-daemon-hg78p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 07:32:58 crc kubenswrapper[4893]: I0121 07:32:58.657863 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 07:33:28 crc kubenswrapper[4893]: I0121 07:33:28.657697 4893 patch_prober.go:28] interesting pod/machine-config-daemon-hg78p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 07:33:28 crc kubenswrapper[4893]: I0121 07:33:28.658368 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 07:33:58 crc kubenswrapper[4893]: I0121 07:33:58.656304 4893 patch_prober.go:28] interesting pod/machine-config-daemon-hg78p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 07:33:58 crc kubenswrapper[4893]: I0121 07:33:58.657016 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 07:33:58 crc kubenswrapper[4893]: I0121 07:33:58.657085 4893 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" Jan 21 07:33:58 crc kubenswrapper[4893]: I0121 07:33:58.657815 4893 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3f32a27c3e69b19ffbfae3599d698be427f1aa295f9c95dd19a4b2ab997261ed"} pod="openshift-machine-config-operator/machine-config-daemon-hg78p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 07:33:58 crc kubenswrapper[4893]: I0121 07:33:58.657896 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" containerID="cri-o://3f32a27c3e69b19ffbfae3599d698be427f1aa295f9c95dd19a4b2ab997261ed" gracePeriod=600 Jan 21 07:33:58 crc kubenswrapper[4893]: E0121 07:33:58.810730 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 07:33:58 crc kubenswrapper[4893]: I0121 07:33:58.825963 4893 
generic.go:334] "Generic (PLEG): container finished" podID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerID="3f32a27c3e69b19ffbfae3599d698be427f1aa295f9c95dd19a4b2ab997261ed" exitCode=0 Jan 21 07:33:58 crc kubenswrapper[4893]: I0121 07:33:58.826006 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" event={"ID":"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a","Type":"ContainerDied","Data":"3f32a27c3e69b19ffbfae3599d698be427f1aa295f9c95dd19a4b2ab997261ed"} Jan 21 07:33:58 crc kubenswrapper[4893]: I0121 07:33:58.826057 4893 scope.go:117] "RemoveContainer" containerID="c2d89987c6c9a018b5c0e038101aa69c24ba2324a0e108803064a8a93468c6be" Jan 21 07:33:58 crc kubenswrapper[4893]: I0121 07:33:58.826700 4893 scope.go:117] "RemoveContainer" containerID="3f32a27c3e69b19ffbfae3599d698be427f1aa295f9c95dd19a4b2ab997261ed" Jan 21 07:33:58 crc kubenswrapper[4893]: E0121 07:33:58.827022 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 07:34:11 crc kubenswrapper[4893]: I0121 07:34:11.580967 4893 scope.go:117] "RemoveContainer" containerID="3f32a27c3e69b19ffbfae3599d698be427f1aa295f9c95dd19a4b2ab997261ed" Jan 21 07:34:11 crc kubenswrapper[4893]: E0121 07:34:11.582126 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 07:34:22 crc kubenswrapper[4893]: I0121 07:34:22.580607 4893 scope.go:117] "RemoveContainer" containerID="3f32a27c3e69b19ffbfae3599d698be427f1aa295f9c95dd19a4b2ab997261ed" Jan 21 07:34:22 crc kubenswrapper[4893]: E0121 07:34:22.581336 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 07:34:34 crc kubenswrapper[4893]: I0121 07:34:34.580790 4893 scope.go:117] "RemoveContainer" containerID="3f32a27c3e69b19ffbfae3599d698be427f1aa295f9c95dd19a4b2ab997261ed" Jan 21 07:34:34 crc kubenswrapper[4893]: E0121 07:34:34.581906 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 07:34:47 crc kubenswrapper[4893]: I0121 07:34:47.582261 4893 scope.go:117] "RemoveContainer" 
containerID="3f32a27c3e69b19ffbfae3599d698be427f1aa295f9c95dd19a4b2ab997261ed" Jan 21 07:34:47 crc kubenswrapper[4893]: E0121 07:34:47.583333 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 07:35:01 crc kubenswrapper[4893]: I0121 07:35:01.581569 4893 scope.go:117] "RemoveContainer" containerID="3f32a27c3e69b19ffbfae3599d698be427f1aa295f9c95dd19a4b2ab997261ed" Jan 21 07:35:01 crc kubenswrapper[4893]: E0121 07:35:01.582717 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 07:35:14 crc kubenswrapper[4893]: I0121 07:35:14.604245 4893 scope.go:117] "RemoveContainer" containerID="3f32a27c3e69b19ffbfae3599d698be427f1aa295f9c95dd19a4b2ab997261ed" Jan 21 07:35:14 crc kubenswrapper[4893]: E0121 07:35:14.605596 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 07:35:22 crc kubenswrapper[4893]: I0121 07:35:22.583095 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qkrp4"] Jan 21 07:35:22 crc kubenswrapper[4893]: E0121 07:35:22.584299 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f7e0779-df8a-4102-b429-92be7b796d53" containerName="extract-content" Jan 21 07:35:22 crc kubenswrapper[4893]: I0121 07:35:22.584329 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f7e0779-df8a-4102-b429-92be7b796d53" containerName="extract-content" Jan 21 07:35:22 crc kubenswrapper[4893]: E0121 07:35:22.584387 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f7e0779-df8a-4102-b429-92be7b796d53" containerName="extract-utilities" Jan 21 07:35:22 crc kubenswrapper[4893]: I0121 07:35:22.584402 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f7e0779-df8a-4102-b429-92be7b796d53" containerName="extract-utilities" Jan 21 07:35:22 crc kubenswrapper[4893]: E0121 07:35:22.584428 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f7e0779-df8a-4102-b429-92be7b796d53" containerName="registry-server" Jan 21 07:35:22 crc kubenswrapper[4893]: I0121 07:35:22.584441 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f7e0779-df8a-4102-b429-92be7b796d53" containerName="registry-server" Jan 21 07:35:22 crc kubenswrapper[4893]: I0121 07:35:22.584810 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f7e0779-df8a-4102-b429-92be7b796d53" containerName="registry-server" Jan 21 07:35:22 crc kubenswrapper[4893]: I0121 07:35:22.587082 4893 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qkrp4" Jan 21 07:35:22 crc kubenswrapper[4893]: I0121 07:35:22.607747 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qkrp4"] Jan 21 07:35:22 crc kubenswrapper[4893]: I0121 07:35:22.657357 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9vpx\" (UniqueName: \"kubernetes.io/projected/02b7cb4e-f74f-4ba9-b222-ed7f85916a01-kube-api-access-l9vpx\") pod \"community-operators-qkrp4\" (UID: \"02b7cb4e-f74f-4ba9-b222-ed7f85916a01\") " pod="openshift-marketplace/community-operators-qkrp4" Jan 21 07:35:22 crc kubenswrapper[4893]: I0121 07:35:22.657535 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02b7cb4e-f74f-4ba9-b222-ed7f85916a01-catalog-content\") pod \"community-operators-qkrp4\" (UID: \"02b7cb4e-f74f-4ba9-b222-ed7f85916a01\") " pod="openshift-marketplace/community-operators-qkrp4" Jan 21 07:35:22 crc kubenswrapper[4893]: I0121 07:35:22.657635 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02b7cb4e-f74f-4ba9-b222-ed7f85916a01-utilities\") pod \"community-operators-qkrp4\" (UID: \"02b7cb4e-f74f-4ba9-b222-ed7f85916a01\") " pod="openshift-marketplace/community-operators-qkrp4" Jan 21 07:35:22 crc kubenswrapper[4893]: I0121 07:35:22.759486 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9vpx\" (UniqueName: \"kubernetes.io/projected/02b7cb4e-f74f-4ba9-b222-ed7f85916a01-kube-api-access-l9vpx\") pod \"community-operators-qkrp4\" (UID: \"02b7cb4e-f74f-4ba9-b222-ed7f85916a01\") " pod="openshift-marketplace/community-operators-qkrp4" Jan 21 07:35:22 crc kubenswrapper[4893]: I0121 07:35:22.759725 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02b7cb4e-f74f-4ba9-b222-ed7f85916a01-catalog-content\") pod \"community-operators-qkrp4\" (UID: \"02b7cb4e-f74f-4ba9-b222-ed7f85916a01\") " pod="openshift-marketplace/community-operators-qkrp4" Jan 21 07:35:22 crc kubenswrapper[4893]: I0121 07:35:22.759790 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02b7cb4e-f74f-4ba9-b222-ed7f85916a01-utilities\") pod \"community-operators-qkrp4\" (UID: \"02b7cb4e-f74f-4ba9-b222-ed7f85916a01\") " pod="openshift-marketplace/community-operators-qkrp4" Jan 21 07:35:22 crc kubenswrapper[4893]: I0121 07:35:22.760404 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02b7cb4e-f74f-4ba9-b222-ed7f85916a01-catalog-content\") pod \"community-operators-qkrp4\" (UID: \"02b7cb4e-f74f-4ba9-b222-ed7f85916a01\") " pod="openshift-marketplace/community-operators-qkrp4" Jan 21 07:35:22 crc kubenswrapper[4893]: I0121 07:35:22.760622 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02b7cb4e-f74f-4ba9-b222-ed7f85916a01-utilities\") pod \"community-operators-qkrp4\" (UID: \"02b7cb4e-f74f-4ba9-b222-ed7f85916a01\") " pod="openshift-marketplace/community-operators-qkrp4" Jan 21 07:35:22 crc kubenswrapper[4893]: I0121 
07:35:22.796428 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9vpx\" (UniqueName: \"kubernetes.io/projected/02b7cb4e-f74f-4ba9-b222-ed7f85916a01-kube-api-access-l9vpx\") pod \"community-operators-qkrp4\" (UID: \"02b7cb4e-f74f-4ba9-b222-ed7f85916a01\") " pod="openshift-marketplace/community-operators-qkrp4" Jan 21 07:35:22 crc kubenswrapper[4893]: I0121 07:35:22.921697 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qkrp4" Jan 21 07:35:23 crc kubenswrapper[4893]: I0121 07:35:23.542330 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qkrp4"] Jan 21 07:35:24 crc kubenswrapper[4893]: I0121 07:35:24.053653 4893 generic.go:334] "Generic (PLEG): container finished" podID="02b7cb4e-f74f-4ba9-b222-ed7f85916a01" containerID="eb6826653ef9b00d335a539ac55babf9c1cf9170fab4497ba82a4eec0b84470d" exitCode=0 Jan 21 07:35:24 crc kubenswrapper[4893]: I0121 07:35:24.053761 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qkrp4" event={"ID":"02b7cb4e-f74f-4ba9-b222-ed7f85916a01","Type":"ContainerDied","Data":"eb6826653ef9b00d335a539ac55babf9c1cf9170fab4497ba82a4eec0b84470d"} Jan 21 07:35:24 crc kubenswrapper[4893]: I0121 07:35:24.054136 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qkrp4" event={"ID":"02b7cb4e-f74f-4ba9-b222-ed7f85916a01","Type":"ContainerStarted","Data":"798ca6a02c277403c738103c4582fc80413ab3cdcefb58aaffc05a7613b36d20"} Jan 21 07:35:25 crc kubenswrapper[4893]: I0121 07:35:25.065171 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qkrp4" event={"ID":"02b7cb4e-f74f-4ba9-b222-ed7f85916a01","Type":"ContainerStarted","Data":"924ac7599bc58fb85bd154781471dcca1b0b53b443f194e9b09b74437a952549"} Jan 21 07:35:25 crc kubenswrapper[4893]: I0121 07:35:25.168414 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-srl75"] Jan 21 07:35:25 crc kubenswrapper[4893]: I0121 07:35:25.170428 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-srl75" Jan 21 07:35:25 crc kubenswrapper[4893]: I0121 07:35:25.180703 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-srl75"] Jan 21 07:35:25 crc kubenswrapper[4893]: I0121 07:35:25.298384 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s87g4\" (UniqueName: \"kubernetes.io/projected/f5a8deac-81b3-4ede-9a23-e9f4b6fa891f-kube-api-access-s87g4\") pod \"redhat-operators-srl75\" (UID: \"f5a8deac-81b3-4ede-9a23-e9f4b6fa891f\") " pod="openshift-marketplace/redhat-operators-srl75" Jan 21 07:35:25 crc kubenswrapper[4893]: I0121 07:35:25.298451 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5a8deac-81b3-4ede-9a23-e9f4b6fa891f-catalog-content\") pod \"redhat-operators-srl75\" (UID: \"f5a8deac-81b3-4ede-9a23-e9f4b6fa891f\") " pod="openshift-marketplace/redhat-operators-srl75" Jan 21 07:35:25 crc kubenswrapper[4893]: I0121 07:35:25.298602 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5a8deac-81b3-4ede-9a23-e9f4b6fa891f-utilities\") pod \"redhat-operators-srl75\" (UID: \"f5a8deac-81b3-4ede-9a23-e9f4b6fa891f\") " pod="openshift-marketplace/redhat-operators-srl75" Jan 21 07:35:25 crc kubenswrapper[4893]: I0121 07:35:25.399700 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s87g4\" (UniqueName: \"kubernetes.io/projected/f5a8deac-81b3-4ede-9a23-e9f4b6fa891f-kube-api-access-s87g4\") pod \"redhat-operators-srl75\" (UID: \"f5a8deac-81b3-4ede-9a23-e9f4b6fa891f\") " pod="openshift-marketplace/redhat-operators-srl75" Jan 21 07:35:25 crc kubenswrapper[4893]: I0121 07:35:25.399966 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5a8deac-81b3-4ede-9a23-e9f4b6fa891f-catalog-content\") pod \"redhat-operators-srl75\" (UID: \"f5a8deac-81b3-4ede-9a23-e9f4b6fa891f\") " pod="openshift-marketplace/redhat-operators-srl75" Jan 21 07:35:25 crc kubenswrapper[4893]: I0121 07:35:25.400107 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5a8deac-81b3-4ede-9a23-e9f4b6fa891f-utilities\") pod \"redhat-operators-srl75\" (UID: \"f5a8deac-81b3-4ede-9a23-e9f4b6fa891f\") " pod="openshift-marketplace/redhat-operators-srl75" Jan 21 07:35:25 crc kubenswrapper[4893]: I0121 07:35:25.400665 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5a8deac-81b3-4ede-9a23-e9f4b6fa891f-catalog-content\") pod \"redhat-operators-srl75\" (UID: \"f5a8deac-81b3-4ede-9a23-e9f4b6fa891f\") " pod="openshift-marketplace/redhat-operators-srl75" Jan 21 07:35:25 crc kubenswrapper[4893]: I0121 07:35:25.400691 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5a8deac-81b3-4ede-9a23-e9f4b6fa891f-utilities\") pod \"redhat-operators-srl75\" (UID: \"f5a8deac-81b3-4ede-9a23-e9f4b6fa891f\") " pod="openshift-marketplace/redhat-operators-srl75" Jan 21 07:35:25 crc kubenswrapper[4893]: I0121 07:35:25.423318 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-s87g4\" (UniqueName: \"kubernetes.io/projected/f5a8deac-81b3-4ede-9a23-e9f4b6fa891f-kube-api-access-s87g4\") pod \"redhat-operators-srl75\" (UID: \"f5a8deac-81b3-4ede-9a23-e9f4b6fa891f\") " pod="openshift-marketplace/redhat-operators-srl75" Jan 21 07:35:25 crc kubenswrapper[4893]: I0121 07:35:25.496016 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-srl75" Jan 21 07:35:26 crc kubenswrapper[4893]: I0121 07:35:26.081918 4893 generic.go:334] "Generic (PLEG): container finished" podID="02b7cb4e-f74f-4ba9-b222-ed7f85916a01" containerID="924ac7599bc58fb85bd154781471dcca1b0b53b443f194e9b09b74437a952549" exitCode=0 Jan 21 07:35:26 crc kubenswrapper[4893]: I0121 07:35:26.082011 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qkrp4" event={"ID":"02b7cb4e-f74f-4ba9-b222-ed7f85916a01","Type":"ContainerDied","Data":"924ac7599bc58fb85bd154781471dcca1b0b53b443f194e9b09b74437a952549"} Jan 21 07:35:26 crc kubenswrapper[4893]: I0121 07:35:26.114694 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-srl75"] Jan 21 07:35:26 crc kubenswrapper[4893]: I0121 07:35:26.580486 4893 scope.go:117] "RemoveContainer" containerID="3f32a27c3e69b19ffbfae3599d698be427f1aa295f9c95dd19a4b2ab997261ed" Jan 21 07:35:26 crc kubenswrapper[4893]: E0121 07:35:26.580907 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 07:35:27 crc kubenswrapper[4893]: I0121 07:35:27.116966 4893 generic.go:334] "Generic (PLEG): container finished" podID="f5a8deac-81b3-4ede-9a23-e9f4b6fa891f" containerID="54f98146387c10687c32f1282de1b4653275af931355959db7f588eaf824418c" exitCode=0 Jan 21 07:35:27 crc kubenswrapper[4893]: I0121 07:35:27.117070 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-srl75" event={"ID":"f5a8deac-81b3-4ede-9a23-e9f4b6fa891f","Type":"ContainerDied","Data":"54f98146387c10687c32f1282de1b4653275af931355959db7f588eaf824418c"} Jan 21 07:35:27 crc kubenswrapper[4893]: I0121 07:35:27.117247 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-srl75" event={"ID":"f5a8deac-81b3-4ede-9a23-e9f4b6fa891f","Type":"ContainerStarted","Data":"89dfdc7f6e0cf748521401e3a8cc4c35404f61ea08643d70ef074225b11cff18"} Jan 21 07:35:27 crc kubenswrapper[4893]: I0121 07:35:27.123610 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qkrp4" event={"ID":"02b7cb4e-f74f-4ba9-b222-ed7f85916a01","Type":"ContainerStarted","Data":"599c80f6fe3dc5093b56b1eb994e034e24d6021d271966bfea72d54af08c3a9e"} Jan 21 07:35:27 crc kubenswrapper[4893]: I0121 07:35:27.178841 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qkrp4" podStartSLOduration=2.664159532 podStartE2EDuration="5.178813151s" podCreationTimestamp="2026-01-21 07:35:22 +0000 UTC" firstStartedPulling="2026-01-21 07:35:24.056760771 +0000 UTC m=+2465.287106703" lastFinishedPulling="2026-01-21 07:35:26.57141441 +0000 UTC 
m=+2467.801760322" observedRunningTime="2026-01-21 07:35:27.17735726 +0000 UTC m=+2468.407703162" watchObservedRunningTime="2026-01-21 07:35:27.178813151 +0000 UTC m=+2468.409159043" Jan 21 07:35:29 crc kubenswrapper[4893]: I0121 07:35:29.221964 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-srl75" event={"ID":"f5a8deac-81b3-4ede-9a23-e9f4b6fa891f","Type":"ContainerStarted","Data":"52222948a05f29c9a6288ba1a1ca733d8638536053650f916a60d5d139e9163d"} Jan 21 07:35:30 crc kubenswrapper[4893]: I0121 07:35:30.235137 4893 generic.go:334] "Generic (PLEG): container finished" podID="f5a8deac-81b3-4ede-9a23-e9f4b6fa891f" containerID="52222948a05f29c9a6288ba1a1ca733d8638536053650f916a60d5d139e9163d" exitCode=0 Jan 21 07:35:30 crc kubenswrapper[4893]: I0121 07:35:30.235200 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-srl75" event={"ID":"f5a8deac-81b3-4ede-9a23-e9f4b6fa891f","Type":"ContainerDied","Data":"52222948a05f29c9a6288ba1a1ca733d8638536053650f916a60d5d139e9163d"} Jan 21 07:35:31 crc kubenswrapper[4893]: I0121 07:35:31.255217 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-srl75" event={"ID":"f5a8deac-81b3-4ede-9a23-e9f4b6fa891f","Type":"ContainerStarted","Data":"c41e641b145c19ea27b5f42f8bd385a6013547caa43257e1cb148e37e09a527c"} Jan 21 07:35:31 crc kubenswrapper[4893]: I0121 07:35:31.281894 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-srl75" podStartSLOduration=2.478633636 podStartE2EDuration="6.281876874s" podCreationTimestamp="2026-01-21 07:35:25 +0000 UTC" firstStartedPulling="2026-01-21 07:35:27.121221569 +0000 UTC m=+2468.351567471" lastFinishedPulling="2026-01-21 07:35:30.924464767 +0000 UTC m=+2472.154810709" observedRunningTime="2026-01-21 07:35:31.279806485 +0000 UTC m=+2472.510152407" watchObservedRunningTime="2026-01-21 07:35:31.281876874 +0000 UTC m=+2472.512222776" Jan 21 07:35:32 crc kubenswrapper[4893]: I0121 07:35:32.922029 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qkrp4" Jan 21 07:35:32 crc kubenswrapper[4893]: I0121 07:35:32.922348 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qkrp4" Jan 21 07:35:32 crc kubenswrapper[4893]: I0121 07:35:32.972254 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qkrp4" Jan 21 07:35:33 crc kubenswrapper[4893]: I0121 07:35:33.343309 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qkrp4" Jan 21 07:35:34 crc kubenswrapper[4893]: I0121 07:35:34.152933 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qkrp4"] Jan 21 07:35:35 crc kubenswrapper[4893]: I0121 07:35:35.291045 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qkrp4" podUID="02b7cb4e-f74f-4ba9-b222-ed7f85916a01" containerName="registry-server" containerID="cri-o://599c80f6fe3dc5093b56b1eb994e034e24d6021d271966bfea72d54af08c3a9e" gracePeriod=2 Jan 21 07:35:35 crc kubenswrapper[4893]: I0121 07:35:35.497109 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-srl75" Jan 21 07:35:35 crc 
kubenswrapper[4893]: I0121 07:35:35.497549 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-srl75" Jan 21 07:35:36 crc kubenswrapper[4893]: I0121 07:35:36.301575 4893 generic.go:334] "Generic (PLEG): container finished" podID="02b7cb4e-f74f-4ba9-b222-ed7f85916a01" containerID="599c80f6fe3dc5093b56b1eb994e034e24d6021d271966bfea72d54af08c3a9e" exitCode=0 Jan 21 07:35:36 crc kubenswrapper[4893]: I0121 07:35:36.301813 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qkrp4" event={"ID":"02b7cb4e-f74f-4ba9-b222-ed7f85916a01","Type":"ContainerDied","Data":"599c80f6fe3dc5093b56b1eb994e034e24d6021d271966bfea72d54af08c3a9e"} Jan 21 07:35:36 crc kubenswrapper[4893]: I0121 07:35:36.302264 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qkrp4" event={"ID":"02b7cb4e-f74f-4ba9-b222-ed7f85916a01","Type":"ContainerDied","Data":"798ca6a02c277403c738103c4582fc80413ab3cdcefb58aaffc05a7613b36d20"} Jan 21 07:35:36 crc kubenswrapper[4893]: I0121 07:35:36.302310 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="798ca6a02c277403c738103c4582fc80413ab3cdcefb58aaffc05a7613b36d20" Jan 21 07:35:36 crc kubenswrapper[4893]: I0121 07:35:36.306758 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qkrp4" Jan 21 07:35:36 crc kubenswrapper[4893]: I0121 07:35:36.444279 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02b7cb4e-f74f-4ba9-b222-ed7f85916a01-catalog-content\") pod \"02b7cb4e-f74f-4ba9-b222-ed7f85916a01\" (UID: \"02b7cb4e-f74f-4ba9-b222-ed7f85916a01\") " Jan 21 07:35:36 crc kubenswrapper[4893]: I0121 07:35:36.444424 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02b7cb4e-f74f-4ba9-b222-ed7f85916a01-utilities\") pod \"02b7cb4e-f74f-4ba9-b222-ed7f85916a01\" (UID: \"02b7cb4e-f74f-4ba9-b222-ed7f85916a01\") " Jan 21 07:35:36 crc kubenswrapper[4893]: I0121 07:35:36.444553 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9vpx\" (UniqueName: \"kubernetes.io/projected/02b7cb4e-f74f-4ba9-b222-ed7f85916a01-kube-api-access-l9vpx\") pod \"02b7cb4e-f74f-4ba9-b222-ed7f85916a01\" (UID: \"02b7cb4e-f74f-4ba9-b222-ed7f85916a01\") " Jan 21 07:35:36 crc kubenswrapper[4893]: I0121 07:35:36.445465 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02b7cb4e-f74f-4ba9-b222-ed7f85916a01-utilities" (OuterVolumeSpecName: "utilities") pod "02b7cb4e-f74f-4ba9-b222-ed7f85916a01" (UID: "02b7cb4e-f74f-4ba9-b222-ed7f85916a01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:35:36 crc kubenswrapper[4893]: I0121 07:35:36.461855 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02b7cb4e-f74f-4ba9-b222-ed7f85916a01-kube-api-access-l9vpx" (OuterVolumeSpecName: "kube-api-access-l9vpx") pod "02b7cb4e-f74f-4ba9-b222-ed7f85916a01" (UID: "02b7cb4e-f74f-4ba9-b222-ed7f85916a01"). InnerVolumeSpecName "kube-api-access-l9vpx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:35:36 crc kubenswrapper[4893]: I0121 07:35:36.522032 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02b7cb4e-f74f-4ba9-b222-ed7f85916a01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "02b7cb4e-f74f-4ba9-b222-ed7f85916a01" (UID: "02b7cb4e-f74f-4ba9-b222-ed7f85916a01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:35:36 crc kubenswrapper[4893]: I0121 07:35:36.546813 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9vpx\" (UniqueName: \"kubernetes.io/projected/02b7cb4e-f74f-4ba9-b222-ed7f85916a01-kube-api-access-l9vpx\") on node \"crc\" DevicePath \"\"" Jan 21 07:35:36 crc kubenswrapper[4893]: I0121 07:35:36.547200 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02b7cb4e-f74f-4ba9-b222-ed7f85916a01-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 07:35:36 crc kubenswrapper[4893]: I0121 07:35:36.547246 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02b7cb4e-f74f-4ba9-b222-ed7f85916a01-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 07:35:36 crc kubenswrapper[4893]: I0121 07:35:36.562246 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-srl75" podUID="f5a8deac-81b3-4ede-9a23-e9f4b6fa891f" containerName="registry-server" probeResult="failure" output=< Jan 21 07:35:36 crc kubenswrapper[4893]: timeout: failed to connect service ":50051" within 1s Jan 21 07:35:36 crc kubenswrapper[4893]: > Jan 21 07:35:37 crc kubenswrapper[4893]: I0121 07:35:37.311646 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qkrp4" Jan 21 07:35:37 crc kubenswrapper[4893]: I0121 07:35:37.345538 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qkrp4"] Jan 21 07:35:37 crc kubenswrapper[4893]: I0121 07:35:37.351934 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qkrp4"] Jan 21 07:35:37 crc kubenswrapper[4893]: I0121 07:35:37.607465 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02b7cb4e-f74f-4ba9-b222-ed7f85916a01" path="/var/lib/kubelet/pods/02b7cb4e-f74f-4ba9-b222-ed7f85916a01/volumes" Jan 21 07:35:41 crc kubenswrapper[4893]: I0121 07:35:41.581347 4893 scope.go:117] "RemoveContainer" containerID="3f32a27c3e69b19ffbfae3599d698be427f1aa295f9c95dd19a4b2ab997261ed" Jan 21 07:35:41 crc kubenswrapper[4893]: E0121 07:35:41.582226 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 07:35:45 crc kubenswrapper[4893]: I0121 07:35:45.578346 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-srl75" Jan 21 07:35:45 crc kubenswrapper[4893]: I0121 07:35:45.655604 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-srl75" Jan 21 07:35:45 crc kubenswrapper[4893]: I0121 07:35:45.839288 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-srl75"] Jan 21 07:35:47 crc kubenswrapper[4893]: I0121 07:35:47.397586 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-srl75" podUID="f5a8deac-81b3-4ede-9a23-e9f4b6fa891f" containerName="registry-server" containerID="cri-o://c41e641b145c19ea27b5f42f8bd385a6013547caa43257e1cb148e37e09a527c" gracePeriod=2 Jan 21 07:35:47 crc kubenswrapper[4893]: I0121 07:35:47.892566 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-srl75" Jan 21 07:35:47 crc kubenswrapper[4893]: I0121 07:35:47.917891 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5a8deac-81b3-4ede-9a23-e9f4b6fa891f-catalog-content\") pod \"f5a8deac-81b3-4ede-9a23-e9f4b6fa891f\" (UID: \"f5a8deac-81b3-4ede-9a23-e9f4b6fa891f\") " Jan 21 07:35:47 crc kubenswrapper[4893]: I0121 07:35:47.917964 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s87g4\" (UniqueName: \"kubernetes.io/projected/f5a8deac-81b3-4ede-9a23-e9f4b6fa891f-kube-api-access-s87g4\") pod \"f5a8deac-81b3-4ede-9a23-e9f4b6fa891f\" (UID: \"f5a8deac-81b3-4ede-9a23-e9f4b6fa891f\") " Jan 21 07:35:47 crc kubenswrapper[4893]: I0121 07:35:47.918001 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5a8deac-81b3-4ede-9a23-e9f4b6fa891f-utilities\") pod \"f5a8deac-81b3-4ede-9a23-e9f4b6fa891f\" (UID: \"f5a8deac-81b3-4ede-9a23-e9f4b6fa891f\") " Jan 21 07:35:47 crc kubenswrapper[4893]: I0121 07:35:47.918926 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5a8deac-81b3-4ede-9a23-e9f4b6fa891f-utilities" (OuterVolumeSpecName: "utilities") pod "f5a8deac-81b3-4ede-9a23-e9f4b6fa891f" (UID: "f5a8deac-81b3-4ede-9a23-e9f4b6fa891f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:35:47 crc kubenswrapper[4893]: I0121 07:35:47.924354 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5a8deac-81b3-4ede-9a23-e9f4b6fa891f-kube-api-access-s87g4" (OuterVolumeSpecName: "kube-api-access-s87g4") pod "f5a8deac-81b3-4ede-9a23-e9f4b6fa891f" (UID: "f5a8deac-81b3-4ede-9a23-e9f4b6fa891f"). InnerVolumeSpecName "kube-api-access-s87g4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:35:48 crc kubenswrapper[4893]: I0121 07:35:48.019720 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s87g4\" (UniqueName: \"kubernetes.io/projected/f5a8deac-81b3-4ede-9a23-e9f4b6fa891f-kube-api-access-s87g4\") on node \"crc\" DevicePath \"\"" Jan 21 07:35:48 crc kubenswrapper[4893]: I0121 07:35:48.019763 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5a8deac-81b3-4ede-9a23-e9f4b6fa891f-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 07:35:48 crc kubenswrapper[4893]: I0121 07:35:48.096492 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5a8deac-81b3-4ede-9a23-e9f4b6fa891f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f5a8deac-81b3-4ede-9a23-e9f4b6fa891f" (UID: "f5a8deac-81b3-4ede-9a23-e9f4b6fa891f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:35:48 crc kubenswrapper[4893]: I0121 07:35:48.121970 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5a8deac-81b3-4ede-9a23-e9f4b6fa891f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 07:35:48 crc kubenswrapper[4893]: I0121 07:35:48.409479 4893 generic.go:334] "Generic (PLEG): container finished" podID="f5a8deac-81b3-4ede-9a23-e9f4b6fa891f" containerID="c41e641b145c19ea27b5f42f8bd385a6013547caa43257e1cb148e37e09a527c" exitCode=0 Jan 21 07:35:48 crc kubenswrapper[4893]: I0121 07:35:48.409523 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-srl75" event={"ID":"f5a8deac-81b3-4ede-9a23-e9f4b6fa891f","Type":"ContainerDied","Data":"c41e641b145c19ea27b5f42f8bd385a6013547caa43257e1cb148e37e09a527c"} Jan 21 07:35:48 crc kubenswrapper[4893]: I0121 07:35:48.409742 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-srl75" event={"ID":"f5a8deac-81b3-4ede-9a23-e9f4b6fa891f","Type":"ContainerDied","Data":"89dfdc7f6e0cf748521401e3a8cc4c35404f61ea08643d70ef074225b11cff18"} Jan 21 07:35:48 crc kubenswrapper[4893]: I0121 07:35:48.409767 4893 scope.go:117] "RemoveContainer" containerID="c41e641b145c19ea27b5f42f8bd385a6013547caa43257e1cb148e37e09a527c" Jan 21 07:35:48 crc kubenswrapper[4893]: I0121 07:35:48.409594 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-srl75" Jan 21 07:35:48 crc kubenswrapper[4893]: I0121 07:35:48.435132 4893 scope.go:117] "RemoveContainer" containerID="52222948a05f29c9a6288ba1a1ca733d8638536053650f916a60d5d139e9163d" Jan 21 07:35:48 crc kubenswrapper[4893]: I0121 07:35:48.492290 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-srl75"] Jan 21 07:35:48 crc kubenswrapper[4893]: I0121 07:35:48.505371 4893 scope.go:117] "RemoveContainer" containerID="54f98146387c10687c32f1282de1b4653275af931355959db7f588eaf824418c" Jan 21 07:35:48 crc kubenswrapper[4893]: I0121 07:35:48.513406 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-srl75"] Jan 21 07:35:48 crc kubenswrapper[4893]: I0121 07:35:48.525639 4893 scope.go:117] "RemoveContainer" containerID="c41e641b145c19ea27b5f42f8bd385a6013547caa43257e1cb148e37e09a527c" Jan 21 07:35:48 crc kubenswrapper[4893]: E0121 07:35:48.526351 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c41e641b145c19ea27b5f42f8bd385a6013547caa43257e1cb148e37e09a527c\": container with ID starting with c41e641b145c19ea27b5f42f8bd385a6013547caa43257e1cb148e37e09a527c not found: ID does not exist" containerID="c41e641b145c19ea27b5f42f8bd385a6013547caa43257e1cb148e37e09a527c" Jan 21 07:35:48 crc kubenswrapper[4893]: I0121 07:35:48.526407 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c41e641b145c19ea27b5f42f8bd385a6013547caa43257e1cb148e37e09a527c"} err="failed to get container status \"c41e641b145c19ea27b5f42f8bd385a6013547caa43257e1cb148e37e09a527c\": rpc error: code = NotFound desc = could not find container \"c41e641b145c19ea27b5f42f8bd385a6013547caa43257e1cb148e37e09a527c\": container with ID starting with c41e641b145c19ea27b5f42f8bd385a6013547caa43257e1cb148e37e09a527c not found: ID does not exist" Jan 21 07:35:48 crc 
kubenswrapper[4893]: I0121 07:35:48.526445 4893 scope.go:117] "RemoveContainer" containerID="52222948a05f29c9a6288ba1a1ca733d8638536053650f916a60d5d139e9163d" Jan 21 07:35:48 crc kubenswrapper[4893]: E0121 07:35:48.527541 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52222948a05f29c9a6288ba1a1ca733d8638536053650f916a60d5d139e9163d\": container with ID starting with 52222948a05f29c9a6288ba1a1ca733d8638536053650f916a60d5d139e9163d not found: ID does not exist" containerID="52222948a05f29c9a6288ba1a1ca733d8638536053650f916a60d5d139e9163d" Jan 21 07:35:48 crc kubenswrapper[4893]: I0121 07:35:48.527574 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52222948a05f29c9a6288ba1a1ca733d8638536053650f916a60d5d139e9163d"} err="failed to get container status \"52222948a05f29c9a6288ba1a1ca733d8638536053650f916a60d5d139e9163d\": rpc error: code = NotFound desc = could not find container \"52222948a05f29c9a6288ba1a1ca733d8638536053650f916a60d5d139e9163d\": container with ID starting with 52222948a05f29c9a6288ba1a1ca733d8638536053650f916a60d5d139e9163d not found: ID does not exist" Jan 21 07:35:48 crc kubenswrapper[4893]: I0121 07:35:48.527598 4893 scope.go:117] "RemoveContainer" containerID="54f98146387c10687c32f1282de1b4653275af931355959db7f588eaf824418c" Jan 21 07:35:48 crc kubenswrapper[4893]: E0121 07:35:48.527956 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54f98146387c10687c32f1282de1b4653275af931355959db7f588eaf824418c\": container with ID starting with 54f98146387c10687c32f1282de1b4653275af931355959db7f588eaf824418c not found: ID does not exist" containerID="54f98146387c10687c32f1282de1b4653275af931355959db7f588eaf824418c" Jan 21 07:35:48 crc kubenswrapper[4893]: I0121 07:35:48.528120 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54f98146387c10687c32f1282de1b4653275af931355959db7f588eaf824418c"} err="failed to get container status \"54f98146387c10687c32f1282de1b4653275af931355959db7f588eaf824418c\": rpc error: code = NotFound desc = could not find container \"54f98146387c10687c32f1282de1b4653275af931355959db7f588eaf824418c\": container with ID starting with 54f98146387c10687c32f1282de1b4653275af931355959db7f588eaf824418c not found: ID does not exist" Jan 21 07:35:49 crc kubenswrapper[4893]: I0121 07:35:49.596917 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5a8deac-81b3-4ede-9a23-e9f4b6fa891f" path="/var/lib/kubelet/pods/f5a8deac-81b3-4ede-9a23-e9f4b6fa891f/volumes" Jan 21 07:35:55 crc kubenswrapper[4893]: I0121 07:35:55.581557 4893 scope.go:117] "RemoveContainer" containerID="3f32a27c3e69b19ffbfae3599d698be427f1aa295f9c95dd19a4b2ab997261ed" Jan 21 07:35:55 crc kubenswrapper[4893]: E0121 07:35:55.582667 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 07:36:06 crc kubenswrapper[4893]: I0121 07:36:06.581446 4893 scope.go:117] "RemoveContainer" containerID="3f32a27c3e69b19ffbfae3599d698be427f1aa295f9c95dd19a4b2ab997261ed" 
Jan 21 07:36:06 crc kubenswrapper[4893]: E0121 07:36:06.582387 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 07:36:20 crc kubenswrapper[4893]: I0121 07:36:20.581794 4893 scope.go:117] "RemoveContainer" containerID="3f32a27c3e69b19ffbfae3599d698be427f1aa295f9c95dd19a4b2ab997261ed" Jan 21 07:36:20 crc kubenswrapper[4893]: E0121 07:36:20.583140 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 07:36:33 crc kubenswrapper[4893]: I0121 07:36:33.581584 4893 scope.go:117] "RemoveContainer" containerID="3f32a27c3e69b19ffbfae3599d698be427f1aa295f9c95dd19a4b2ab997261ed" Jan 21 07:36:33 crc kubenswrapper[4893]: E0121 07:36:33.582592 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 07:36:46 crc kubenswrapper[4893]: I0121 07:36:46.581403 4893 scope.go:117] "RemoveContainer" containerID="3f32a27c3e69b19ffbfae3599d698be427f1aa295f9c95dd19a4b2ab997261ed" Jan 21 07:36:46 crc kubenswrapper[4893]: E0121 07:36:46.583655 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 07:37:01 crc kubenswrapper[4893]: I0121 07:37:01.581027 4893 scope.go:117] "RemoveContainer" containerID="3f32a27c3e69b19ffbfae3599d698be427f1aa295f9c95dd19a4b2ab997261ed" Jan 21 07:37:01 crc kubenswrapper[4893]: E0121 07:37:01.582280 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 07:37:12 crc kubenswrapper[4893]: I0121 07:37:12.582279 4893 scope.go:117] "RemoveContainer" containerID="3f32a27c3e69b19ffbfae3599d698be427f1aa295f9c95dd19a4b2ab997261ed" Jan 21 07:37:12 crc kubenswrapper[4893]: E0121 07:37:12.583345 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 07:37:23 crc kubenswrapper[4893]: I0121 07:37:23.581997 4893 scope.go:117] "RemoveContainer" containerID="3f32a27c3e69b19ffbfae3599d698be427f1aa295f9c95dd19a4b2ab997261ed" Jan 21 07:37:23 crc kubenswrapper[4893]: E0121 07:37:23.582934 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 07:37:38 crc kubenswrapper[4893]: I0121 07:37:38.581563 4893 scope.go:117] "RemoveContainer" containerID="3f32a27c3e69b19ffbfae3599d698be427f1aa295f9c95dd19a4b2ab997261ed" Jan 21 07:37:38 crc kubenswrapper[4893]: E0121 07:37:38.582659 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 07:37:50 crc kubenswrapper[4893]: I0121 07:37:50.581833 4893 scope.go:117] "RemoveContainer" containerID="3f32a27c3e69b19ffbfae3599d698be427f1aa295f9c95dd19a4b2ab997261ed" Jan 21 07:37:50 crc kubenswrapper[4893]: E0121 07:37:50.582871 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 07:38:03 crc kubenswrapper[4893]: I0121 07:38:03.581900 4893 scope.go:117] "RemoveContainer" containerID="3f32a27c3e69b19ffbfae3599d698be427f1aa295f9c95dd19a4b2ab997261ed" Jan 21 07:38:03 crc kubenswrapper[4893]: E0121 07:38:03.583589 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 07:38:18 crc kubenswrapper[4893]: I0121 07:38:18.581297 4893 scope.go:117] "RemoveContainer" containerID="3f32a27c3e69b19ffbfae3599d698be427f1aa295f9c95dd19a4b2ab997261ed" Jan 21 07:38:18 crc kubenswrapper[4893]: E0121 07:38:18.582220 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 07:38:30 crc kubenswrapper[4893]: I0121 07:38:30.581365 4893 scope.go:117] "RemoveContainer" containerID="3f32a27c3e69b19ffbfae3599d698be427f1aa295f9c95dd19a4b2ab997261ed" Jan 21 07:38:30 crc kubenswrapper[4893]: E0121 07:38:30.582351 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 07:38:45 crc kubenswrapper[4893]: I0121 07:38:45.581921 4893 scope.go:117] "RemoveContainer" containerID="3f32a27c3e69b19ffbfae3599d698be427f1aa295f9c95dd19a4b2ab997261ed" Jan 21 07:38:45 crc kubenswrapper[4893]: E0121 07:38:45.582906 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 07:38:56 crc kubenswrapper[4893]: I0121 07:38:56.581086 4893 scope.go:117] "RemoveContainer" containerID="3f32a27c3e69b19ffbfae3599d698be427f1aa295f9c95dd19a4b2ab997261ed" Jan 21 07:38:56 crc kubenswrapper[4893]: E0121 07:38:56.582935 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 07:39:07 crc kubenswrapper[4893]: I0121 07:39:07.581092 4893 scope.go:117] "RemoveContainer" containerID="3f32a27c3e69b19ffbfae3599d698be427f1aa295f9c95dd19a4b2ab997261ed" Jan 21 07:39:08 crc kubenswrapper[4893]: I0121 07:39:08.146184 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" event={"ID":"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a","Type":"ContainerStarted","Data":"340689dafb3840cd11404ca3069da987f524cdd3f435edf8cb3bfebeffad2b7d"} Jan 21 07:41:28 crc kubenswrapper[4893]: I0121 07:41:28.657234 4893 patch_prober.go:28] interesting pod/machine-config-daemon-hg78p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 07:41:28 crc kubenswrapper[4893]: I0121 07:41:28.658174 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 07:41:35 crc kubenswrapper[4893]: I0121 07:41:35.249385 4893 scope.go:117] "RemoveContainer" 
containerID="599c80f6fe3dc5093b56b1eb994e034e24d6021d271966bfea72d54af08c3a9e" Jan 21 07:41:35 crc kubenswrapper[4893]: I0121 07:41:35.282139 4893 scope.go:117] "RemoveContainer" containerID="eb6826653ef9b00d335a539ac55babf9c1cf9170fab4497ba82a4eec0b84470d" Jan 21 07:41:35 crc kubenswrapper[4893]: I0121 07:41:35.308095 4893 scope.go:117] "RemoveContainer" containerID="924ac7599bc58fb85bd154781471dcca1b0b53b443f194e9b09b74437a952549" Jan 21 07:41:57 crc kubenswrapper[4893]: I0121 07:41:57.851117 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-f7bfg"] Jan 21 07:41:57 crc kubenswrapper[4893]: E0121 07:41:57.853401 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02b7cb4e-f74f-4ba9-b222-ed7f85916a01" containerName="registry-server" Jan 21 07:41:57 crc kubenswrapper[4893]: I0121 07:41:57.853774 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="02b7cb4e-f74f-4ba9-b222-ed7f85916a01" containerName="registry-server" Jan 21 07:41:57 crc kubenswrapper[4893]: E0121 07:41:57.853996 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5a8deac-81b3-4ede-9a23-e9f4b6fa891f" containerName="registry-server" Jan 21 07:41:57 crc kubenswrapper[4893]: I0121 07:41:57.854058 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5a8deac-81b3-4ede-9a23-e9f4b6fa891f" containerName="registry-server" Jan 21 07:41:57 crc kubenswrapper[4893]: E0121 07:41:57.854121 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5a8deac-81b3-4ede-9a23-e9f4b6fa891f" containerName="extract-content" Jan 21 07:41:57 crc kubenswrapper[4893]: I0121 07:41:57.854187 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5a8deac-81b3-4ede-9a23-e9f4b6fa891f" containerName="extract-content" Jan 21 07:41:57 crc kubenswrapper[4893]: E0121 07:41:57.854256 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02b7cb4e-f74f-4ba9-b222-ed7f85916a01" containerName="extract-utilities" Jan 21 07:41:57 crc kubenswrapper[4893]: I0121 07:41:57.854323 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="02b7cb4e-f74f-4ba9-b222-ed7f85916a01" containerName="extract-utilities" Jan 21 07:41:57 crc kubenswrapper[4893]: E0121 07:41:57.854396 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5a8deac-81b3-4ede-9a23-e9f4b6fa891f" containerName="extract-utilities" Jan 21 07:41:57 crc kubenswrapper[4893]: I0121 07:41:57.854454 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5a8deac-81b3-4ede-9a23-e9f4b6fa891f" containerName="extract-utilities" Jan 21 07:41:57 crc kubenswrapper[4893]: E0121 07:41:57.854529 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02b7cb4e-f74f-4ba9-b222-ed7f85916a01" containerName="extract-content" Jan 21 07:41:57 crc kubenswrapper[4893]: I0121 07:41:57.854596 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="02b7cb4e-f74f-4ba9-b222-ed7f85916a01" containerName="extract-content" Jan 21 07:41:57 crc kubenswrapper[4893]: I0121 07:41:57.854832 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="02b7cb4e-f74f-4ba9-b222-ed7f85916a01" containerName="registry-server" Jan 21 07:41:57 crc kubenswrapper[4893]: I0121 07:41:57.854931 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5a8deac-81b3-4ede-9a23-e9f4b6fa891f" containerName="registry-server" Jan 21 07:41:57 crc kubenswrapper[4893]: I0121 07:41:57.856391 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f7bfg" Jan 21 07:41:57 crc kubenswrapper[4893]: I0121 07:41:57.878584 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f7bfg"] Jan 21 07:41:58 crc kubenswrapper[4893]: I0121 07:41:58.035222 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnr65\" (UniqueName: \"kubernetes.io/projected/832ef4ef-4001-4e97-935b-df1fd714cb24-kube-api-access-bnr65\") pod \"redhat-marketplace-f7bfg\" (UID: \"832ef4ef-4001-4e97-935b-df1fd714cb24\") " pod="openshift-marketplace/redhat-marketplace-f7bfg" Jan 21 07:41:58 crc kubenswrapper[4893]: I0121 07:41:58.035789 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/832ef4ef-4001-4e97-935b-df1fd714cb24-catalog-content\") pod \"redhat-marketplace-f7bfg\" (UID: \"832ef4ef-4001-4e97-935b-df1fd714cb24\") " pod="openshift-marketplace/redhat-marketplace-f7bfg" Jan 21 07:41:58 crc kubenswrapper[4893]: I0121 07:41:58.035893 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/832ef4ef-4001-4e97-935b-df1fd714cb24-utilities\") pod \"redhat-marketplace-f7bfg\" (UID: \"832ef4ef-4001-4e97-935b-df1fd714cb24\") " pod="openshift-marketplace/redhat-marketplace-f7bfg" Jan 21 07:41:58 crc kubenswrapper[4893]: I0121 07:41:58.138094 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnr65\" (UniqueName: \"kubernetes.io/projected/832ef4ef-4001-4e97-935b-df1fd714cb24-kube-api-access-bnr65\") pod \"redhat-marketplace-f7bfg\" (UID: \"832ef4ef-4001-4e97-935b-df1fd714cb24\") " pod="openshift-marketplace/redhat-marketplace-f7bfg" Jan 21 07:41:58 crc kubenswrapper[4893]: I0121 07:41:58.138174 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/832ef4ef-4001-4e97-935b-df1fd714cb24-catalog-content\") pod \"redhat-marketplace-f7bfg\" (UID: \"832ef4ef-4001-4e97-935b-df1fd714cb24\") " pod="openshift-marketplace/redhat-marketplace-f7bfg" Jan 21 07:41:58 crc kubenswrapper[4893]: I0121 07:41:58.138219 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/832ef4ef-4001-4e97-935b-df1fd714cb24-utilities\") pod \"redhat-marketplace-f7bfg\" (UID: \"832ef4ef-4001-4e97-935b-df1fd714cb24\") " pod="openshift-marketplace/redhat-marketplace-f7bfg" Jan 21 07:41:58 crc kubenswrapper[4893]: I0121 07:41:58.139026 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/832ef4ef-4001-4e97-935b-df1fd714cb24-utilities\") pod \"redhat-marketplace-f7bfg\" (UID: \"832ef4ef-4001-4e97-935b-df1fd714cb24\") " pod="openshift-marketplace/redhat-marketplace-f7bfg" Jan 21 07:41:58 crc kubenswrapper[4893]: I0121 07:41:58.139313 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/832ef4ef-4001-4e97-935b-df1fd714cb24-catalog-content\") pod \"redhat-marketplace-f7bfg\" (UID: \"832ef4ef-4001-4e97-935b-df1fd714cb24\") " pod="openshift-marketplace/redhat-marketplace-f7bfg" Jan 21 07:41:58 crc kubenswrapper[4893]: I0121 07:41:58.167429 4893 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-bnr65\" (UniqueName: \"kubernetes.io/projected/832ef4ef-4001-4e97-935b-df1fd714cb24-kube-api-access-bnr65\") pod \"redhat-marketplace-f7bfg\" (UID: \"832ef4ef-4001-4e97-935b-df1fd714cb24\") " pod="openshift-marketplace/redhat-marketplace-f7bfg" Jan 21 07:41:58 crc kubenswrapper[4893]: I0121 07:41:58.188487 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f7bfg" Jan 21 07:41:58 crc kubenswrapper[4893]: I0121 07:41:58.654998 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f7bfg"] Jan 21 07:41:58 crc kubenswrapper[4893]: I0121 07:41:58.659453 4893 patch_prober.go:28] interesting pod/machine-config-daemon-hg78p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 07:41:58 crc kubenswrapper[4893]: I0121 07:41:58.659521 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 07:41:59 crc kubenswrapper[4893]: I0121 07:41:59.337175 4893 generic.go:334] "Generic (PLEG): container finished" podID="832ef4ef-4001-4e97-935b-df1fd714cb24" containerID="45d64819845da837155cf794e23724b383f4c36a263cbf372cc26c6324e0135c" exitCode=0 Jan 21 07:41:59 crc kubenswrapper[4893]: I0121 07:41:59.337262 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f7bfg" event={"ID":"832ef4ef-4001-4e97-935b-df1fd714cb24","Type":"ContainerDied","Data":"45d64819845da837155cf794e23724b383f4c36a263cbf372cc26c6324e0135c"} Jan 21 07:41:59 crc kubenswrapper[4893]: I0121 07:41:59.337305 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f7bfg" event={"ID":"832ef4ef-4001-4e97-935b-df1fd714cb24","Type":"ContainerStarted","Data":"c69d91697525fc4bca2515684362983dd2a948e047df9ffb2ed663df61abc3d0"} Jan 21 07:41:59 crc kubenswrapper[4893]: I0121 07:41:59.341798 4893 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 07:42:01 crc kubenswrapper[4893]: I0121 07:42:01.359885 4893 generic.go:334] "Generic (PLEG): container finished" podID="832ef4ef-4001-4e97-935b-df1fd714cb24" containerID="f0c2dea0f3354978c77e59aa03c5ba4b7919a173172257d2856c22c23ef8c3bb" exitCode=0 Jan 21 07:42:01 crc kubenswrapper[4893]: I0121 07:42:01.359962 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f7bfg" event={"ID":"832ef4ef-4001-4e97-935b-df1fd714cb24","Type":"ContainerDied","Data":"f0c2dea0f3354978c77e59aa03c5ba4b7919a173172257d2856c22c23ef8c3bb"} Jan 21 07:42:02 crc kubenswrapper[4893]: I0121 07:42:02.377851 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f7bfg" event={"ID":"832ef4ef-4001-4e97-935b-df1fd714cb24","Type":"ContainerStarted","Data":"98e24d87a5d960a4ae56354e2265c2339474e41ff7776ce07120fbfa25f9bee8"} Jan 21 07:42:02 crc kubenswrapper[4893]: I0121 07:42:02.418898 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-f7bfg" 
podStartSLOduration=2.955105986 podStartE2EDuration="5.418853487s" podCreationTimestamp="2026-01-21 07:41:57 +0000 UTC" firstStartedPulling="2026-01-21 07:41:59.341172397 +0000 UTC m=+2860.571518329" lastFinishedPulling="2026-01-21 07:42:01.804919928 +0000 UTC m=+2863.035265830" observedRunningTime="2026-01-21 07:42:02.407264056 +0000 UTC m=+2863.637610008" watchObservedRunningTime="2026-01-21 07:42:02.418853487 +0000 UTC m=+2863.649199429" Jan 21 07:42:08 crc kubenswrapper[4893]: I0121 07:42:08.343985 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-f7bfg" Jan 21 07:42:08 crc kubenswrapper[4893]: I0121 07:42:08.345261 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-f7bfg" Jan 21 07:42:08 crc kubenswrapper[4893]: I0121 07:42:08.383982 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-f7bfg" Jan 21 07:42:08 crc kubenswrapper[4893]: I0121 07:42:08.479203 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-f7bfg" Jan 21 07:42:08 crc kubenswrapper[4893]: I0121 07:42:08.799507 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f7bfg"] Jan 21 07:42:10 crc kubenswrapper[4893]: I0121 07:42:10.449756 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-f7bfg" podUID="832ef4ef-4001-4e97-935b-df1fd714cb24" containerName="registry-server" containerID="cri-o://98e24d87a5d960a4ae56354e2265c2339474e41ff7776ce07120fbfa25f9bee8" gracePeriod=2 Jan 21 07:42:11 crc kubenswrapper[4893]: I0121 07:42:11.488766 4893 generic.go:334] "Generic (PLEG): container finished" podID="832ef4ef-4001-4e97-935b-df1fd714cb24" containerID="98e24d87a5d960a4ae56354e2265c2339474e41ff7776ce07120fbfa25f9bee8" exitCode=0 Jan 21 07:42:11 crc kubenswrapper[4893]: I0121 07:42:11.488811 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f7bfg" event={"ID":"832ef4ef-4001-4e97-935b-df1fd714cb24","Type":"ContainerDied","Data":"98e24d87a5d960a4ae56354e2265c2339474e41ff7776ce07120fbfa25f9bee8"} Jan 21 07:42:11 crc kubenswrapper[4893]: I0121 07:42:11.756898 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f7bfg" Jan 21 07:42:11 crc kubenswrapper[4893]: I0121 07:42:11.884317 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/832ef4ef-4001-4e97-935b-df1fd714cb24-catalog-content\") pod \"832ef4ef-4001-4e97-935b-df1fd714cb24\" (UID: \"832ef4ef-4001-4e97-935b-df1fd714cb24\") " Jan 21 07:42:11 crc kubenswrapper[4893]: I0121 07:42:11.884458 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnr65\" (UniqueName: \"kubernetes.io/projected/832ef4ef-4001-4e97-935b-df1fd714cb24-kube-api-access-bnr65\") pod \"832ef4ef-4001-4e97-935b-df1fd714cb24\" (UID: \"832ef4ef-4001-4e97-935b-df1fd714cb24\") " Jan 21 07:42:11 crc kubenswrapper[4893]: I0121 07:42:11.884649 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/832ef4ef-4001-4e97-935b-df1fd714cb24-utilities\") pod \"832ef4ef-4001-4e97-935b-df1fd714cb24\" (UID: \"832ef4ef-4001-4e97-935b-df1fd714cb24\") " Jan 21 07:42:11 crc kubenswrapper[4893]: I0121 07:42:11.886317 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/832ef4ef-4001-4e97-935b-df1fd714cb24-utilities" (OuterVolumeSpecName: "utilities") pod "832ef4ef-4001-4e97-935b-df1fd714cb24" (UID: "832ef4ef-4001-4e97-935b-df1fd714cb24"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:42:11 crc kubenswrapper[4893]: I0121 07:42:11.896922 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/832ef4ef-4001-4e97-935b-df1fd714cb24-kube-api-access-bnr65" (OuterVolumeSpecName: "kube-api-access-bnr65") pod "832ef4ef-4001-4e97-935b-df1fd714cb24" (UID: "832ef4ef-4001-4e97-935b-df1fd714cb24"). InnerVolumeSpecName "kube-api-access-bnr65". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:42:11 crc kubenswrapper[4893]: I0121 07:42:11.937059 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/832ef4ef-4001-4e97-935b-df1fd714cb24-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "832ef4ef-4001-4e97-935b-df1fd714cb24" (UID: "832ef4ef-4001-4e97-935b-df1fd714cb24"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:42:11 crc kubenswrapper[4893]: I0121 07:42:11.986597 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/832ef4ef-4001-4e97-935b-df1fd714cb24-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 07:42:11 crc kubenswrapper[4893]: I0121 07:42:11.986651 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnr65\" (UniqueName: \"kubernetes.io/projected/832ef4ef-4001-4e97-935b-df1fd714cb24-kube-api-access-bnr65\") on node \"crc\" DevicePath \"\"" Jan 21 07:42:11 crc kubenswrapper[4893]: I0121 07:42:11.986700 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/832ef4ef-4001-4e97-935b-df1fd714cb24-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 07:42:12 crc kubenswrapper[4893]: I0121 07:42:12.504977 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f7bfg" event={"ID":"832ef4ef-4001-4e97-935b-df1fd714cb24","Type":"ContainerDied","Data":"c69d91697525fc4bca2515684362983dd2a948e047df9ffb2ed663df61abc3d0"} Jan 21 07:42:12 crc kubenswrapper[4893]: I0121 07:42:12.505082 4893 scope.go:117] "RemoveContainer" containerID="98e24d87a5d960a4ae56354e2265c2339474e41ff7776ce07120fbfa25f9bee8" Jan 21 07:42:12 crc kubenswrapper[4893]: I0121 07:42:12.506871 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f7bfg" Jan 21 07:42:12 crc kubenswrapper[4893]: I0121 07:42:12.535974 4893 scope.go:117] "RemoveContainer" containerID="f0c2dea0f3354978c77e59aa03c5ba4b7919a173172257d2856c22c23ef8c3bb" Jan 21 07:42:12 crc kubenswrapper[4893]: I0121 07:42:12.584010 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f7bfg"] Jan 21 07:42:12 crc kubenswrapper[4893]: I0121 07:42:12.589458 4893 scope.go:117] "RemoveContainer" containerID="45d64819845da837155cf794e23724b383f4c36a263cbf372cc26c6324e0135c" Jan 21 07:42:12 crc kubenswrapper[4893]: I0121 07:42:12.594500 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-f7bfg"] Jan 21 07:42:13 crc kubenswrapper[4893]: I0121 07:42:13.601330 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="832ef4ef-4001-4e97-935b-df1fd714cb24" path="/var/lib/kubelet/pods/832ef4ef-4001-4e97-935b-df1fd714cb24/volumes" Jan 21 07:42:28 crc kubenswrapper[4893]: I0121 07:42:28.656714 4893 patch_prober.go:28] interesting pod/machine-config-daemon-hg78p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 07:42:28 crc kubenswrapper[4893]: I0121 07:42:28.657619 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 07:42:28 crc kubenswrapper[4893]: I0121 07:42:28.657711 4893 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" Jan 21 07:42:28 crc kubenswrapper[4893]: I0121 07:42:28.700259 4893 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"340689dafb3840cd11404ca3069da987f524cdd3f435edf8cb3bfebeffad2b7d"} pod="openshift-machine-config-operator/machine-config-daemon-hg78p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 07:42:28 crc kubenswrapper[4893]: I0121 07:42:28.700362 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" containerID="cri-o://340689dafb3840cd11404ca3069da987f524cdd3f435edf8cb3bfebeffad2b7d" gracePeriod=600 Jan 21 07:42:29 crc kubenswrapper[4893]: I0121 07:42:29.725487 4893 generic.go:334] "Generic (PLEG): container finished" podID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerID="340689dafb3840cd11404ca3069da987f524cdd3f435edf8cb3bfebeffad2b7d" exitCode=0 Jan 21 07:42:29 crc kubenswrapper[4893]: I0121 07:42:29.725554 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" event={"ID":"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a","Type":"ContainerDied","Data":"340689dafb3840cd11404ca3069da987f524cdd3f435edf8cb3bfebeffad2b7d"} Jan 21 07:42:29 crc kubenswrapper[4893]: I0121 07:42:29.727071 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" event={"ID":"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a","Type":"ContainerStarted","Data":"7b5a1b1e5e5a61a7b83e0fd59b4e53c46188f9465cc3c4cd1c7706d0df8ead7e"} Jan 21 07:42:29 crc kubenswrapper[4893]: I0121 07:42:29.727225 4893 scope.go:117] "RemoveContainer" containerID="3f32a27c3e69b19ffbfae3599d698be427f1aa295f9c95dd19a4b2ab997261ed" Jan 21 07:44:58 crc kubenswrapper[4893]: I0121 07:44:58.657082 4893 patch_prober.go:28] interesting pod/machine-config-daemon-hg78p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 07:44:58 crc kubenswrapper[4893]: I0121 07:44:58.657937 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 07:45:00 crc kubenswrapper[4893]: I0121 07:45:00.158028 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483025-v46q9"] Jan 21 07:45:00 crc kubenswrapper[4893]: E0121 07:45:00.159052 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="832ef4ef-4001-4e97-935b-df1fd714cb24" containerName="registry-server" Jan 21 07:45:00 crc kubenswrapper[4893]: I0121 07:45:00.159068 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="832ef4ef-4001-4e97-935b-df1fd714cb24" containerName="registry-server" Jan 21 07:45:00 crc kubenswrapper[4893]: E0121 07:45:00.159092 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="832ef4ef-4001-4e97-935b-df1fd714cb24" containerName="extract-content" Jan 21 07:45:00 crc kubenswrapper[4893]: I0121 07:45:00.159098 4893 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="832ef4ef-4001-4e97-935b-df1fd714cb24" containerName="extract-content" Jan 21 07:45:00 crc kubenswrapper[4893]: E0121 07:45:00.159111 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="832ef4ef-4001-4e97-935b-df1fd714cb24" containerName="extract-utilities" Jan 21 07:45:00 crc kubenswrapper[4893]: I0121 07:45:00.159118 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="832ef4ef-4001-4e97-935b-df1fd714cb24" containerName="extract-utilities" Jan 21 07:45:00 crc kubenswrapper[4893]: I0121 07:45:00.159317 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="832ef4ef-4001-4e97-935b-df1fd714cb24" containerName="registry-server" Jan 21 07:45:00 crc kubenswrapper[4893]: I0121 07:45:00.159979 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483025-v46q9" Jan 21 07:45:00 crc kubenswrapper[4893]: I0121 07:45:00.164896 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 07:45:00 crc kubenswrapper[4893]: I0121 07:45:00.164892 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 07:45:00 crc kubenswrapper[4893]: I0121 07:45:00.168838 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483025-v46q9"] Jan 21 07:45:00 crc kubenswrapper[4893]: I0121 07:45:00.194365 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvgd2\" (UniqueName: \"kubernetes.io/projected/ed687384-b908-449c-97e8-e8ac083ed980-kube-api-access-cvgd2\") pod \"collect-profiles-29483025-v46q9\" (UID: \"ed687384-b908-449c-97e8-e8ac083ed980\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483025-v46q9" Jan 21 07:45:00 crc kubenswrapper[4893]: I0121 07:45:00.194560 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed687384-b908-449c-97e8-e8ac083ed980-config-volume\") pod \"collect-profiles-29483025-v46q9\" (UID: \"ed687384-b908-449c-97e8-e8ac083ed980\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483025-v46q9" Jan 21 07:45:00 crc kubenswrapper[4893]: I0121 07:45:00.194628 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ed687384-b908-449c-97e8-e8ac083ed980-secret-volume\") pod \"collect-profiles-29483025-v46q9\" (UID: \"ed687384-b908-449c-97e8-e8ac083ed980\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483025-v46q9" Jan 21 07:45:00 crc kubenswrapper[4893]: I0121 07:45:00.296202 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvgd2\" (UniqueName: \"kubernetes.io/projected/ed687384-b908-449c-97e8-e8ac083ed980-kube-api-access-cvgd2\") pod \"collect-profiles-29483025-v46q9\" (UID: \"ed687384-b908-449c-97e8-e8ac083ed980\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483025-v46q9" Jan 21 07:45:00 crc kubenswrapper[4893]: I0121 07:45:00.296292 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed687384-b908-449c-97e8-e8ac083ed980-config-volume\") pod \"collect-profiles-29483025-v46q9\" 
(UID: \"ed687384-b908-449c-97e8-e8ac083ed980\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483025-v46q9" Jan 21 07:45:00 crc kubenswrapper[4893]: I0121 07:45:00.296329 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ed687384-b908-449c-97e8-e8ac083ed980-secret-volume\") pod \"collect-profiles-29483025-v46q9\" (UID: \"ed687384-b908-449c-97e8-e8ac083ed980\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483025-v46q9" Jan 21 07:45:00 crc kubenswrapper[4893]: I0121 07:45:00.297939 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed687384-b908-449c-97e8-e8ac083ed980-config-volume\") pod \"collect-profiles-29483025-v46q9\" (UID: \"ed687384-b908-449c-97e8-e8ac083ed980\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483025-v46q9" Jan 21 07:45:00 crc kubenswrapper[4893]: I0121 07:45:00.302861 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ed687384-b908-449c-97e8-e8ac083ed980-secret-volume\") pod \"collect-profiles-29483025-v46q9\" (UID: \"ed687384-b908-449c-97e8-e8ac083ed980\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483025-v46q9" Jan 21 07:45:00 crc kubenswrapper[4893]: I0121 07:45:00.319028 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvgd2\" (UniqueName: \"kubernetes.io/projected/ed687384-b908-449c-97e8-e8ac083ed980-kube-api-access-cvgd2\") pod \"collect-profiles-29483025-v46q9\" (UID: \"ed687384-b908-449c-97e8-e8ac083ed980\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483025-v46q9" Jan 21 07:45:00 crc kubenswrapper[4893]: I0121 07:45:00.488123 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483025-v46q9" Jan 21 07:45:00 crc kubenswrapper[4893]: I0121 07:45:00.981639 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483025-v46q9"] Jan 21 07:45:01 crc kubenswrapper[4893]: I0121 07:45:01.461185 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483025-v46q9" event={"ID":"ed687384-b908-449c-97e8-e8ac083ed980","Type":"ContainerStarted","Data":"0d6cebd77670c2cb1dfd5730badf12989df3cc4ec4070b4c346361fac600433a"} Jan 21 07:45:01 crc kubenswrapper[4893]: I0121 07:45:01.461513 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483025-v46q9" event={"ID":"ed687384-b908-449c-97e8-e8ac083ed980","Type":"ContainerStarted","Data":"473b7d2d8985fda2b62e68ce8a18e9a2b4011c830bded594ca44c3d4b3a5d873"} Jan 21 07:45:01 crc kubenswrapper[4893]: I0121 07:45:01.488292 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29483025-v46q9" podStartSLOduration=1.488254412 podStartE2EDuration="1.488254412s" podCreationTimestamp="2026-01-21 07:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:45:01.483894708 +0000 UTC m=+3042.714240610" watchObservedRunningTime="2026-01-21 07:45:01.488254412 +0000 UTC m=+3042.718600314" Jan 21 07:45:02 crc kubenswrapper[4893]: I0121 07:45:02.477012 4893 generic.go:334] "Generic (PLEG): container finished" podID="ed687384-b908-449c-97e8-e8ac083ed980" containerID="0d6cebd77670c2cb1dfd5730badf12989df3cc4ec4070b4c346361fac600433a" exitCode=0 Jan 21 07:45:02 crc kubenswrapper[4893]: I0121 07:45:02.477159 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483025-v46q9" event={"ID":"ed687384-b908-449c-97e8-e8ac083ed980","Type":"ContainerDied","Data":"0d6cebd77670c2cb1dfd5730badf12989df3cc4ec4070b4c346361fac600433a"} Jan 21 07:45:03 crc kubenswrapper[4893]: I0121 07:45:03.934045 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483025-v46q9" Jan 21 07:45:04 crc kubenswrapper[4893]: I0121 07:45:04.055270 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ed687384-b908-449c-97e8-e8ac083ed980-secret-volume\") pod \"ed687384-b908-449c-97e8-e8ac083ed980\" (UID: \"ed687384-b908-449c-97e8-e8ac083ed980\") " Jan 21 07:45:04 crc kubenswrapper[4893]: I0121 07:45:04.056227 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed687384-b908-449c-97e8-e8ac083ed980-config-volume\") pod \"ed687384-b908-449c-97e8-e8ac083ed980\" (UID: \"ed687384-b908-449c-97e8-e8ac083ed980\") " Jan 21 07:45:04 crc kubenswrapper[4893]: I0121 07:45:04.056281 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvgd2\" (UniqueName: \"kubernetes.io/projected/ed687384-b908-449c-97e8-e8ac083ed980-kube-api-access-cvgd2\") pod \"ed687384-b908-449c-97e8-e8ac083ed980\" (UID: \"ed687384-b908-449c-97e8-e8ac083ed980\") " Jan 21 07:45:04 crc kubenswrapper[4893]: I0121 07:45:04.057002 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed687384-b908-449c-97e8-e8ac083ed980-config-volume" (OuterVolumeSpecName: "config-volume") pod "ed687384-b908-449c-97e8-e8ac083ed980" (UID: "ed687384-b908-449c-97e8-e8ac083ed980"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:45:04 crc kubenswrapper[4893]: I0121 07:45:04.064912 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed687384-b908-449c-97e8-e8ac083ed980-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ed687384-b908-449c-97e8-e8ac083ed980" (UID: "ed687384-b908-449c-97e8-e8ac083ed980"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:45:04 crc kubenswrapper[4893]: I0121 07:45:04.064995 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed687384-b908-449c-97e8-e8ac083ed980-kube-api-access-cvgd2" (OuterVolumeSpecName: "kube-api-access-cvgd2") pod "ed687384-b908-449c-97e8-e8ac083ed980" (UID: "ed687384-b908-449c-97e8-e8ac083ed980"). InnerVolumeSpecName "kube-api-access-cvgd2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:45:04 crc kubenswrapper[4893]: I0121 07:45:04.157868 4893 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ed687384-b908-449c-97e8-e8ac083ed980-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 07:45:04 crc kubenswrapper[4893]: I0121 07:45:04.157899 4893 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed687384-b908-449c-97e8-e8ac083ed980-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 07:45:04 crc kubenswrapper[4893]: I0121 07:45:04.157910 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvgd2\" (UniqueName: \"kubernetes.io/projected/ed687384-b908-449c-97e8-e8ac083ed980-kube-api-access-cvgd2\") on node \"crc\" DevicePath \"\"" Jan 21 07:45:04 crc kubenswrapper[4893]: I0121 07:45:04.494639 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483025-v46q9" event={"ID":"ed687384-b908-449c-97e8-e8ac083ed980","Type":"ContainerDied","Data":"473b7d2d8985fda2b62e68ce8a18e9a2b4011c830bded594ca44c3d4b3a5d873"} Jan 21 07:45:04 crc kubenswrapper[4893]: I0121 07:45:04.494728 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="473b7d2d8985fda2b62e68ce8a18e9a2b4011c830bded594ca44c3d4b3a5d873" Jan 21 07:45:04 crc kubenswrapper[4893]: I0121 07:45:04.494797 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483025-v46q9" Jan 21 07:45:04 crc kubenswrapper[4893]: I0121 07:45:04.586149 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482980-ftdvx"] Jan 21 07:45:04 crc kubenswrapper[4893]: I0121 07:45:04.594275 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482980-ftdvx"] Jan 21 07:45:05 crc kubenswrapper[4893]: I0121 07:45:05.597970 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97b4e122-bd3c-47f5-b6bc-a00a090a1c3a" path="/var/lib/kubelet/pods/97b4e122-bd3c-47f5-b6bc-a00a090a1c3a/volumes" Jan 21 07:45:28 crc kubenswrapper[4893]: I0121 07:45:28.656928 4893 patch_prober.go:28] interesting pod/machine-config-daemon-hg78p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 07:45:28 crc kubenswrapper[4893]: I0121 07:45:28.657631 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 07:45:35 crc kubenswrapper[4893]: I0121 07:45:35.484220 4893 scope.go:117] "RemoveContainer" containerID="d916e4aae8348c93a9dfbeb353f2bdd036925fecaed4cc3991cf098b05d2dd3b" Jan 21 07:45:58 crc kubenswrapper[4893]: I0121 07:45:58.657073 4893 patch_prober.go:28] interesting pod/machine-config-daemon-hg78p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Jan 21 07:45:58 crc kubenswrapper[4893]: I0121 07:45:58.657723 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 07:45:58 crc kubenswrapper[4893]: I0121 07:45:58.657814 4893 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" Jan 21 07:45:58 crc kubenswrapper[4893]: I0121 07:45:58.658642 4893 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7b5a1b1e5e5a61a7b83e0fd59b4e53c46188f9465cc3c4cd1c7706d0df8ead7e"} pod="openshift-machine-config-operator/machine-config-daemon-hg78p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 07:45:58 crc kubenswrapper[4893]: I0121 07:45:58.658805 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" containerID="cri-o://7b5a1b1e5e5a61a7b83e0fd59b4e53c46188f9465cc3c4cd1c7706d0df8ead7e" gracePeriod=600 Jan 21 07:45:58 crc kubenswrapper[4893]: E0121 07:45:58.793118 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 07:45:59 crc kubenswrapper[4893]: I0121 07:45:59.368261 4893 generic.go:334] "Generic (PLEG): container finished" podID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerID="7b5a1b1e5e5a61a7b83e0fd59b4e53c46188f9465cc3c4cd1c7706d0df8ead7e" exitCode=0 Jan 21 07:45:59 crc kubenswrapper[4893]: I0121 07:45:59.368370 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" event={"ID":"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a","Type":"ContainerDied","Data":"7b5a1b1e5e5a61a7b83e0fd59b4e53c46188f9465cc3c4cd1c7706d0df8ead7e"} Jan 21 07:45:59 crc kubenswrapper[4893]: I0121 07:45:59.368512 4893 scope.go:117] "RemoveContainer" containerID="340689dafb3840cd11404ca3069da987f524cdd3f435edf8cb3bfebeffad2b7d" Jan 21 07:45:59 crc kubenswrapper[4893]: I0121 07:45:59.369292 4893 scope.go:117] "RemoveContainer" containerID="7b5a1b1e5e5a61a7b83e0fd59b4e53c46188f9465cc3c4cd1c7706d0df8ead7e" Jan 21 07:45:59 crc kubenswrapper[4893]: E0121 07:45:59.370041 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 07:46:12 crc kubenswrapper[4893]: I0121 07:46:12.582104 4893 scope.go:117] "RemoveContainer" 
containerID="7b5a1b1e5e5a61a7b83e0fd59b4e53c46188f9465cc3c4cd1c7706d0df8ead7e" Jan 21 07:46:12 crc kubenswrapper[4893]: E0121 07:46:12.583090 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 07:46:21 crc kubenswrapper[4893]: I0121 07:46:21.170484 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zfcfb"] Jan 21 07:46:21 crc kubenswrapper[4893]: E0121 07:46:21.179160 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed687384-b908-449c-97e8-e8ac083ed980" containerName="collect-profiles" Jan 21 07:46:21 crc kubenswrapper[4893]: I0121 07:46:21.179424 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed687384-b908-449c-97e8-e8ac083ed980" containerName="collect-profiles" Jan 21 07:46:21 crc kubenswrapper[4893]: I0121 07:46:21.179852 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed687384-b908-449c-97e8-e8ac083ed980" containerName="collect-profiles" Jan 21 07:46:21 crc kubenswrapper[4893]: I0121 07:46:21.182118 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zfcfb" Jan 21 07:46:21 crc kubenswrapper[4893]: I0121 07:46:21.253378 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zfcfb"] Jan 21 07:46:21 crc kubenswrapper[4893]: I0121 07:46:21.285021 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/677a0901-5e29-4805-9cd5-2227bd242ee0-catalog-content\") pod \"redhat-operators-zfcfb\" (UID: \"677a0901-5e29-4805-9cd5-2227bd242ee0\") " pod="openshift-marketplace/redhat-operators-zfcfb" Jan 21 07:46:21 crc kubenswrapper[4893]: I0121 07:46:21.285118 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8wxb\" (UniqueName: \"kubernetes.io/projected/677a0901-5e29-4805-9cd5-2227bd242ee0-kube-api-access-k8wxb\") pod \"redhat-operators-zfcfb\" (UID: \"677a0901-5e29-4805-9cd5-2227bd242ee0\") " pod="openshift-marketplace/redhat-operators-zfcfb" Jan 21 07:46:21 crc kubenswrapper[4893]: I0121 07:46:21.285157 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/677a0901-5e29-4805-9cd5-2227bd242ee0-utilities\") pod \"redhat-operators-zfcfb\" (UID: \"677a0901-5e29-4805-9cd5-2227bd242ee0\") " pod="openshift-marketplace/redhat-operators-zfcfb" Jan 21 07:46:21 crc kubenswrapper[4893]: I0121 07:46:21.386872 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/677a0901-5e29-4805-9cd5-2227bd242ee0-catalog-content\") pod \"redhat-operators-zfcfb\" (UID: \"677a0901-5e29-4805-9cd5-2227bd242ee0\") " pod="openshift-marketplace/redhat-operators-zfcfb" Jan 21 07:46:21 crc kubenswrapper[4893]: I0121 07:46:21.386996 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8wxb\" (UniqueName: 
\"kubernetes.io/projected/677a0901-5e29-4805-9cd5-2227bd242ee0-kube-api-access-k8wxb\") pod \"redhat-operators-zfcfb\" (UID: \"677a0901-5e29-4805-9cd5-2227bd242ee0\") " pod="openshift-marketplace/redhat-operators-zfcfb" Jan 21 07:46:21 crc kubenswrapper[4893]: I0121 07:46:21.387036 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/677a0901-5e29-4805-9cd5-2227bd242ee0-utilities\") pod \"redhat-operators-zfcfb\" (UID: \"677a0901-5e29-4805-9cd5-2227bd242ee0\") " pod="openshift-marketplace/redhat-operators-zfcfb" Jan 21 07:46:21 crc kubenswrapper[4893]: I0121 07:46:21.387603 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/677a0901-5e29-4805-9cd5-2227bd242ee0-catalog-content\") pod \"redhat-operators-zfcfb\" (UID: \"677a0901-5e29-4805-9cd5-2227bd242ee0\") " pod="openshift-marketplace/redhat-operators-zfcfb" Jan 21 07:46:21 crc kubenswrapper[4893]: I0121 07:46:21.388019 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/677a0901-5e29-4805-9cd5-2227bd242ee0-utilities\") pod \"redhat-operators-zfcfb\" (UID: \"677a0901-5e29-4805-9cd5-2227bd242ee0\") " pod="openshift-marketplace/redhat-operators-zfcfb" Jan 21 07:46:21 crc kubenswrapper[4893]: I0121 07:46:21.407472 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8wxb\" (UniqueName: \"kubernetes.io/projected/677a0901-5e29-4805-9cd5-2227bd242ee0-kube-api-access-k8wxb\") pod \"redhat-operators-zfcfb\" (UID: \"677a0901-5e29-4805-9cd5-2227bd242ee0\") " pod="openshift-marketplace/redhat-operators-zfcfb" Jan 21 07:46:21 crc kubenswrapper[4893]: I0121 07:46:21.502182 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zfcfb" Jan 21 07:46:21 crc kubenswrapper[4893]: I0121 07:46:21.782796 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zfcfb"] Jan 21 07:46:22 crc kubenswrapper[4893]: I0121 07:46:22.622514 4893 generic.go:334] "Generic (PLEG): container finished" podID="677a0901-5e29-4805-9cd5-2227bd242ee0" containerID="1b8ace26578a7229e428c424895588082afc23cab69822bc46122a14d536440e" exitCode=0 Jan 21 07:46:22 crc kubenswrapper[4893]: I0121 07:46:22.622596 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zfcfb" event={"ID":"677a0901-5e29-4805-9cd5-2227bd242ee0","Type":"ContainerDied","Data":"1b8ace26578a7229e428c424895588082afc23cab69822bc46122a14d536440e"} Jan 21 07:46:22 crc kubenswrapper[4893]: I0121 07:46:22.622929 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zfcfb" event={"ID":"677a0901-5e29-4805-9cd5-2227bd242ee0","Type":"ContainerStarted","Data":"54ced4d97fe37d790fa5136895ea8b749ed617acbb7e2bc78f670e7bf44c5130"} Jan 21 07:46:24 crc kubenswrapper[4893]: I0121 07:46:24.581252 4893 scope.go:117] "RemoveContainer" containerID="7b5a1b1e5e5a61a7b83e0fd59b4e53c46188f9465cc3c4cd1c7706d0df8ead7e" Jan 21 07:46:24 crc kubenswrapper[4893]: E0121 07:46:24.581992 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 07:46:24 crc kubenswrapper[4893]: I0121 07:46:24.648245 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zfcfb" event={"ID":"677a0901-5e29-4805-9cd5-2227bd242ee0","Type":"ContainerStarted","Data":"bd6b43e12c450f48fa7ecce1e91fffa591087eeb0276122ed3973068228b75ac"} Jan 21 07:46:25 crc kubenswrapper[4893]: I0121 07:46:25.663833 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zfcfb" event={"ID":"677a0901-5e29-4805-9cd5-2227bd242ee0","Type":"ContainerDied","Data":"bd6b43e12c450f48fa7ecce1e91fffa591087eeb0276122ed3973068228b75ac"} Jan 21 07:46:25 crc kubenswrapper[4893]: I0121 07:46:25.665800 4893 generic.go:334] "Generic (PLEG): container finished" podID="677a0901-5e29-4805-9cd5-2227bd242ee0" containerID="bd6b43e12c450f48fa7ecce1e91fffa591087eeb0276122ed3973068228b75ac" exitCode=0 Jan 21 07:46:26 crc kubenswrapper[4893]: I0121 07:46:26.683861 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zfcfb" event={"ID":"677a0901-5e29-4805-9cd5-2227bd242ee0","Type":"ContainerStarted","Data":"0b59c56c1f90d75106c0329fbd9188b2f295e4de4c5698cee3e89c97da7b7ae0"} Jan 21 07:46:26 crc kubenswrapper[4893]: I0121 07:46:26.710423 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zfcfb" podStartSLOduration=2.222856595 podStartE2EDuration="5.710358865s" podCreationTimestamp="2026-01-21 07:46:21 +0000 UTC" firstStartedPulling="2026-01-21 07:46:22.624940652 +0000 UTC m=+3123.855286564" lastFinishedPulling="2026-01-21 07:46:26.112442922 +0000 UTC m=+3127.342788834" observedRunningTime="2026-01-21 07:46:26.706057186 +0000 
UTC m=+3127.936403088" watchObservedRunningTime="2026-01-21 07:46:26.710358865 +0000 UTC m=+3127.940704767"
Jan 21 07:46:31 crc kubenswrapper[4893]: I0121 07:46:31.503118 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zfcfb"
Jan 21 07:46:31 crc kubenswrapper[4893]: I0121 07:46:31.504589 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zfcfb"
Jan 21 07:46:32 crc kubenswrapper[4893]: I0121 07:46:32.566111 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zfcfb" podUID="677a0901-5e29-4805-9cd5-2227bd242ee0" containerName="registry-server" probeResult="failure" output=<
Jan 21 07:46:32 crc kubenswrapper[4893]: timeout: failed to connect service ":50051" within 1s
Jan 21 07:46:32 crc kubenswrapper[4893]: >
Jan 21 07:46:35 crc kubenswrapper[4893]: I0121 07:46:35.581432 4893 scope.go:117] "RemoveContainer" containerID="7b5a1b1e5e5a61a7b83e0fd59b4e53c46188f9465cc3c4cd1c7706d0df8ead7e"
Jan 21 07:46:35 crc kubenswrapper[4893]: E0121 07:46:35.582037 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a"
Jan 21 07:46:41 crc kubenswrapper[4893]: I0121 07:46:41.594199 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zfcfb"
Jan 21 07:46:41 crc kubenswrapper[4893]: I0121 07:46:41.683522 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zfcfb"
Jan 21 07:46:41 crc kubenswrapper[4893]: I0121 07:46:41.916419 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zfcfb"]
Jan 21 07:46:43 crc kubenswrapper[4893]: I0121 07:46:43.002065 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zfcfb" podUID="677a0901-5e29-4805-9cd5-2227bd242ee0" containerName="registry-server" containerID="cri-o://0b59c56c1f90d75106c0329fbd9188b2f295e4de4c5698cee3e89c97da7b7ae0" gracePeriod=2
Jan 21 07:46:43 crc kubenswrapper[4893]: I0121 07:46:43.605523 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zfcfb"
Jan 21 07:46:43 crc kubenswrapper[4893]: I0121 07:46:43.772960 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/677a0901-5e29-4805-9cd5-2227bd242ee0-utilities\") pod \"677a0901-5e29-4805-9cd5-2227bd242ee0\" (UID: \"677a0901-5e29-4805-9cd5-2227bd242ee0\") "
Jan 21 07:46:43 crc kubenswrapper[4893]: I0121 07:46:43.773110 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/677a0901-5e29-4805-9cd5-2227bd242ee0-catalog-content\") pod \"677a0901-5e29-4805-9cd5-2227bd242ee0\" (UID: \"677a0901-5e29-4805-9cd5-2227bd242ee0\") "
Jan 21 07:46:43 crc kubenswrapper[4893]: I0121 07:46:43.773173 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8wxb\" (UniqueName: \"kubernetes.io/projected/677a0901-5e29-4805-9cd5-2227bd242ee0-kube-api-access-k8wxb\") pod \"677a0901-5e29-4805-9cd5-2227bd242ee0\" (UID: \"677a0901-5e29-4805-9cd5-2227bd242ee0\") "
Jan 21 07:46:43 crc kubenswrapper[4893]: I0121 07:46:43.773896 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/677a0901-5e29-4805-9cd5-2227bd242ee0-utilities" (OuterVolumeSpecName: "utilities") pod "677a0901-5e29-4805-9cd5-2227bd242ee0" (UID: "677a0901-5e29-4805-9cd5-2227bd242ee0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 07:46:43 crc kubenswrapper[4893]: I0121 07:46:43.781905 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/677a0901-5e29-4805-9cd5-2227bd242ee0-kube-api-access-k8wxb" (OuterVolumeSpecName: "kube-api-access-k8wxb") pod "677a0901-5e29-4805-9cd5-2227bd242ee0" (UID: "677a0901-5e29-4805-9cd5-2227bd242ee0"). InnerVolumeSpecName "kube-api-access-k8wxb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 07:46:43 crc kubenswrapper[4893]: I0121 07:46:43.874919 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8wxb\" (UniqueName: \"kubernetes.io/projected/677a0901-5e29-4805-9cd5-2227bd242ee0-kube-api-access-k8wxb\") on node \"crc\" DevicePath \"\""
Jan 21 07:46:43 crc kubenswrapper[4893]: I0121 07:46:43.874976 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/677a0901-5e29-4805-9cd5-2227bd242ee0-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 07:46:43 crc kubenswrapper[4893]: I0121 07:46:43.907358 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/677a0901-5e29-4805-9cd5-2227bd242ee0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "677a0901-5e29-4805-9cd5-2227bd242ee0" (UID: "677a0901-5e29-4805-9cd5-2227bd242ee0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 07:46:43 crc kubenswrapper[4893]: I0121 07:46:43.975990 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/677a0901-5e29-4805-9cd5-2227bd242ee0-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 07:46:44 crc kubenswrapper[4893]: I0121 07:46:44.014524 4893 generic.go:334] "Generic (PLEG): container finished" podID="677a0901-5e29-4805-9cd5-2227bd242ee0" containerID="0b59c56c1f90d75106c0329fbd9188b2f295e4de4c5698cee3e89c97da7b7ae0" exitCode=0
Jan 21 07:46:44 crc kubenswrapper[4893]: I0121 07:46:44.014610 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zfcfb" event={"ID":"677a0901-5e29-4805-9cd5-2227bd242ee0","Type":"ContainerDied","Data":"0b59c56c1f90d75106c0329fbd9188b2f295e4de4c5698cee3e89c97da7b7ae0"}
Jan 21 07:46:44 crc kubenswrapper[4893]: I0121 07:46:44.014658 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zfcfb" event={"ID":"677a0901-5e29-4805-9cd5-2227bd242ee0","Type":"ContainerDied","Data":"54ced4d97fe37d790fa5136895ea8b749ed617acbb7e2bc78f670e7bf44c5130"}
Jan 21 07:46:44 crc kubenswrapper[4893]: I0121 07:46:44.014711 4893 scope.go:117] "RemoveContainer" containerID="0b59c56c1f90d75106c0329fbd9188b2f295e4de4c5698cee3e89c97da7b7ae0"
Jan 21 07:46:44 crc kubenswrapper[4893]: I0121 07:46:44.014711 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zfcfb"
Jan 21 07:46:44 crc kubenswrapper[4893]: I0121 07:46:44.050110 4893 scope.go:117] "RemoveContainer" containerID="bd6b43e12c450f48fa7ecce1e91fffa591087eeb0276122ed3973068228b75ac"
Jan 21 07:46:44 crc kubenswrapper[4893]: I0121 07:46:44.080520 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zfcfb"]
Jan 21 07:46:44 crc kubenswrapper[4893]: I0121 07:46:44.088457 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zfcfb"]
Jan 21 07:46:44 crc kubenswrapper[4893]: I0121 07:46:44.088911 4893 scope.go:117] "RemoveContainer" containerID="1b8ace26578a7229e428c424895588082afc23cab69822bc46122a14d536440e"
Jan 21 07:46:44 crc kubenswrapper[4893]: I0121 07:46:44.121731 4893 scope.go:117] "RemoveContainer" containerID="0b59c56c1f90d75106c0329fbd9188b2f295e4de4c5698cee3e89c97da7b7ae0"
Jan 21 07:46:44 crc kubenswrapper[4893]: E0121 07:46:44.122200 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b59c56c1f90d75106c0329fbd9188b2f295e4de4c5698cee3e89c97da7b7ae0\": container with ID starting with 0b59c56c1f90d75106c0329fbd9188b2f295e4de4c5698cee3e89c97da7b7ae0 not found: ID does not exist" containerID="0b59c56c1f90d75106c0329fbd9188b2f295e4de4c5698cee3e89c97da7b7ae0"
Jan 21 07:46:44 crc kubenswrapper[4893]: I0121 07:46:44.122251 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b59c56c1f90d75106c0329fbd9188b2f295e4de4c5698cee3e89c97da7b7ae0"} err="failed to get container status \"0b59c56c1f90d75106c0329fbd9188b2f295e4de4c5698cee3e89c97da7b7ae0\": rpc error: code = NotFound desc = could not find container \"0b59c56c1f90d75106c0329fbd9188b2f295e4de4c5698cee3e89c97da7b7ae0\": container with ID starting with 0b59c56c1f90d75106c0329fbd9188b2f295e4de4c5698cee3e89c97da7b7ae0 not found: ID does not exist"
Jan 21 07:46:44 crc kubenswrapper[4893]: I0121 07:46:44.122282 4893 scope.go:117] "RemoveContainer" containerID="bd6b43e12c450f48fa7ecce1e91fffa591087eeb0276122ed3973068228b75ac"
Jan 21 07:46:44 crc kubenswrapper[4893]: E0121 07:46:44.123266 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd6b43e12c450f48fa7ecce1e91fffa591087eeb0276122ed3973068228b75ac\": container with ID starting with bd6b43e12c450f48fa7ecce1e91fffa591087eeb0276122ed3973068228b75ac not found: ID does not exist" containerID="bd6b43e12c450f48fa7ecce1e91fffa591087eeb0276122ed3973068228b75ac"
Jan 21 07:46:44 crc kubenswrapper[4893]: I0121 07:46:44.123300 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd6b43e12c450f48fa7ecce1e91fffa591087eeb0276122ed3973068228b75ac"} err="failed to get container status \"bd6b43e12c450f48fa7ecce1e91fffa591087eeb0276122ed3973068228b75ac\": rpc error: code = NotFound desc = could not find container \"bd6b43e12c450f48fa7ecce1e91fffa591087eeb0276122ed3973068228b75ac\": container with ID starting with bd6b43e12c450f48fa7ecce1e91fffa591087eeb0276122ed3973068228b75ac not found: ID does not exist"
Jan 21 07:46:44 crc kubenswrapper[4893]: I0121 07:46:44.123320 4893 scope.go:117] "RemoveContainer" containerID="1b8ace26578a7229e428c424895588082afc23cab69822bc46122a14d536440e"
Jan 21 07:46:44 crc kubenswrapper[4893]: E0121 07:46:44.123661 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b8ace26578a7229e428c424895588082afc23cab69822bc46122a14d536440e\": container with ID starting with 1b8ace26578a7229e428c424895588082afc23cab69822bc46122a14d536440e not found: ID does not exist" containerID="1b8ace26578a7229e428c424895588082afc23cab69822bc46122a14d536440e"
Jan 21 07:46:44 crc kubenswrapper[4893]: I0121 07:46:44.123725 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b8ace26578a7229e428c424895588082afc23cab69822bc46122a14d536440e"} err="failed to get container status \"1b8ace26578a7229e428c424895588082afc23cab69822bc46122a14d536440e\": rpc error: code = NotFound desc = could not find container \"1b8ace26578a7229e428c424895588082afc23cab69822bc46122a14d536440e\": container with ID starting with 1b8ace26578a7229e428c424895588082afc23cab69822bc46122a14d536440e not found: ID does not exist"
Jan 21 07:46:45 crc kubenswrapper[4893]: I0121 07:46:45.599940 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="677a0901-5e29-4805-9cd5-2227bd242ee0" path="/var/lib/kubelet/pods/677a0901-5e29-4805-9cd5-2227bd242ee0/volumes"
Jan 21 07:46:50 crc kubenswrapper[4893]: I0121 07:46:50.581752 4893 scope.go:117] "RemoveContainer" containerID="7b5a1b1e5e5a61a7b83e0fd59b4e53c46188f9465cc3c4cd1c7706d0df8ead7e"
Jan 21 07:46:50 crc kubenswrapper[4893]: E0121 07:46:50.582578 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a"
Jan 21 07:47:03 crc kubenswrapper[4893]: I0121 07:47:03.581290 4893 scope.go:117] "RemoveContainer" containerID="7b5a1b1e5e5a61a7b83e0fd59b4e53c46188f9465cc3c4cd1c7706d0df8ead7e"
Jan 21 07:47:03 crc kubenswrapper[4893]: E0121 07:47:03.582518 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a"
Jan 21 07:47:17 crc kubenswrapper[4893]: I0121 07:47:17.582053 4893 scope.go:117] "RemoveContainer" containerID="7b5a1b1e5e5a61a7b83e0fd59b4e53c46188f9465cc3c4cd1c7706d0df8ead7e"
Jan 21 07:47:17 crc kubenswrapper[4893]: E0121 07:47:17.583336 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a"
Jan 21 07:47:28 crc kubenswrapper[4893]: I0121 07:47:28.115144 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rvg8p/must-gather-rvzxg"]
Jan 21 07:47:28 crc kubenswrapper[4893]: E0121 07:47:28.115881 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="677a0901-5e29-4805-9cd5-2227bd242ee0" containerName="registry-server"
Jan 21 07:47:28 crc kubenswrapper[4893]: I0121 07:47:28.115894 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="677a0901-5e29-4805-9cd5-2227bd242ee0" containerName="registry-server"
Jan 21 07:47:28 crc kubenswrapper[4893]: E0121 07:47:28.115917 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="677a0901-5e29-4805-9cd5-2227bd242ee0" containerName="extract-content"
Jan 21 07:47:28 crc kubenswrapper[4893]: I0121 07:47:28.115924 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="677a0901-5e29-4805-9cd5-2227bd242ee0" containerName="extract-content"
Jan 21 07:47:28 crc kubenswrapper[4893]: E0121 07:47:28.115943 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="677a0901-5e29-4805-9cd5-2227bd242ee0" containerName="extract-utilities"
Jan 21 07:47:28 crc kubenswrapper[4893]: I0121 07:47:28.115949 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="677a0901-5e29-4805-9cd5-2227bd242ee0" containerName="extract-utilities"
Jan 21 07:47:28 crc kubenswrapper[4893]: I0121 07:47:28.116103 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="677a0901-5e29-4805-9cd5-2227bd242ee0" containerName="registry-server"
Jan 21 07:47:28 crc kubenswrapper[4893]: I0121 07:47:28.116983 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rvg8p/must-gather-rvzxg"
Jan 21 07:47:28 crc kubenswrapper[4893]: I0121 07:47:28.119280 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-rvg8p"/"openshift-service-ca.crt"
Jan 21 07:47:28 crc kubenswrapper[4893]: I0121 07:47:28.122753 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-rvg8p"/"kube-root-ca.crt"
Jan 21 07:47:28 crc kubenswrapper[4893]: I0121 07:47:28.133432 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-rvg8p/must-gather-rvzxg"]
Jan 21 07:47:28 crc kubenswrapper[4893]: I0121 07:47:28.313267 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f7abf9af-0ec4-4b2c-aa9d-4d37babfb5bc-must-gather-output\") pod \"must-gather-rvzxg\" (UID: \"f7abf9af-0ec4-4b2c-aa9d-4d37babfb5bc\") " pod="openshift-must-gather-rvg8p/must-gather-rvzxg"
Jan 21 07:47:28 crc kubenswrapper[4893]: I0121 07:47:28.313321 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7hvk\" (UniqueName: \"kubernetes.io/projected/f7abf9af-0ec4-4b2c-aa9d-4d37babfb5bc-kube-api-access-g7hvk\") pod \"must-gather-rvzxg\" (UID: \"f7abf9af-0ec4-4b2c-aa9d-4d37babfb5bc\") " pod="openshift-must-gather-rvg8p/must-gather-rvzxg"
Jan 21 07:47:28 crc kubenswrapper[4893]: I0121 07:47:28.414275 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f7abf9af-0ec4-4b2c-aa9d-4d37babfb5bc-must-gather-output\") pod \"must-gather-rvzxg\" (UID: \"f7abf9af-0ec4-4b2c-aa9d-4d37babfb5bc\") " pod="openshift-must-gather-rvg8p/must-gather-rvzxg"
Jan 21 07:47:28 crc kubenswrapper[4893]: I0121 07:47:28.414320 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7hvk\" (UniqueName: \"kubernetes.io/projected/f7abf9af-0ec4-4b2c-aa9d-4d37babfb5bc-kube-api-access-g7hvk\") pod \"must-gather-rvzxg\" (UID: \"f7abf9af-0ec4-4b2c-aa9d-4d37babfb5bc\") " pod="openshift-must-gather-rvg8p/must-gather-rvzxg"
Jan 21 07:47:28 crc kubenswrapper[4893]: I0121 07:47:28.414770 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f7abf9af-0ec4-4b2c-aa9d-4d37babfb5bc-must-gather-output\") pod \"must-gather-rvzxg\" (UID: \"f7abf9af-0ec4-4b2c-aa9d-4d37babfb5bc\") " pod="openshift-must-gather-rvg8p/must-gather-rvzxg"
Jan 21 07:47:28 crc kubenswrapper[4893]: I0121 07:47:28.431009 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7hvk\" (UniqueName: \"kubernetes.io/projected/f7abf9af-0ec4-4b2c-aa9d-4d37babfb5bc-kube-api-access-g7hvk\") pod \"must-gather-rvzxg\" (UID: \"f7abf9af-0ec4-4b2c-aa9d-4d37babfb5bc\") " pod="openshift-must-gather-rvg8p/must-gather-rvzxg"
Jan 21 07:47:28 crc kubenswrapper[4893]: I0121 07:47:28.435085 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rvg8p/must-gather-rvzxg"
Jan 21 07:47:28 crc kubenswrapper[4893]: I0121 07:47:28.746727 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-rvg8p/must-gather-rvzxg"]
Jan 21 07:47:28 crc kubenswrapper[4893]: I0121 07:47:28.753405 4893 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 07:47:29 crc kubenswrapper[4893]: I0121 07:47:29.535946 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rvg8p/must-gather-rvzxg" event={"ID":"f7abf9af-0ec4-4b2c-aa9d-4d37babfb5bc","Type":"ContainerStarted","Data":"652b701a099003e772cde3c81ba91b826ef43dfe5ac53a80e6a3f325de5d0c28"}
Jan 21 07:47:29 crc kubenswrapper[4893]: I0121 07:47:29.587391 4893 scope.go:117] "RemoveContainer" containerID="7b5a1b1e5e5a61a7b83e0fd59b4e53c46188f9465cc3c4cd1c7706d0df8ead7e"
Jan 21 07:47:29 crc kubenswrapper[4893]: E0121 07:47:29.588901 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a"
Jan 21 07:47:36 crc kubenswrapper[4893]: I0121 07:47:36.590376 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rvg8p/must-gather-rvzxg" event={"ID":"f7abf9af-0ec4-4b2c-aa9d-4d37babfb5bc","Type":"ContainerStarted","Data":"7574bd9c2a80da512156d60871d9091948555fd2f6ff1ace6109a668e3ba14ab"}
Jan 21 07:47:37 crc kubenswrapper[4893]: I0121 07:47:37.604439 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rvg8p/must-gather-rvzxg" event={"ID":"f7abf9af-0ec4-4b2c-aa9d-4d37babfb5bc","Type":"ContainerStarted","Data":"14c9613f22c58e05f1be22173b7c6c87a6ceb45d5e11fb7a2571c21ac3564120"}
Jan 21 07:47:37 crc kubenswrapper[4893]: I0121 07:47:37.636741 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-rvg8p/must-gather-rvzxg" podStartSLOduration=2.07459697 podStartE2EDuration="9.636694503s" podCreationTimestamp="2026-01-21 07:47:28 +0000 UTC" firstStartedPulling="2026-01-21 07:47:28.75332671 +0000 UTC m=+3189.983672612" lastFinishedPulling="2026-01-21 07:47:36.315424203 +0000 UTC m=+3197.545770145" observedRunningTime="2026-01-21 07:47:37.626553772 +0000 UTC m=+3198.856899674" watchObservedRunningTime="2026-01-21 07:47:37.636694503 +0000 UTC m=+3198.867040425"
Jan 21 07:47:43 crc kubenswrapper[4893]: I0121 07:47:43.581925 4893 scope.go:117] "RemoveContainer" containerID="7b5a1b1e5e5a61a7b83e0fd59b4e53c46188f9465cc3c4cd1c7706d0df8ead7e"
Jan 21 07:47:43 crc kubenswrapper[4893]: E0121 07:47:43.583144 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a"
Jan 21 07:47:48 crc kubenswrapper[4893]: I0121 07:47:48.900574 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2egsst6_88357780-da4e-4ab0-810d-3271b6f37bfc/extract/0.log"
Jan 21 07:47:48 crc kubenswrapper[4893]: I0121 07:47:48.915622 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2egsst6_88357780-da4e-4ab0-810d-3271b6f37bfc/util/0.log"
Jan 21 07:47:48 crc kubenswrapper[4893]: I0121 07:47:48.927688 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2egsst6_88357780-da4e-4ab0-810d-3271b6f37bfc/pull/0.log"
Jan 21 07:47:48 crc kubenswrapper[4893]: I0121 07:47:48.993027 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-d8f8v_ec3cd342-ebee-4689-a339-72ca3fd65506/manager/0.log"
Jan 21 07:47:49 crc kubenswrapper[4893]: I0121 07:47:49.038798 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-b5mdw_a7d9e99c-b2eb-481e-be87-a69b88b6609e/manager/0.log"
Jan 21 07:47:49 crc kubenswrapper[4893]: I0121 07:47:49.049615 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-m9lt8_00d7ea70-2b23-491d-841f-0513cdb3652f/manager/0.log"
Jan 21 07:47:49 crc kubenswrapper[4893]: I0121 07:47:49.150338 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-hddtb_77cb4b5b-8911-40eb-9a0a-066503abf27f/manager/0.log"
Jan 21 07:47:49 crc kubenswrapper[4893]: I0121 07:47:49.162737 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-kjsg4_6ef85e8d-2997-4005-bcf3-7a99994402d0/manager/0.log"
Jan 21 07:47:49 crc kubenswrapper[4893]: I0121 07:47:49.182198 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-txncj_af56a391-1d1e-4b94-8ec9-f1eb4f332995/manager/0.log"
Jan 21 07:47:49 crc kubenswrapper[4893]: I0121 07:47:49.367351 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-xjlf7_c5280c4a-bab8-4a47-8fb4-91aab130cd63/manager/0.log"
Jan 21 07:47:49 crc kubenswrapper[4893]: I0121 07:47:49.380337 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-6w85k_a65f5625-37ea-46b9-9f9f-f0a9e608b890/manager/0.log"
Jan 21 07:47:49 crc kubenswrapper[4893]: I0121 07:47:49.428837 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-f2jht_3b13f8c5-634b-437a-9dc9-2bfbd854de9d/manager/0.log"
Jan 21 07:47:49 crc kubenswrapper[4893]: I0121 07:47:49.444055 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-j5c58_e58e390d-227b-4d43-9216-c208196b0192/manager/0.log"
Jan 21 07:47:49 crc kubenswrapper[4893]: I0121 07:47:49.488858 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-tcgf7_aaae8540-3604-4523-9f39-b8bf8fd1d03c/manager/0.log"
Jan 21 07:47:49 crc kubenswrapper[4893]: I0121 07:47:49.531449 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-htcd2_3c023ffb-4503-4997-9fac-84414eb67f2e/manager/0.log"
Jan 21 07:47:49 crc kubenswrapper[4893]: I0121 07:47:49.650541 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-g6gf8_4b0f2392-37e2-447f-b542-e85bf4af7af9/manager/0.log"
Jan 21 07:47:49 crc kubenswrapper[4893]: I0121 07:47:49.661345 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-frxpc_31bc0fab-5394-4e78-a116-2d8d09736824/manager/0.log"
Jan 21 07:47:49 crc kubenswrapper[4893]: I0121 07:47:49.681537 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5b9875986d69gjw_4142220f-0688-47a2-9bec-d655f97fe3c6/manager/0.log"
Jan 21 07:47:49 crc kubenswrapper[4893]: I0121 07:47:49.894191 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6d4d7d8545-fx4n5_87317749-c103-4670-b65e-e7fea5002024/operator/0.log"
Jan 21 07:47:50 crc kubenswrapper[4893]: I0121 07:47:50.693497 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-75bfd788c8-2dz2q_86f3a900-b203-4f96-b922-b7fdf0afab7b/manager/0.log"
Jan 21 07:47:50 crc kubenswrapper[4893]: I0121 07:47:50.746505 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-mwkv7_9cd4b07c-856e-42d0-8a00-7ecf01b01924/registry-server/0.log"
Jan 21 07:47:50 crc kubenswrapper[4893]: I0121 07:47:50.801719 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-6gpxx_aad4ef7e-44ff-4da0-8a54-b8fb68017270/manager/0.log"
Jan 21 07:47:50 crc kubenswrapper[4893]: I0121 07:47:50.826264 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-bmm9s_ac6cc898-5b96-4a0a-8014-bf17132e44fc/manager/0.log"
Jan 21 07:47:50 crc kubenswrapper[4893]: I0121 07:47:50.843523 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-zw22v_bac4cdab-0839-4940-9a12-bb933e88a1da/operator/0.log"
Jan 21 07:47:50 crc kubenswrapper[4893]: I0121 07:47:50.871290 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-9hqln_f6fcb0d4-e51c-476f-9411-469bbdbd7f4e/manager/0.log"
Jan 21 07:47:50 crc kubenswrapper[4893]: I0121 07:47:50.926173 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-v98wk_9c6d7f75-6c22-44ec-ba62-a1223f2eaa3b/manager/0.log"
Jan 21 07:47:50 crc kubenswrapper[4893]: I0121 07:47:50.934506 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-ccg72_12028a4c-13ac-46cd-862e-7a6e01614e1a/manager/0.log"
Jan 21 07:47:50 crc kubenswrapper[4893]: I0121 07:47:50.946078 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-6ppkr_271330f3-2299-491c-a7cc-56e7e4e5af9a/manager/0.log"
Jan 21 07:47:54 crc kubenswrapper[4893]: I0121 07:47:54.580874 4893 scope.go:117] "RemoveContainer" containerID="7b5a1b1e5e5a61a7b83e0fd59b4e53c46188f9465cc3c4cd1c7706d0df8ead7e"
Jan 21 07:47:54 crc kubenswrapper[4893]: E0121 07:47:54.581438 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a"
Jan 21 07:47:55 crc kubenswrapper[4893]: I0121 07:47:55.709964 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-p9bnb_118d8602-b5ce-4a7c-bf0c-17d74ce7ebda/control-plane-machine-set-operator/0.log"
Jan 21 07:47:55 crc kubenswrapper[4893]: I0121 07:47:55.730884 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-jc8jx_458a2b28-04ce-4c9f-840b-9130dfd79140/kube-rbac-proxy/0.log"
Jan 21 07:47:55 crc kubenswrapper[4893]: I0121 07:47:55.743350 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-jc8jx_458a2b28-04ce-4c9f-840b-9130dfd79140/machine-api-operator/0.log"
Jan 21 07:48:02 crc kubenswrapper[4893]: I0121 07:48:02.155579 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-tktrv_36c52d4d-2838-40a8-a87a-b931b770498a/controller/0.log"
Jan 21 07:48:02 crc kubenswrapper[4893]: I0121 07:48:02.163828 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-tktrv_36c52d4d-2838-40a8-a87a-b931b770498a/kube-rbac-proxy/0.log"
Jan 21 07:48:02 crc kubenswrapper[4893]: I0121 07:48:02.174643 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-8hklb_d9c4da05-f65a-473f-873c-2cc7fd6c4c53/frr-k8s-webhook-server/0.log"
Jan 21 07:48:02 crc kubenswrapper[4893]: I0121 07:48:02.212204 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-x2lfk_7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2/controller/0.log"
Jan 21 07:48:03 crc kubenswrapper[4893]: I0121 07:48:03.830410 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-x2lfk_7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2/frr/0.log"
Jan 21 07:48:03 crc kubenswrapper[4893]: I0121 07:48:03.841888 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-x2lfk_7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2/reloader/0.log"
Jan 21 07:48:03 crc kubenswrapper[4893]: I0121 07:48:03.845644 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-x2lfk_7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2/frr-metrics/0.log"
Jan 21 07:48:03 crc kubenswrapper[4893]: I0121 07:48:03.853325 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-x2lfk_7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2/kube-rbac-proxy/0.log"
Jan 21 07:48:03 crc kubenswrapper[4893]: I0121 07:48:03.859585 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-x2lfk_7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2/kube-rbac-proxy-frr/0.log"
Jan 21 07:48:03 crc kubenswrapper[4893]: I0121 07:48:03.866693 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-x2lfk_7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2/cp-frr-files/0.log"
Jan 21 07:48:03 crc kubenswrapper[4893]: I0121 07:48:03.874057 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-x2lfk_7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2/cp-reloader/0.log"
Jan 21 07:48:03 crc kubenswrapper[4893]: I0121 07:48:03.880087 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-x2lfk_7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2/cp-metrics/0.log"
Jan 21 07:48:03 crc kubenswrapper[4893]: I0121 07:48:03.917085 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-d4c4497c9-rmz6v_5b83e248-7f4d-4294-808c-91878658bf38/manager/0.log"
Jan 21 07:48:03 crc kubenswrapper[4893]: I0121 07:48:03.927331 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-856864dc54-jk8lc_ce067ec2-2d04-4566-8868-62c78e8c64f3/webhook-server/0.log"
Jan 21 07:48:04 crc kubenswrapper[4893]: I0121 07:48:04.226084 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-kq57r_46f9fdd5-a28f-4966-9100-d15a3d829cd1/speaker/0.log"
Jan 21 07:48:04 crc kubenswrapper[4893]: I0121 07:48:04.241535 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-kq57r_46f9fdd5-a28f-4966-9100-d15a3d829cd1/kube-rbac-proxy/0.log"
Jan 21 07:48:06 crc kubenswrapper[4893]: I0121 07:48:06.581708 4893 scope.go:117] "RemoveContainer" containerID="7b5a1b1e5e5a61a7b83e0fd59b4e53c46188f9465cc3c4cd1c7706d0df8ead7e"
Jan 21 07:48:06 crc kubenswrapper[4893]: E0121 07:48:06.582311 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a"
Jan 21 07:48:07 crc kubenswrapper[4893]: I0121 07:48:07.204280 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-9bp8c_1590bf32-4ee6-47a5-baac-14c054272f8e/cert-manager-controller/0.log"
Jan 21 07:48:07 crc kubenswrapper[4893]: I0121 07:48:07.216747 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-fqpm5_7e801f1e-3a74-45d5-9f8c-5fee35cc9fac/cert-manager-cainjector/0.log"
Jan 21 07:48:07 crc kubenswrapper[4893]: I0121 07:48:07.230272 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-6szpd_4a781d80-f82d-4d7b-8974-b3cda4d98186/cert-manager-webhook/0.log"
Jan 21 07:48:15 crc kubenswrapper[4893]: I0121 07:48:15.208391 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-mp8lb_594278ba-8824-49d6-9b6d-a5a0e8dd66ae/nmstate-console-plugin/0.log"
Jan 21 07:48:15 crc kubenswrapper[4893]: I0121 07:48:15.234977 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-9bmdw_dea30d59-92e1-4ebf-ba1a-5e5f18cbeb61/nmstate-handler/0.log"
Jan 21 07:48:15 crc kubenswrapper[4893]: I0121 07:48:15.249978 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-5l8kf_df7d5aed-8a1d-4936-a9a2-75d9d2228de5/nmstate-metrics/0.log"
Jan 21 07:48:15 crc kubenswrapper[4893]: I0121 07:48:15.263324 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-5l8kf_df7d5aed-8a1d-4936-a9a2-75d9d2228de5/kube-rbac-proxy/0.log"
Jan 21 07:48:15 crc kubenswrapper[4893]: I0121 07:48:15.281558 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-q822p_cf682009-b6d7-4665-ab3f-5894a39a3a09/nmstate-operator/0.log"
Jan 21 07:48:15 crc kubenswrapper[4893]: I0121 07:48:15.297242 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-j6t2h_187ac6cf-a917-4345-983b-a806aa8906b9/nmstate-webhook/0.log"
Jan 21 07:48:19 crc kubenswrapper[4893]: I0121 07:48:19.590272 4893 scope.go:117] "RemoveContainer" containerID="7b5a1b1e5e5a61a7b83e0fd59b4e53c46188f9465cc3c4cd1c7706d0df8ead7e"
Jan 21 07:48:19 crc kubenswrapper[4893]: E0121 07:48:19.591247 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a"
Jan 21 07:48:31 crc kubenswrapper[4893]: I0121 07:48:31.175485 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-tktrv_36c52d4d-2838-40a8-a87a-b931b770498a/controller/0.log"
Jan 21 07:48:31 crc kubenswrapper[4893]: I0121 07:48:31.182655 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-tktrv_36c52d4d-2838-40a8-a87a-b931b770498a/kube-rbac-proxy/0.log"
Jan 21 07:48:31 crc kubenswrapper[4893]: I0121 07:48:31.194973 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-8hklb_d9c4da05-f65a-473f-873c-2cc7fd6c4c53/frr-k8s-webhook-server/0.log"
Jan 21 07:48:31 crc kubenswrapper[4893]: I0121 07:48:31.218030 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-x2lfk_7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2/controller/0.log"
Jan 21 07:48:32 crc kubenswrapper[4893]: I0121 07:48:32.862861 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-x2lfk_7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2/frr/0.log"
Jan 21 07:48:32 crc kubenswrapper[4893]: I0121 07:48:32.873755 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-x2lfk_7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2/reloader/0.log"
Jan 21 07:48:32 crc kubenswrapper[4893]: I0121 07:48:32.879695 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-x2lfk_7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2/frr-metrics/0.log"
Jan 21 07:48:32 crc kubenswrapper[4893]: I0121 07:48:32.888626 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-x2lfk_7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2/kube-rbac-proxy/0.log"
Jan 21 07:48:32 crc kubenswrapper[4893]: I0121 07:48:32.896190 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-x2lfk_7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2/kube-rbac-proxy-frr/0.log"
Jan 21 07:48:32 crc kubenswrapper[4893]: I0121 07:48:32.902427 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-x2lfk_7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2/cp-frr-files/0.log"
Jan 21 07:48:32 crc kubenswrapper[4893]: I0121 07:48:32.909460 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-x2lfk_7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2/cp-reloader/0.log"
Jan 21 07:48:32 crc kubenswrapper[4893]: I0121 07:48:32.915654 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-x2lfk_7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2/cp-metrics/0.log"
Jan 21 07:48:32 crc kubenswrapper[4893]: I0121 07:48:32.949351 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-d4c4497c9-rmz6v_5b83e248-7f4d-4294-808c-91878658bf38/manager/0.log"
Jan 21 07:48:32 crc kubenswrapper[4893]: I0121 07:48:32.959405 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-856864dc54-jk8lc_ce067ec2-2d04-4566-8868-62c78e8c64f3/webhook-server/0.log"
Jan 21 07:48:33 crc kubenswrapper[4893]: I0121 07:48:33.309180 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-kq57r_46f9fdd5-a28f-4966-9100-d15a3d829cd1/speaker/0.log"
Jan 21 07:48:33 crc kubenswrapper[4893]: I0121 07:48:33.316452 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-kq57r_46f9fdd5-a28f-4966-9100-d15a3d829cd1/kube-rbac-proxy/0.log"
Jan 21 07:48:34 crc kubenswrapper[4893]: I0121 07:48:34.581305 4893 scope.go:117] "RemoveContainer" containerID="7b5a1b1e5e5a61a7b83e0fd59b4e53c46188f9465cc3c4cd1c7706d0df8ead7e"
Jan 21 07:48:34 crc kubenswrapper[4893]: E0121 07:48:34.582103 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a"
Jan 21 07:48:37 crc kubenswrapper[4893]: I0121 07:48:37.399062 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bmgb_66507ef1-092c-4201-a33f-bbf8851600e3/extract/0.log"
Jan 21 07:48:37 crc kubenswrapper[4893]: I0121 07:48:37.415686 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bmgb_66507ef1-092c-4201-a33f-bbf8851600e3/util/0.log"
Jan 21 07:48:37 crc kubenswrapper[4893]: I0121 07:48:37.467750 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a9bmgb_66507ef1-092c-4201-a33f-bbf8851600e3/pull/0.log"
Jan 21 07:48:37 crc kubenswrapper[4893]: I0121 07:48:37.485934 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmw4pw_37a85c97-b472-420e-bf43-80cd104a53b7/extract/0.log"
Jan 21 07:48:37 crc kubenswrapper[4893]: I0121 07:48:37.496368 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmw4pw_37a85c97-b472-420e-bf43-80cd104a53b7/util/0.log"
Jan 21 07:48:37 crc kubenswrapper[4893]: I0121 07:48:37.507232 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmw4pw_37a85c97-b472-420e-bf43-80cd104a53b7/pull/0.log"
Jan 21 07:48:37 crc kubenswrapper[4893]: I0121 07:48:37.522346 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r8vdr_b962be1e-48b4-482c-a8a6-c6346dbdc835/extract/0.log"
Jan 21 07:48:37 crc kubenswrapper[4893]: I0121 07:48:37.542195 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r8vdr_b962be1e-48b4-482c-a8a6-c6346dbdc835/util/0.log"
Jan 21 07:48:37 crc kubenswrapper[4893]: I0121 07:48:37.555251 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713r8vdr_b962be1e-48b4-482c-a8a6-c6346dbdc835/pull/0.log"
Jan 21 07:48:38 crc kubenswrapper[4893]: I0121 07:48:38.032037 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kpngd_5d6e0099-366c-4a80-9911-88b9a1ac3224/registry-server/0.log"
Jan 21 07:48:38 crc kubenswrapper[4893]: I0121 07:48:38.038743 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kpngd_5d6e0099-366c-4a80-9911-88b9a1ac3224/extract-utilities/0.log"
Jan 21 07:48:38 crc kubenswrapper[4893]: I0121 07:48:38.045881 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kpngd_5d6e0099-366c-4a80-9911-88b9a1ac3224/extract-content/0.log"
Jan 21 07:48:38 crc kubenswrapper[4893]: I0121 07:48:38.544905 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kdhdx_97ad217b-b5b4-49ff-9a11-e6e78e871f69/registry-server/0.log"
Jan 21 07:48:38 crc kubenswrapper[4893]: I0121 07:48:38.550951 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kdhdx_97ad217b-b5b4-49ff-9a11-e6e78e871f69/extract-utilities/0.log"
Jan 21 07:48:38 crc kubenswrapper[4893]: I0121 07:48:38.563220 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kdhdx_97ad217b-b5b4-49ff-9a11-e6e78e871f69/extract-content/0.log"
Jan 21 07:48:38 crc kubenswrapper[4893]: I0121 07:48:38.581026 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-5rhqg_2138e3c3-e583-4a97-84d9-084c1eb72e2a/marketplace-operator/0.log"
Jan 21 07:48:38 crc kubenswrapper[4893]: I0121 07:48:38.720301 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-78fbz_bd1b9df0-d8a3-4418-9d7d-39413613fbfc/registry-server/0.log"
Jan 21 07:48:38 crc kubenswrapper[4893]: I0121 07:48:38.728871 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-78fbz_bd1b9df0-d8a3-4418-9d7d-39413613fbfc/extract-utilities/0.log"
Jan 21 07:48:38 crc kubenswrapper[4893]: I0121 07:48:38.734706 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-78fbz_bd1b9df0-d8a3-4418-9d7d-39413613fbfc/extract-content/0.log"
Jan 21 07:48:39 crc kubenswrapper[4893]: I0121 07:48:39.311049 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2x6tf_939a64aa-242b-4e64-8d78-48770fb3063d/registry-server/0.log"
Jan 21 07:48:39 crc kubenswrapper[4893]: I0121 07:48:39.316349 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2x6tf_939a64aa-242b-4e64-8d78-48770fb3063d/extract-utilities/0.log"
Jan 21 07:48:39 crc kubenswrapper[4893]: I0121 07:48:39.324815 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2x6tf_939a64aa-242b-4e64-8d78-48770fb3063d/extract-content/0.log"
Jan 21 07:48:49 crc kubenswrapper[4893]: I0121 07:48:49.592957 4893 scope.go:117] "RemoveContainer" containerID="7b5a1b1e5e5a61a7b83e0fd59b4e53c46188f9465cc3c4cd1c7706d0df8ead7e"
Jan 21 07:48:49 crc kubenswrapper[4893]: E0121 07:48:49.593630 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a"
Jan 21 07:49:03 crc kubenswrapper[4893]: I0121 07:49:03.581426 4893 scope.go:117] "RemoveContainer" containerID="7b5a1b1e5e5a61a7b83e0fd59b4e53c46188f9465cc3c4cd1c7706d0df8ead7e"
Jan 21 07:49:03 crc kubenswrapper[4893]: E0121 07:49:03.582384 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a"
Jan 21 07:49:19 crc kubenswrapper[4893]: I0121 07:49:19.586927 4893 scope.go:117] "RemoveContainer" containerID="7b5a1b1e5e5a61a7b83e0fd59b4e53c46188f9465cc3c4cd1c7706d0df8ead7e"
Jan 21 07:49:19 crc kubenswrapper[4893]: E0121 07:49:19.587650 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a"
Jan 21 07:49:28 crc kubenswrapper[4893]: I0121 07:49:28.361970 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-9bp8c_1590bf32-4ee6-47a5-baac-14c054272f8e/cert-manager-controller/0.log"
Jan 21 07:49:28 crc kubenswrapper[4893]: I0121 07:49:28.386498 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-fqpm5_7e801f1e-3a74-45d5-9f8c-5fee35cc9fac/cert-manager-cainjector/0.log"
Jan 21 07:49:28 crc kubenswrapper[4893]: I0121 07:49:28.406497 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-6szpd_4a781d80-f82d-4d7b-8974-b3cda4d98186/cert-manager-webhook/0.log"
Jan 21 07:49:28 crc kubenswrapper[4893]: I0121 07:49:28.866970 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-tktrv_36c52d4d-2838-40a8-a87a-b931b770498a/controller/0.log"
Jan 21 07:49:28 crc kubenswrapper[4893]: I0121 07:49:28.877892 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-tktrv_36c52d4d-2838-40a8-a87a-b931b770498a/kube-rbac-proxy/0.log"
Jan 21 07:49:28 crc kubenswrapper[4893]: I0121 07:49:28.889770 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-8hklb_d9c4da05-f65a-473f-873c-2cc7fd6c4c53/frr-k8s-webhook-server/0.log"
Jan 21 07:49:28 crc kubenswrapper[4893]: I0121 07:49:28.924468 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-x2lfk_7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2/controller/0.log"
Jan 21 07:49:28 crc kubenswrapper[4893]: I0121 07:49:28.955058 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-p8hpt"]
Jan 21 07:49:28 crc kubenswrapper[4893]: I0121 07:49:28.960468 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p8hpt"
Jan 21 07:49:28 crc kubenswrapper[4893]: I0121 07:49:28.995702 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p8hpt"]
Jan 21 07:49:29 crc kubenswrapper[4893]: I0121 07:49:29.091499 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c058957a-ac5e-4187-b39c-1485ac520188-catalog-content\") pod \"certified-operators-p8hpt\" (UID: \"c058957a-ac5e-4187-b39c-1485ac520188\") " pod="openshift-marketplace/certified-operators-p8hpt"
Jan 21 07:49:29 crc kubenswrapper[4893]: I0121 07:49:29.091579 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c058957a-ac5e-4187-b39c-1485ac520188-utilities\") pod \"certified-operators-p8hpt\" (UID: \"c058957a-ac5e-4187-b39c-1485ac520188\") " pod="openshift-marketplace/certified-operators-p8hpt"
Jan 21 07:49:29 crc kubenswrapper[4893]: I0121 07:49:29.091636 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64gz7\" (UniqueName: \"kubernetes.io/projected/c058957a-ac5e-4187-b39c-1485ac520188-kube-api-access-64gz7\") pod \"certified-operators-p8hpt\" (UID: \"c058957a-ac5e-4187-b39c-1485ac520188\") " pod="openshift-marketplace/certified-operators-p8hpt"
Jan 21 07:49:29 crc kubenswrapper[4893]: I0121 07:49:29.194028 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64gz7\" (UniqueName: \"kubernetes.io/projected/c058957a-ac5e-4187-b39c-1485ac520188-kube-api-access-64gz7\") pod \"certified-operators-p8hpt\" (UID: \"c058957a-ac5e-4187-b39c-1485ac520188\") " pod="openshift-marketplace/certified-operators-p8hpt"
Jan 21 07:49:29 crc kubenswrapper[4893]: I0121 07:49:29.194133 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c058957a-ac5e-4187-b39c-1485ac520188-catalog-content\") pod \"certified-operators-p8hpt\" (UID: \"c058957a-ac5e-4187-b39c-1485ac520188\") " pod="openshift-marketplace/certified-operators-p8hpt"
Jan 21 07:49:29 crc kubenswrapper[4893]: I0121 07:49:29.194170 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c058957a-ac5e-4187-b39c-1485ac520188-utilities\") pod \"certified-operators-p8hpt\" (UID: \"c058957a-ac5e-4187-b39c-1485ac520188\") " pod="openshift-marketplace/certified-operators-p8hpt"
Jan 21 07:49:29 crc kubenswrapper[4893]: I0121 07:49:29.194982 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c058957a-ac5e-4187-b39c-1485ac520188-catalog-content\") pod \"certified-operators-p8hpt\" (UID: \"c058957a-ac5e-4187-b39c-1485ac520188\") " pod="openshift-marketplace/certified-operators-p8hpt"
Jan 21 07:49:29 crc kubenswrapper[4893]: I0121 07:49:29.195137 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c058957a-ac5e-4187-b39c-1485ac520188-utilities\") pod \"certified-operators-p8hpt\" (UID: \"c058957a-ac5e-4187-b39c-1485ac520188\") " pod="openshift-marketplace/certified-operators-p8hpt"
Jan 21 07:49:29 crc kubenswrapper[4893]: I0121 07:49:29.215603 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64gz7\" (UniqueName: \"kubernetes.io/projected/c058957a-ac5e-4187-b39c-1485ac520188-kube-api-access-64gz7\") pod \"certified-operators-p8hpt\" (UID: \"c058957a-ac5e-4187-b39c-1485ac520188\") " pod="openshift-marketplace/certified-operators-p8hpt"
Jan 21 07:49:29 crc kubenswrapper[4893]: I0121 07:49:29.286374 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p8hpt"
Jan 21 07:49:30 crc kubenswrapper[4893]: I0121 07:49:30.057304 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p8hpt"]
Jan 21 07:49:30 crc kubenswrapper[4893]: I0121 07:49:30.213292 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-x2lfk_7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2/frr/0.log"
Jan 21 07:49:30 crc kubenswrapper[4893]: I0121 07:49:30.221779 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-x2lfk_7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2/reloader/0.log"
Jan 21 07:49:30 crc kubenswrapper[4893]: I0121 07:49:30.227647 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-x2lfk_7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2/frr-metrics/0.log"
Jan 21 07:49:30 crc kubenswrapper[4893]: I0121 07:49:30.234420 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-x2lfk_7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2/kube-rbac-proxy/0.log"
Jan 21 07:49:30 crc kubenswrapper[4893]: I0121 07:49:30.244682 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-x2lfk_7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2/kube-rbac-proxy-frr/0.log"
Jan 21 07:49:30 crc kubenswrapper[4893]: I0121 07:49:30.251919 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-x2lfk_7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2/cp-frr-files/0.log"
Jan 21 07:49:30 crc kubenswrapper[4893]: I0121 07:49:30.258555 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-x2lfk_7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2/cp-reloader/0.log"
Jan 21 07:49:30 crc kubenswrapper[4893]: I0121 07:49:30.259616 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2egsst6_88357780-da4e-4ab0-810d-3271b6f37bfc/extract/0.log"
Jan 21 07:49:30 crc kubenswrapper[4893]: I0121 07:49:30.266344 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2egsst6_88357780-da4e-4ab0-810d-3271b6f37bfc/util/0.log"
Jan 21 07:49:30 crc kubenswrapper[4893]: I0121 07:49:30.267723 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-x2lfk_7a8b031f-dd1d-425c-86c5-8ffe34ed8cb2/cp-metrics/0.log"
Jan 21 07:49:30 crc kubenswrapper[4893]: I0121 07:49:30.275077 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2egsst6_88357780-da4e-4ab0-810d-3271b6f37bfc/pull/0.log"
Jan 21 07:49:30 crc kubenswrapper[4893]: I0121 07:49:30.292384 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-d4c4497c9-rmz6v_5b83e248-7f4d-4294-808c-91878658bf38/manager/0.log"
Jan 21 07:49:30 crc kubenswrapper[4893]: I0121 07:49:30.307424 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-856864dc54-jk8lc_ce067ec2-2d04-4566-8868-62c78e8c64f3/webhook-server/0.log"
Jan 21 07:49:30 crc kubenswrapper[4893]: I0121 07:49:30.428997 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-d8f8v_ec3cd342-ebee-4689-a339-72ca3fd65506/manager/0.log"
Jan 21 07:49:30 crc kubenswrapper[4893]: I0121 07:49:30.529827 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-b5mdw_a7d9e99c-b2eb-481e-be87-a69b88b6609e/manager/0.log"
Jan 21 07:49:30 crc kubenswrapper[4893]: I0121 07:49:30.542026 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-m9lt8_00d7ea70-2b23-491d-841f-0513cdb3652f/manager/0.log"
Jan 21 07:49:30 crc kubenswrapper[4893]: I0121 07:49:30.580455 4893 scope.go:117] "RemoveContainer" containerID="7b5a1b1e5e5a61a7b83e0fd59b4e53c46188f9465cc3c4cd1c7706d0df8ead7e"
Jan 21 07:49:30 crc kubenswrapper[4893]: E0121 07:49:30.580822 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a"
Jan 21 07:49:30 crc kubenswrapper[4893]: I0121 07:49:30.673154 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-hddtb_77cb4b5b-8911-40eb-9a0a-066503abf27f/manager/0.log"
Jan 21 07:49:30 crc kubenswrapper[4893]: I0121 07:49:30.686533 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-kjsg4_6ef85e8d-2997-4005-bcf3-7a99994402d0/manager/0.log"
Jan 21 07:49:30 crc kubenswrapper[4893]: I0121 07:49:30.697458 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-txncj_af56a391-1d1e-4b94-8ec9-f1eb4f332995/manager/0.log"
Jan 21 07:49:30 crc kubenswrapper[4893]: I0121 07:49:30.831482 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-kq57r_46f9fdd5-a28f-4966-9100-d15a3d829cd1/speaker/0.log"
Jan 21 07:49:30 crc kubenswrapper[4893]: I0121 07:49:30.840091 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-kq57r_46f9fdd5-a28f-4966-9100-d15a3d829cd1/kube-rbac-proxy/0.log"
Jan 21 07:49:30 crc kubenswrapper[4893]: I0121 07:49:30.948247 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-xjlf7_c5280c4a-bab8-4a47-8fb4-91aab130cd63/manager/0.log"
Jan 21 07:49:30 crc kubenswrapper[4893]: I0121 07:49:30.958867 4893 generic.go:334] "Generic (PLEG): container finished" podID="c058957a-ac5e-4187-b39c-1485ac520188" containerID="16350a407f674b4b75e8816419f906e13962e0788058cde2e6ee3e6db5618ee4" exitCode=0
Jan 21 07:49:30 crc kubenswrapper[4893]: I0121 07:49:30.959088 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p8hpt" event={"ID":"c058957a-ac5e-4187-b39c-1485ac520188","Type":"ContainerDied","Data":"16350a407f674b4b75e8816419f906e13962e0788058cde2e6ee3e6db5618ee4"}
Jan 21 07:49:30 crc kubenswrapper[4893]: I0121 07:49:30.959183 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p8hpt" event={"ID":"c058957a-ac5e-4187-b39c-1485ac520188","Type":"ContainerStarted","Data":"d10f0bba4f5488289b0098a8cf75b6db5a04e37fe5dc01a8a6c38b51a2a0c5e5"}
Jan 21 07:49:30 crc kubenswrapper[4893]: I0121 07:49:30.959403 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-6w85k_a65f5625-37ea-46b9-9f9f-f0a9e608b890/manager/0.log"
Jan 21 07:49:31 crc kubenswrapper[4893]: I0121 07:49:31.205286 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-f2jht_3b13f8c5-634b-437a-9dc9-2bfbd854de9d/manager/0.log"
Jan 21 07:49:31 crc kubenswrapper[4893]: I0121 07:49:31.221688 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-j5c58_e58e390d-227b-4d43-9216-c208196b0192/manager/0.log"
Jan 21 07:49:31 crc kubenswrapper[4893]: I0121 07:49:31.261512 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-tcgf7_aaae8540-3604-4523-9f39-b8bf8fd1d03c/manager/0.log"
Jan 21 07:49:31 crc kubenswrapper[4893]: I0121 07:49:31.305946 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-htcd2_3c023ffb-4503-4997-9fac-84414eb67f2e/manager/0.log"
Jan 21 07:49:31 crc kubenswrapper[4893]: I0121 07:49:31.350473 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4mmll"]
Jan 21 07:49:31 crc kubenswrapper[4893]: I0121 07:49:31.364282 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4mmll"
Jan 21 07:49:31 crc kubenswrapper[4893]: I0121 07:49:31.371823 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56f5a2de-2748-474d-8b7f-40e94067673b-utilities\") pod \"community-operators-4mmll\" (UID: \"56f5a2de-2748-474d-8b7f-40e94067673b\") " pod="openshift-marketplace/community-operators-4mmll"
Jan 21 07:49:31 crc kubenswrapper[4893]: I0121 07:49:31.371864 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56f5a2de-2748-474d-8b7f-40e94067673b-catalog-content\") pod \"community-operators-4mmll\" (UID: \"56f5a2de-2748-474d-8b7f-40e94067673b\") " pod="openshift-marketplace/community-operators-4mmll"
Jan 21 07:49:31 crc kubenswrapper[4893]: I0121 07:49:31.371928 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2zz5\" (UniqueName: \"kubernetes.io/projected/56f5a2de-2748-474d-8b7f-40e94067673b-kube-api-access-b2zz5\") pod \"community-operators-4mmll\" (UID: \"56f5a2de-2748-474d-8b7f-40e94067673b\") " pod="openshift-marketplace/community-operators-4mmll"
Jan 21 07:49:31 crc kubenswrapper[4893]: I0121 07:49:31.383101 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4mmll"]
Jan 21 07:49:31 crc kubenswrapper[4893]: I0121 07:49:31.397181 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-g6gf8_4b0f2392-37e2-447f-b542-e85bf4af7af9/manager/0.log"
Jan 21 07:49:31 crc kubenswrapper[4893]: I0121 07:49:31.420968 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-frxpc_31bc0fab-5394-4e78-a116-2d8d09736824/manager/0.log"
Jan 21 07:49:31 crc kubenswrapper[4893]: I0121 07:49:31.436894 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5b9875986d69gjw_4142220f-0688-47a2-9bec-d655f97fe3c6/manager/0.log"
Jan 21 07:49:31 crc kubenswrapper[4893]: I0121 07:49:31.472949 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56f5a2de-2748-474d-8b7f-40e94067673b-utilities\") pod \"community-operators-4mmll\" (UID: \"56f5a2de-2748-474d-8b7f-40e94067673b\") " pod="openshift-marketplace/community-operators-4mmll"
Jan 21 07:49:31 crc kubenswrapper[4893]: I0121 07:49:31.473000 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56f5a2de-2748-474d-8b7f-40e94067673b-catalog-content\") pod \"community-operators-4mmll\" (UID: \"56f5a2de-2748-474d-8b7f-40e94067673b\") " pod="openshift-marketplace/community-operators-4mmll"
Jan 21 07:49:31 crc kubenswrapper[4893]: I0121 07:49:31.473085 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2zz5\" (UniqueName: \"kubernetes.io/projected/56f5a2de-2748-474d-8b7f-40e94067673b-kube-api-access-b2zz5\") pod \"community-operators-4mmll\" (UID: \"56f5a2de-2748-474d-8b7f-40e94067673b\") " pod="openshift-marketplace/community-operators-4mmll"
Jan 21 07:49:31 crc kubenswrapper[4893]: I0121 07:49:31.473771 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56f5a2de-2748-474d-8b7f-40e94067673b-utilities\") pod \"community-operators-4mmll\" (UID: \"56f5a2de-2748-474d-8b7f-40e94067673b\") " pod="openshift-marketplace/community-operators-4mmll"
Jan 21 07:49:31 crc kubenswrapper[4893]: I0121 07:49:31.473818 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56f5a2de-2748-474d-8b7f-40e94067673b-catalog-content\") pod \"community-operators-4mmll\" (UID: \"56f5a2de-2748-474d-8b7f-40e94067673b\") " pod="openshift-marketplace/community-operators-4mmll"
Jan 21 07:49:31 crc kubenswrapper[4893]: I0121 07:49:31.491347 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2zz5\" (UniqueName: \"kubernetes.io/projected/56f5a2de-2748-474d-8b7f-40e94067673b-kube-api-access-b2zz5\") pod \"community-operators-4mmll\" (UID: \"56f5a2de-2748-474d-8b7f-40e94067673b\") " pod="openshift-marketplace/community-operators-4mmll"
Jan 21 07:49:31 crc kubenswrapper[4893]: I0121 07:49:31.571123 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6d4d7d8545-fx4n5_87317749-c103-4670-b65e-e7fea5002024/operator/0.log"
Jan 21 07:49:31 crc kubenswrapper[4893]: I0121 07:49:31.695913 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4mmll"
Jan 21 07:49:32 crc kubenswrapper[4893]: I0121 07:49:32.364691 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4mmll"]
Jan 21 07:49:32 crc kubenswrapper[4893]: I0121 07:49:32.905463 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-75bfd788c8-2dz2q_86f3a900-b203-4f96-b922-b7fdf0afab7b/manager/0.log"
Jan 21 07:49:32 crc kubenswrapper[4893]: I0121 07:49:32.912001 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-9bp8c_1590bf32-4ee6-47a5-baac-14c054272f8e/cert-manager-controller/0.log"
Jan 21 07:49:32 crc kubenswrapper[4893]: I0121 07:49:32.932019 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-fqpm5_7e801f1e-3a74-45d5-9f8c-5fee35cc9fac/cert-manager-cainjector/0.log"
Jan 21 07:49:32 crc kubenswrapper[4893]: I0121 07:49:32.944186 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-6szpd_4a781d80-f82d-4d7b-8974-b3cda4d98186/cert-manager-webhook/0.log"
Jan 21 07:49:32 crc kubenswrapper[4893]: I0121 07:49:32.967155 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-mwkv7_9cd4b07c-856e-42d0-8a00-7ecf01b01924/registry-server/0.log"
Jan 21 07:49:33 crc kubenswrapper[4893]: I0121 07:49:33.052443 4893 generic.go:334] "Generic (PLEG): container finished" podID="c058957a-ac5e-4187-b39c-1485ac520188" containerID="3428165d80f7f1d90b1c087c57e5db740b14b77ac512813a71d2f3fd31e2ff0d" exitCode=0
Jan 21 07:49:33 crc kubenswrapper[4893]: I0121 07:49:33.052563 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p8hpt" event={"ID":"c058957a-ac5e-4187-b39c-1485ac520188","Type":"ContainerDied","Data":"3428165d80f7f1d90b1c087c57e5db740b14b77ac512813a71d2f3fd31e2ff0d"}
Jan 21 07:49:33 crc kubenswrapper[4893]: I0121 07:49:33.055887 4893
generic.go:334] "Generic (PLEG): container finished" podID="56f5a2de-2748-474d-8b7f-40e94067673b" containerID="da5ff2cb6f75d59b65d38a8b49af61e061a5a5c33f42aa99f0c405d0a14f89e0" exitCode=0 Jan 21 07:49:33 crc kubenswrapper[4893]: I0121 07:49:33.055938 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mmll" event={"ID":"56f5a2de-2748-474d-8b7f-40e94067673b","Type":"ContainerDied","Data":"da5ff2cb6f75d59b65d38a8b49af61e061a5a5c33f42aa99f0c405d0a14f89e0"} Jan 21 07:49:33 crc kubenswrapper[4893]: I0121 07:49:33.055965 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mmll" event={"ID":"56f5a2de-2748-474d-8b7f-40e94067673b","Type":"ContainerStarted","Data":"9e8594bd07403848192282e124d6ee49046667e04fd16c2cfb32a7628eed2717"} Jan 21 07:49:33 crc kubenswrapper[4893]: I0121 07:49:33.070808 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-6gpxx_aad4ef7e-44ff-4da0-8a54-b8fb68017270/manager/0.log" Jan 21 07:49:33 crc kubenswrapper[4893]: I0121 07:49:33.092816 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-bmm9s_ac6cc898-5b96-4a0a-8014-bf17132e44fc/manager/0.log" Jan 21 07:49:33 crc kubenswrapper[4893]: I0121 07:49:33.103845 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-zw22v_bac4cdab-0839-4940-9a12-bb933e88a1da/operator/0.log" Jan 21 07:49:33 crc kubenswrapper[4893]: I0121 07:49:33.130459 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-9hqln_f6fcb0d4-e51c-476f-9411-469bbdbd7f4e/manager/0.log" Jan 21 07:49:33 crc kubenswrapper[4893]: I0121 07:49:33.187868 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-v98wk_9c6d7f75-6c22-44ec-ba62-a1223f2eaa3b/manager/0.log" Jan 21 07:49:33 crc kubenswrapper[4893]: I0121 07:49:33.197512 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-ccg72_12028a4c-13ac-46cd-862e-7a6e01614e1a/manager/0.log" Jan 21 07:49:33 crc kubenswrapper[4893]: I0121 07:49:33.207648 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-6ppkr_271330f3-2299-491c-a7cc-56e7e4e5af9a/manager/0.log" Jan 21 07:49:33 crc kubenswrapper[4893]: I0121 07:49:33.659126 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-p9bnb_118d8602-b5ce-4a7c-bf0c-17d74ce7ebda/control-plane-machine-set-operator/0.log" Jan 21 07:49:33 crc kubenswrapper[4893]: I0121 07:49:33.685496 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-jc8jx_458a2b28-04ce-4c9f-840b-9130dfd79140/kube-rbac-proxy/0.log" Jan 21 07:49:33 crc kubenswrapper[4893]: I0121 07:49:33.698490 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-jc8jx_458a2b28-04ce-4c9f-840b-9130dfd79140/machine-api-operator/0.log" Jan 21 07:49:34 crc kubenswrapper[4893]: I0121 07:49:34.065633 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p8hpt" 
event={"ID":"c058957a-ac5e-4187-b39c-1485ac520188","Type":"ContainerStarted","Data":"38986bf5a32a81c0c4cb7b477b40c97b2c27fc95c0972b061201094a5892bcd6"} Jan 21 07:49:34 crc kubenswrapper[4893]: I0121 07:49:34.069664 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mmll" event={"ID":"56f5a2de-2748-474d-8b7f-40e94067673b","Type":"ContainerStarted","Data":"4bfa8600e047d6de23484876dee6265d588161a5656499b8650c1368f4b35fc3"} Jan 21 07:49:34 crc kubenswrapper[4893]: I0121 07:49:34.094649 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-p8hpt" podStartSLOduration=3.543198522 podStartE2EDuration="6.094024843s" podCreationTimestamp="2026-01-21 07:49:28 +0000 UTC" firstStartedPulling="2026-01-21 07:49:30.960461899 +0000 UTC m=+3312.190807801" lastFinishedPulling="2026-01-21 07:49:33.51128822 +0000 UTC m=+3314.741634122" observedRunningTime="2026-01-21 07:49:34.088176381 +0000 UTC m=+3315.318522293" watchObservedRunningTime="2026-01-21 07:49:34.094024843 +0000 UTC m=+3315.324370745" Jan 21 07:49:34 crc kubenswrapper[4893]: I0121 07:49:34.520737 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2egsst6_88357780-da4e-4ab0-810d-3271b6f37bfc/extract/0.log" Jan 21 07:49:34 crc kubenswrapper[4893]: I0121 07:49:34.531513 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2egsst6_88357780-da4e-4ab0-810d-3271b6f37bfc/util/0.log" Jan 21 07:49:34 crc kubenswrapper[4893]: I0121 07:49:34.545257 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2egsst6_88357780-da4e-4ab0-810d-3271b6f37bfc/pull/0.log" Jan 21 07:49:34 crc kubenswrapper[4893]: I0121 07:49:34.630227 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-d8f8v_ec3cd342-ebee-4689-a339-72ca3fd65506/manager/0.log" Jan 21 07:49:34 crc kubenswrapper[4893]: I0121 07:49:34.669478 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-b5mdw_a7d9e99c-b2eb-481e-be87-a69b88b6609e/manager/0.log" Jan 21 07:49:34 crc kubenswrapper[4893]: I0121 07:49:34.682410 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-m9lt8_00d7ea70-2b23-491d-841f-0513cdb3652f/manager/0.log" Jan 21 07:49:34 crc kubenswrapper[4893]: I0121 07:49:34.698844 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-mp8lb_594278ba-8824-49d6-9b6d-a5a0e8dd66ae/nmstate-console-plugin/0.log" Jan 21 07:49:34 crc kubenswrapper[4893]: I0121 07:49:34.720525 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-9bmdw_dea30d59-92e1-4ebf-ba1a-5e5f18cbeb61/nmstate-handler/0.log" Jan 21 07:49:34 crc kubenswrapper[4893]: I0121 07:49:34.737603 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-5l8kf_df7d5aed-8a1d-4936-a9a2-75d9d2228de5/nmstate-metrics/0.log" Jan 21 07:49:34 crc kubenswrapper[4893]: I0121 07:49:34.748950 4893 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-5l8kf_df7d5aed-8a1d-4936-a9a2-75d9d2228de5/kube-rbac-proxy/0.log" Jan 21 07:49:34 crc kubenswrapper[4893]: I0121 07:49:34.764655 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-q822p_cf682009-b6d7-4665-ab3f-5894a39a3a09/nmstate-operator/0.log" Jan 21 07:49:34 crc kubenswrapper[4893]: I0121 07:49:34.777058 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-j6t2h_187ac6cf-a917-4345-983b-a806aa8906b9/nmstate-webhook/0.log" Jan 21 07:49:34 crc kubenswrapper[4893]: I0121 07:49:34.778141 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-hddtb_77cb4b5b-8911-40eb-9a0a-066503abf27f/manager/0.log" Jan 21 07:49:34 crc kubenswrapper[4893]: I0121 07:49:34.786711 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-kjsg4_6ef85e8d-2997-4005-bcf3-7a99994402d0/manager/0.log" Jan 21 07:49:34 crc kubenswrapper[4893]: I0121 07:49:34.804022 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-txncj_af56a391-1d1e-4b94-8ec9-f1eb4f332995/manager/0.log" Jan 21 07:49:35 crc kubenswrapper[4893]: I0121 07:49:35.068655 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-xjlf7_c5280c4a-bab8-4a47-8fb4-91aab130cd63/manager/0.log" Jan 21 07:49:35 crc kubenswrapper[4893]: I0121 07:49:35.078618 4893 generic.go:334] "Generic (PLEG): container finished" podID="56f5a2de-2748-474d-8b7f-40e94067673b" containerID="4bfa8600e047d6de23484876dee6265d588161a5656499b8650c1368f4b35fc3" exitCode=0 Jan 21 07:49:35 crc kubenswrapper[4893]: I0121 07:49:35.078854 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mmll" event={"ID":"56f5a2de-2748-474d-8b7f-40e94067673b","Type":"ContainerDied","Data":"4bfa8600e047d6de23484876dee6265d588161a5656499b8650c1368f4b35fc3"} Jan 21 07:49:35 crc kubenswrapper[4893]: I0121 07:49:35.080302 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-6w85k_a65f5625-37ea-46b9-9f9f-f0a9e608b890/manager/0.log" Jan 21 07:49:35 crc kubenswrapper[4893]: I0121 07:49:35.181333 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-f2jht_3b13f8c5-634b-437a-9dc9-2bfbd854de9d/manager/0.log" Jan 21 07:49:35 crc kubenswrapper[4893]: I0121 07:49:35.190449 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-j5c58_e58e390d-227b-4d43-9216-c208196b0192/manager/0.log" Jan 21 07:49:35 crc kubenswrapper[4893]: I0121 07:49:35.226757 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-tcgf7_aaae8540-3604-4523-9f39-b8bf8fd1d03c/manager/0.log" Jan 21 07:49:35 crc kubenswrapper[4893]: I0121 07:49:35.265115 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-htcd2_3c023ffb-4503-4997-9fac-84414eb67f2e/manager/0.log" Jan 21 07:49:35 crc kubenswrapper[4893]: I0121 07:49:35.419889 4893 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-g6gf8_4b0f2392-37e2-447f-b542-e85bf4af7af9/manager/0.log" Jan 21 07:49:35 crc kubenswrapper[4893]: I0121 07:49:35.429869 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-frxpc_31bc0fab-5394-4e78-a116-2d8d09736824/manager/0.log" Jan 21 07:49:35 crc kubenswrapper[4893]: I0121 07:49:35.446150 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5b9875986d69gjw_4142220f-0688-47a2-9bec-d655f97fe3c6/manager/0.log" Jan 21 07:49:35 crc kubenswrapper[4893]: I0121 07:49:35.627887 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6d4d7d8545-fx4n5_87317749-c103-4670-b65e-e7fea5002024/operator/0.log" Jan 21 07:49:36 crc kubenswrapper[4893]: I0121 07:49:36.091206 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mmll" event={"ID":"56f5a2de-2748-474d-8b7f-40e94067673b","Type":"ContainerStarted","Data":"cd28064a1f13de6460800f370765533f7024a1ffc78d30e0f59eb66ac6ba4f78"} Jan 21 07:49:36 crc kubenswrapper[4893]: I0121 07:49:36.108435 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4mmll" podStartSLOduration=2.604122492 podStartE2EDuration="5.108416735s" podCreationTimestamp="2026-01-21 07:49:31 +0000 UTC" firstStartedPulling="2026-01-21 07:49:33.058749404 +0000 UTC m=+3314.289095306" lastFinishedPulling="2026-01-21 07:49:35.563043647 +0000 UTC m=+3316.793389549" observedRunningTime="2026-01-21 07:49:36.107201701 +0000 UTC m=+3317.337547613" watchObservedRunningTime="2026-01-21 07:49:36.108416735 +0000 UTC m=+3317.338762637" Jan 21 07:49:36 crc kubenswrapper[4893]: I0121 07:49:36.525226 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-75bfd788c8-2dz2q_86f3a900-b203-4f96-b922-b7fdf0afab7b/manager/0.log" Jan 21 07:49:36 crc kubenswrapper[4893]: I0121 07:49:36.594817 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-mwkv7_9cd4b07c-856e-42d0-8a00-7ecf01b01924/registry-server/0.log" Jan 21 07:49:36 crc kubenswrapper[4893]: I0121 07:49:36.661606 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-6gpxx_aad4ef7e-44ff-4da0-8a54-b8fb68017270/manager/0.log" Jan 21 07:49:36 crc kubenswrapper[4893]: I0121 07:49:36.694148 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-bmm9s_ac6cc898-5b96-4a0a-8014-bf17132e44fc/manager/0.log" Jan 21 07:49:36 crc kubenswrapper[4893]: I0121 07:49:36.708688 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-zw22v_bac4cdab-0839-4940-9a12-bb933e88a1da/operator/0.log" Jan 21 07:49:36 crc kubenswrapper[4893]: I0121 07:49:36.740705 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-9hqln_f6fcb0d4-e51c-476f-9411-469bbdbd7f4e/manager/0.log" Jan 21 07:49:36 crc kubenswrapper[4893]: I0121 07:49:36.809270 4893 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-v98wk_9c6d7f75-6c22-44ec-ba62-a1223f2eaa3b/manager/0.log" Jan 21 07:49:36 crc kubenswrapper[4893]: I0121 07:49:36.819517 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-ccg72_12028a4c-13ac-46cd-862e-7a6e01614e1a/manager/0.log" Jan 21 07:49:36 crc kubenswrapper[4893]: I0121 07:49:36.845548 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-6ppkr_271330f3-2299-491c-a7cc-56e7e4e5af9a/manager/0.log" Jan 21 07:49:38 crc kubenswrapper[4893]: I0121 07:49:38.944493 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-h28gn_708c6ae7-fdf7-44d1-ae88-f6abbb247f93/kube-multus-additional-cni-plugins/0.log" Jan 21 07:49:38 crc kubenswrapper[4893]: I0121 07:49:38.952970 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-h28gn_708c6ae7-fdf7-44d1-ae88-f6abbb247f93/egress-router-binary-copy/0.log" Jan 21 07:49:38 crc kubenswrapper[4893]: I0121 07:49:38.959894 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-h28gn_708c6ae7-fdf7-44d1-ae88-f6abbb247f93/cni-plugins/0.log" Jan 21 07:49:38 crc kubenswrapper[4893]: I0121 07:49:38.968941 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-h28gn_708c6ae7-fdf7-44d1-ae88-f6abbb247f93/bond-cni-plugin/0.log" Jan 21 07:49:38 crc kubenswrapper[4893]: I0121 07:49:38.975896 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-h28gn_708c6ae7-fdf7-44d1-ae88-f6abbb247f93/routeoverride-cni/0.log" Jan 21 07:49:38 crc kubenswrapper[4893]: I0121 07:49:38.983644 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-h28gn_708c6ae7-fdf7-44d1-ae88-f6abbb247f93/whereabouts-cni-bincopy/0.log" Jan 21 07:49:38 crc kubenswrapper[4893]: I0121 07:49:38.997904 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-h28gn_708c6ae7-fdf7-44d1-ae88-f6abbb247f93/whereabouts-cni/0.log" Jan 21 07:49:39 crc kubenswrapper[4893]: I0121 07:49:39.035791 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-lszzb_be4fc165-16c3-442f-b61d-bec9bbeb9b0f/multus-admission-controller/0.log" Jan 21 07:49:39 crc kubenswrapper[4893]: I0121 07:49:39.043335 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-lszzb_be4fc165-16c3-442f-b61d-bec9bbeb9b0f/kube-rbac-proxy/0.log" Jan 21 07:49:39 crc kubenswrapper[4893]: I0121 07:49:39.111591 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-m8k4g_ecb64775-90e7-43a2-a5a8-4d73e348dcc4/kube-multus/2.log" Jan 21 07:49:39 crc kubenswrapper[4893]: I0121 07:49:39.166991 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-m8k4g_ecb64775-90e7-43a2-a5a8-4d73e348dcc4/kube-multus/3.log" Jan 21 07:49:39 crc kubenswrapper[4893]: I0121 07:49:39.225838 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-rc5gb_e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8/network-metrics-daemon/0.log" Jan 21 07:49:39 
crc kubenswrapper[4893]: I0121 07:49:39.232863 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-rc5gb_e25a3285-f1ff-4fe3-aa36-fff5b49f1cc8/kube-rbac-proxy/0.log" Jan 21 07:49:39 crc kubenswrapper[4893]: I0121 07:49:39.286772 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-p8hpt" Jan 21 07:49:39 crc kubenswrapper[4893]: I0121 07:49:39.286827 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-p8hpt" Jan 21 07:49:39 crc kubenswrapper[4893]: I0121 07:49:39.351764 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-p8hpt" Jan 21 07:49:40 crc kubenswrapper[4893]: I0121 07:49:40.166085 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-p8hpt" Jan 21 07:49:40 crc kubenswrapper[4893]: I0121 07:49:40.224092 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p8hpt"] Jan 21 07:49:41 crc kubenswrapper[4893]: I0121 07:49:41.696860 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4mmll" Jan 21 07:49:41 crc kubenswrapper[4893]: I0121 07:49:41.697236 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4mmll" Jan 21 07:49:41 crc kubenswrapper[4893]: I0121 07:49:41.762057 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4mmll" Jan 21 07:49:42 crc kubenswrapper[4893]: I0121 07:49:42.146239 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-p8hpt" podUID="c058957a-ac5e-4187-b39c-1485ac520188" containerName="registry-server" containerID="cri-o://38986bf5a32a81c0c4cb7b477b40c97b2c27fc95c0972b061201094a5892bcd6" gracePeriod=2 Jan 21 07:49:42 crc kubenswrapper[4893]: I0121 07:49:42.264264 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4mmll" Jan 21 07:49:43 crc kubenswrapper[4893]: I0121 07:49:43.352787 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4mmll"] Jan 21 07:49:43 crc kubenswrapper[4893]: I0121 07:49:43.787449 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p8hpt" Jan 21 07:49:43 crc kubenswrapper[4893]: I0121 07:49:43.901896 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c058957a-ac5e-4187-b39c-1485ac520188-utilities\") pod \"c058957a-ac5e-4187-b39c-1485ac520188\" (UID: \"c058957a-ac5e-4187-b39c-1485ac520188\") " Jan 21 07:49:43 crc kubenswrapper[4893]: I0121 07:49:43.902009 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64gz7\" (UniqueName: \"kubernetes.io/projected/c058957a-ac5e-4187-b39c-1485ac520188-kube-api-access-64gz7\") pod \"c058957a-ac5e-4187-b39c-1485ac520188\" (UID: \"c058957a-ac5e-4187-b39c-1485ac520188\") " Jan 21 07:49:43 crc kubenswrapper[4893]: I0121 07:49:43.902111 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c058957a-ac5e-4187-b39c-1485ac520188-catalog-content\") pod \"c058957a-ac5e-4187-b39c-1485ac520188\" (UID: \"c058957a-ac5e-4187-b39c-1485ac520188\") " Jan 21 07:49:43 crc kubenswrapper[4893]: I0121 07:49:43.903576 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c058957a-ac5e-4187-b39c-1485ac520188-utilities" (OuterVolumeSpecName: "utilities") pod "c058957a-ac5e-4187-b39c-1485ac520188" (UID: "c058957a-ac5e-4187-b39c-1485ac520188"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:49:43 crc kubenswrapper[4893]: I0121 07:49:43.909386 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c058957a-ac5e-4187-b39c-1485ac520188-kube-api-access-64gz7" (OuterVolumeSpecName: "kube-api-access-64gz7") pod "c058957a-ac5e-4187-b39c-1485ac520188" (UID: "c058957a-ac5e-4187-b39c-1485ac520188"). InnerVolumeSpecName "kube-api-access-64gz7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:49:43 crc kubenswrapper[4893]: I0121 07:49:43.960201 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c058957a-ac5e-4187-b39c-1485ac520188-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c058957a-ac5e-4187-b39c-1485ac520188" (UID: "c058957a-ac5e-4187-b39c-1485ac520188"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:49:44 crc kubenswrapper[4893]: I0121 07:49:44.003272 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c058957a-ac5e-4187-b39c-1485ac520188-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 07:49:44 crc kubenswrapper[4893]: I0121 07:49:44.003311 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c058957a-ac5e-4187-b39c-1485ac520188-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 07:49:44 crc kubenswrapper[4893]: I0121 07:49:44.003323 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64gz7\" (UniqueName: \"kubernetes.io/projected/c058957a-ac5e-4187-b39c-1485ac520188-kube-api-access-64gz7\") on node \"crc\" DevicePath \"\"" Jan 21 07:49:44 crc kubenswrapper[4893]: I0121 07:49:44.171896 4893 generic.go:334] "Generic (PLEG): container finished" podID="c058957a-ac5e-4187-b39c-1485ac520188" containerID="38986bf5a32a81c0c4cb7b477b40c97b2c27fc95c0972b061201094a5892bcd6" exitCode=0 Jan 21 07:49:44 crc kubenswrapper[4893]: I0121 07:49:44.172031 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p8hpt" event={"ID":"c058957a-ac5e-4187-b39c-1485ac520188","Type":"ContainerDied","Data":"38986bf5a32a81c0c4cb7b477b40c97b2c27fc95c0972b061201094a5892bcd6"} Jan 21 07:49:44 crc kubenswrapper[4893]: I0121 07:49:44.172352 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p8hpt" event={"ID":"c058957a-ac5e-4187-b39c-1485ac520188","Type":"ContainerDied","Data":"d10f0bba4f5488289b0098a8cf75b6db5a04e37fe5dc01a8a6c38b51a2a0c5e5"} Jan 21 07:49:44 crc kubenswrapper[4893]: I0121 07:49:44.172130 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p8hpt" Jan 21 07:49:44 crc kubenswrapper[4893]: I0121 07:49:44.172572 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4mmll" podUID="56f5a2de-2748-474d-8b7f-40e94067673b" containerName="registry-server" containerID="cri-o://cd28064a1f13de6460800f370765533f7024a1ffc78d30e0f59eb66ac6ba4f78" gracePeriod=2 Jan 21 07:49:44 crc kubenswrapper[4893]: I0121 07:49:44.172413 4893 scope.go:117] "RemoveContainer" containerID="38986bf5a32a81c0c4cb7b477b40c97b2c27fc95c0972b061201094a5892bcd6" Jan 21 07:49:44 crc kubenswrapper[4893]: I0121 07:49:44.211113 4893 scope.go:117] "RemoveContainer" containerID="3428165d80f7f1d90b1c087c57e5db740b14b77ac512813a71d2f3fd31e2ff0d" Jan 21 07:49:44 crc kubenswrapper[4893]: I0121 07:49:44.242478 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p8hpt"] Jan 21 07:49:44 crc kubenswrapper[4893]: I0121 07:49:44.249447 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-p8hpt"] Jan 21 07:49:44 crc kubenswrapper[4893]: I0121 07:49:44.301155 4893 scope.go:117] "RemoveContainer" containerID="16350a407f674b4b75e8816419f906e13962e0788058cde2e6ee3e6db5618ee4" Jan 21 07:49:44 crc kubenswrapper[4893]: I0121 07:49:44.367454 4893 scope.go:117] "RemoveContainer" containerID="38986bf5a32a81c0c4cb7b477b40c97b2c27fc95c0972b061201094a5892bcd6" Jan 21 07:49:44 crc kubenswrapper[4893]: E0121 07:49:44.368094 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38986bf5a32a81c0c4cb7b477b40c97b2c27fc95c0972b061201094a5892bcd6\": container with ID starting with 38986bf5a32a81c0c4cb7b477b40c97b2c27fc95c0972b061201094a5892bcd6 not found: ID does not exist" containerID="38986bf5a32a81c0c4cb7b477b40c97b2c27fc95c0972b061201094a5892bcd6" Jan 21 07:49:44 crc kubenswrapper[4893]: I0121 07:49:44.368182 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38986bf5a32a81c0c4cb7b477b40c97b2c27fc95c0972b061201094a5892bcd6"} err="failed to get container status \"38986bf5a32a81c0c4cb7b477b40c97b2c27fc95c0972b061201094a5892bcd6\": rpc error: code = NotFound desc = could not find container \"38986bf5a32a81c0c4cb7b477b40c97b2c27fc95c0972b061201094a5892bcd6\": container with ID starting with 38986bf5a32a81c0c4cb7b477b40c97b2c27fc95c0972b061201094a5892bcd6 not found: ID does not exist" Jan 21 07:49:44 crc kubenswrapper[4893]: I0121 07:49:44.368221 4893 scope.go:117] "RemoveContainer" containerID="3428165d80f7f1d90b1c087c57e5db740b14b77ac512813a71d2f3fd31e2ff0d" Jan 21 07:49:44 crc kubenswrapper[4893]: E0121 07:49:44.368929 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3428165d80f7f1d90b1c087c57e5db740b14b77ac512813a71d2f3fd31e2ff0d\": container with ID starting with 3428165d80f7f1d90b1c087c57e5db740b14b77ac512813a71d2f3fd31e2ff0d not found: ID does not exist" containerID="3428165d80f7f1d90b1c087c57e5db740b14b77ac512813a71d2f3fd31e2ff0d" Jan 21 07:49:44 crc kubenswrapper[4893]: I0121 07:49:44.368968 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3428165d80f7f1d90b1c087c57e5db740b14b77ac512813a71d2f3fd31e2ff0d"} err="failed to get container status \"3428165d80f7f1d90b1c087c57e5db740b14b77ac512813a71d2f3fd31e2ff0d\": rpc 
error: code = NotFound desc = could not find container \"3428165d80f7f1d90b1c087c57e5db740b14b77ac512813a71d2f3fd31e2ff0d\": container with ID starting with 3428165d80f7f1d90b1c087c57e5db740b14b77ac512813a71d2f3fd31e2ff0d not found: ID does not exist" Jan 21 07:49:44 crc kubenswrapper[4893]: I0121 07:49:44.368997 4893 scope.go:117] "RemoveContainer" containerID="16350a407f674b4b75e8816419f906e13962e0788058cde2e6ee3e6db5618ee4" Jan 21 07:49:44 crc kubenswrapper[4893]: E0121 07:49:44.369328 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16350a407f674b4b75e8816419f906e13962e0788058cde2e6ee3e6db5618ee4\": container with ID starting with 16350a407f674b4b75e8816419f906e13962e0788058cde2e6ee3e6db5618ee4 not found: ID does not exist" containerID="16350a407f674b4b75e8816419f906e13962e0788058cde2e6ee3e6db5618ee4" Jan 21 07:49:44 crc kubenswrapper[4893]: I0121 07:49:44.369364 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16350a407f674b4b75e8816419f906e13962e0788058cde2e6ee3e6db5618ee4"} err="failed to get container status \"16350a407f674b4b75e8816419f906e13962e0788058cde2e6ee3e6db5618ee4\": rpc error: code = NotFound desc = could not find container \"16350a407f674b4b75e8816419f906e13962e0788058cde2e6ee3e6db5618ee4\": container with ID starting with 16350a407f674b4b75e8816419f906e13962e0788058cde2e6ee3e6db5618ee4 not found: ID does not exist" Jan 21 07:49:44 crc kubenswrapper[4893]: I0121 07:49:44.960781 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4mmll" Jan 21 07:49:45 crc kubenswrapper[4893]: I0121 07:49:45.022320 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2zz5\" (UniqueName: \"kubernetes.io/projected/56f5a2de-2748-474d-8b7f-40e94067673b-kube-api-access-b2zz5\") pod \"56f5a2de-2748-474d-8b7f-40e94067673b\" (UID: \"56f5a2de-2748-474d-8b7f-40e94067673b\") " Jan 21 07:49:45 crc kubenswrapper[4893]: I0121 07:49:45.022428 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56f5a2de-2748-474d-8b7f-40e94067673b-catalog-content\") pod \"56f5a2de-2748-474d-8b7f-40e94067673b\" (UID: \"56f5a2de-2748-474d-8b7f-40e94067673b\") " Jan 21 07:49:45 crc kubenswrapper[4893]: I0121 07:49:45.028126 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56f5a2de-2748-474d-8b7f-40e94067673b-kube-api-access-b2zz5" (OuterVolumeSpecName: "kube-api-access-b2zz5") pod "56f5a2de-2748-474d-8b7f-40e94067673b" (UID: "56f5a2de-2748-474d-8b7f-40e94067673b"). InnerVolumeSpecName "kube-api-access-b2zz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:49:45 crc kubenswrapper[4893]: I0121 07:49:45.108029 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56f5a2de-2748-474d-8b7f-40e94067673b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "56f5a2de-2748-474d-8b7f-40e94067673b" (UID: "56f5a2de-2748-474d-8b7f-40e94067673b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:49:45 crc kubenswrapper[4893]: I0121 07:49:45.123392 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56f5a2de-2748-474d-8b7f-40e94067673b-utilities\") pod \"56f5a2de-2748-474d-8b7f-40e94067673b\" (UID: \"56f5a2de-2748-474d-8b7f-40e94067673b\") " Jan 21 07:49:45 crc kubenswrapper[4893]: I0121 07:49:45.124565 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56f5a2de-2748-474d-8b7f-40e94067673b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 07:49:45 crc kubenswrapper[4893]: I0121 07:49:45.124594 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2zz5\" (UniqueName: \"kubernetes.io/projected/56f5a2de-2748-474d-8b7f-40e94067673b-kube-api-access-b2zz5\") on node \"crc\" DevicePath \"\"" Jan 21 07:49:45 crc kubenswrapper[4893]: I0121 07:49:45.126424 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56f5a2de-2748-474d-8b7f-40e94067673b-utilities" (OuterVolumeSpecName: "utilities") pod "56f5a2de-2748-474d-8b7f-40e94067673b" (UID: "56f5a2de-2748-474d-8b7f-40e94067673b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:49:45 crc kubenswrapper[4893]: I0121 07:49:45.194465 4893 generic.go:334] "Generic (PLEG): container finished" podID="56f5a2de-2748-474d-8b7f-40e94067673b" containerID="cd28064a1f13de6460800f370765533f7024a1ffc78d30e0f59eb66ac6ba4f78" exitCode=0 Jan 21 07:49:45 crc kubenswrapper[4893]: I0121 07:49:45.194514 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mmll" event={"ID":"56f5a2de-2748-474d-8b7f-40e94067673b","Type":"ContainerDied","Data":"cd28064a1f13de6460800f370765533f7024a1ffc78d30e0f59eb66ac6ba4f78"} Jan 21 07:49:45 crc kubenswrapper[4893]: I0121 07:49:45.194571 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mmll" event={"ID":"56f5a2de-2748-474d-8b7f-40e94067673b","Type":"ContainerDied","Data":"9e8594bd07403848192282e124d6ee49046667e04fd16c2cfb32a7628eed2717"} Jan 21 07:49:45 crc kubenswrapper[4893]: I0121 07:49:45.194576 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4mmll" Jan 21 07:49:45 crc kubenswrapper[4893]: I0121 07:49:45.194594 4893 scope.go:117] "RemoveContainer" containerID="cd28064a1f13de6460800f370765533f7024a1ffc78d30e0f59eb66ac6ba4f78" Jan 21 07:49:45 crc kubenswrapper[4893]: I0121 07:49:45.217975 4893 scope.go:117] "RemoveContainer" containerID="4bfa8600e047d6de23484876dee6265d588161a5656499b8650c1368f4b35fc3" Jan 21 07:49:45 crc kubenswrapper[4893]: I0121 07:49:45.229285 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56f5a2de-2748-474d-8b7f-40e94067673b-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 07:49:45 crc kubenswrapper[4893]: I0121 07:49:45.244806 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4mmll"] Jan 21 07:49:45 crc kubenswrapper[4893]: I0121 07:49:45.247848 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4mmll"] Jan 21 07:49:45 crc kubenswrapper[4893]: I0121 07:49:45.257654 4893 scope.go:117] "RemoveContainer" containerID="da5ff2cb6f75d59b65d38a8b49af61e061a5a5c33f42aa99f0c405d0a14f89e0" Jan 21 07:49:45 crc kubenswrapper[4893]: I0121 07:49:45.283385 4893 scope.go:117] "RemoveContainer" containerID="cd28064a1f13de6460800f370765533f7024a1ffc78d30e0f59eb66ac6ba4f78" Jan 21 07:49:45 crc kubenswrapper[4893]: E0121 07:49:45.283888 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd28064a1f13de6460800f370765533f7024a1ffc78d30e0f59eb66ac6ba4f78\": container with ID starting with cd28064a1f13de6460800f370765533f7024a1ffc78d30e0f59eb66ac6ba4f78 not found: ID does not exist" containerID="cd28064a1f13de6460800f370765533f7024a1ffc78d30e0f59eb66ac6ba4f78" Jan 21 07:49:45 crc kubenswrapper[4893]: I0121 07:49:45.283919 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd28064a1f13de6460800f370765533f7024a1ffc78d30e0f59eb66ac6ba4f78"} err="failed to get container status \"cd28064a1f13de6460800f370765533f7024a1ffc78d30e0f59eb66ac6ba4f78\": rpc error: code = NotFound desc = could not find container \"cd28064a1f13de6460800f370765533f7024a1ffc78d30e0f59eb66ac6ba4f78\": container with ID starting with cd28064a1f13de6460800f370765533f7024a1ffc78d30e0f59eb66ac6ba4f78 not found: ID does not exist" Jan 21 07:49:45 crc kubenswrapper[4893]: I0121 07:49:45.283951 4893 scope.go:117] "RemoveContainer" containerID="4bfa8600e047d6de23484876dee6265d588161a5656499b8650c1368f4b35fc3" Jan 21 07:49:45 crc kubenswrapper[4893]: E0121 07:49:45.284553 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bfa8600e047d6de23484876dee6265d588161a5656499b8650c1368f4b35fc3\": container with ID starting with 4bfa8600e047d6de23484876dee6265d588161a5656499b8650c1368f4b35fc3 not found: ID does not exist" containerID="4bfa8600e047d6de23484876dee6265d588161a5656499b8650c1368f4b35fc3" Jan 21 07:49:45 crc kubenswrapper[4893]: I0121 07:49:45.284590 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bfa8600e047d6de23484876dee6265d588161a5656499b8650c1368f4b35fc3"} err="failed to get container status \"4bfa8600e047d6de23484876dee6265d588161a5656499b8650c1368f4b35fc3\": rpc error: code = NotFound desc = could not find container 
\"4bfa8600e047d6de23484876dee6265d588161a5656499b8650c1368f4b35fc3\": container with ID starting with 4bfa8600e047d6de23484876dee6265d588161a5656499b8650c1368f4b35fc3 not found: ID does not exist" Jan 21 07:49:45 crc kubenswrapper[4893]: I0121 07:49:45.284608 4893 scope.go:117] "RemoveContainer" containerID="da5ff2cb6f75d59b65d38a8b49af61e061a5a5c33f42aa99f0c405d0a14f89e0" Jan 21 07:49:45 crc kubenswrapper[4893]: E0121 07:49:45.284870 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da5ff2cb6f75d59b65d38a8b49af61e061a5a5c33f42aa99f0c405d0a14f89e0\": container with ID starting with da5ff2cb6f75d59b65d38a8b49af61e061a5a5c33f42aa99f0c405d0a14f89e0 not found: ID does not exist" containerID="da5ff2cb6f75d59b65d38a8b49af61e061a5a5c33f42aa99f0c405d0a14f89e0" Jan 21 07:49:45 crc kubenswrapper[4893]: I0121 07:49:45.284895 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da5ff2cb6f75d59b65d38a8b49af61e061a5a5c33f42aa99f0c405d0a14f89e0"} err="failed to get container status \"da5ff2cb6f75d59b65d38a8b49af61e061a5a5c33f42aa99f0c405d0a14f89e0\": rpc error: code = NotFound desc = could not find container \"da5ff2cb6f75d59b65d38a8b49af61e061a5a5c33f42aa99f0c405d0a14f89e0\": container with ID starting with da5ff2cb6f75d59b65d38a8b49af61e061a5a5c33f42aa99f0c405d0a14f89e0 not found: ID does not exist" Jan 21 07:49:45 crc kubenswrapper[4893]: I0121 07:49:45.582469 4893 scope.go:117] "RemoveContainer" containerID="7b5a1b1e5e5a61a7b83e0fd59b4e53c46188f9465cc3c4cd1c7706d0df8ead7e" Jan 21 07:49:45 crc kubenswrapper[4893]: E0121 07:49:45.583128 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 07:49:45 crc kubenswrapper[4893]: I0121 07:49:45.601164 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56f5a2de-2748-474d-8b7f-40e94067673b" path="/var/lib/kubelet/pods/56f5a2de-2748-474d-8b7f-40e94067673b/volumes" Jan 21 07:49:45 crc kubenswrapper[4893]: I0121 07:49:45.602329 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c058957a-ac5e-4187-b39c-1485ac520188" path="/var/lib/kubelet/pods/c058957a-ac5e-4187-b39c-1485ac520188/volumes" Jan 21 07:49:57 crc kubenswrapper[4893]: I0121 07:49:57.581259 4893 scope.go:117] "RemoveContainer" containerID="7b5a1b1e5e5a61a7b83e0fd59b4e53c46188f9465cc3c4cd1c7706d0df8ead7e" Jan 21 07:49:57 crc kubenswrapper[4893]: E0121 07:49:57.582455 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 07:50:10 crc kubenswrapper[4893]: I0121 07:50:10.581281 4893 scope.go:117] "RemoveContainer" containerID="7b5a1b1e5e5a61a7b83e0fd59b4e53c46188f9465cc3c4cd1c7706d0df8ead7e" Jan 21 07:50:10 crc kubenswrapper[4893]: E0121 07:50:10.582400 4893 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 07:50:22 crc kubenswrapper[4893]: I0121 07:50:22.581120 4893 scope.go:117] "RemoveContainer" containerID="7b5a1b1e5e5a61a7b83e0fd59b4e53c46188f9465cc3c4cd1c7706d0df8ead7e" Jan 21 07:50:22 crc kubenswrapper[4893]: E0121 07:50:22.582129 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 07:50:35 crc kubenswrapper[4893]: I0121 07:50:35.581194 4893 scope.go:117] "RemoveContainer" containerID="7b5a1b1e5e5a61a7b83e0fd59b4e53c46188f9465cc3c4cd1c7706d0df8ead7e" Jan 21 07:50:35 crc kubenswrapper[4893]: E0121 07:50:35.582153 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 07:50:50 crc kubenswrapper[4893]: I0121 07:50:50.697347 4893 scope.go:117] "RemoveContainer" containerID="7b5a1b1e5e5a61a7b83e0fd59b4e53c46188f9465cc3c4cd1c7706d0df8ead7e" Jan 21 07:50:50 crc kubenswrapper[4893]: E0121 07:50:50.698176 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 07:51:04 crc kubenswrapper[4893]: I0121 07:51:04.581564 4893 scope.go:117] "RemoveContainer" containerID="7b5a1b1e5e5a61a7b83e0fd59b4e53c46188f9465cc3c4cd1c7706d0df8ead7e" Jan 21 07:51:05 crc kubenswrapper[4893]: I0121 07:51:05.146615 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" event={"ID":"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a","Type":"ContainerStarted","Data":"c769ecc0927f4f9a28ef7ba50fc01160f6908ee021f090e1daefc20ab4d334e4"} Jan 21 07:52:34 crc kubenswrapper[4893]: I0121 07:52:34.816069 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kzzb2"] Jan 21 07:52:34 crc kubenswrapper[4893]: E0121 07:52:34.817135 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c058957a-ac5e-4187-b39c-1485ac520188" containerName="extract-content" Jan 21 07:52:34 crc kubenswrapper[4893]: I0121 07:52:34.817169 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="c058957a-ac5e-4187-b39c-1485ac520188" containerName="extract-content" Jan 21 07:52:34 crc kubenswrapper[4893]: E0121 
07:52:34.817214 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c058957a-ac5e-4187-b39c-1485ac520188" containerName="registry-server" Jan 21 07:52:34 crc kubenswrapper[4893]: I0121 07:52:34.817221 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="c058957a-ac5e-4187-b39c-1485ac520188" containerName="registry-server" Jan 21 07:52:34 crc kubenswrapper[4893]: E0121 07:52:34.817233 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c058957a-ac5e-4187-b39c-1485ac520188" containerName="extract-utilities" Jan 21 07:52:34 crc kubenswrapper[4893]: I0121 07:52:34.817242 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="c058957a-ac5e-4187-b39c-1485ac520188" containerName="extract-utilities" Jan 21 07:52:34 crc kubenswrapper[4893]: E0121 07:52:34.817256 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56f5a2de-2748-474d-8b7f-40e94067673b" containerName="extract-content" Jan 21 07:52:34 crc kubenswrapper[4893]: I0121 07:52:34.817263 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="56f5a2de-2748-474d-8b7f-40e94067673b" containerName="extract-content" Jan 21 07:52:34 crc kubenswrapper[4893]: E0121 07:52:34.817275 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56f5a2de-2748-474d-8b7f-40e94067673b" containerName="registry-server" Jan 21 07:52:34 crc kubenswrapper[4893]: I0121 07:52:34.817283 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="56f5a2de-2748-474d-8b7f-40e94067673b" containerName="registry-server" Jan 21 07:52:34 crc kubenswrapper[4893]: E0121 07:52:34.817297 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56f5a2de-2748-474d-8b7f-40e94067673b" containerName="extract-utilities" Jan 21 07:52:34 crc kubenswrapper[4893]: I0121 07:52:34.817304 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="56f5a2de-2748-474d-8b7f-40e94067673b" containerName="extract-utilities" Jan 21 07:52:34 crc kubenswrapper[4893]: I0121 07:52:34.817560 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="c058957a-ac5e-4187-b39c-1485ac520188" containerName="registry-server" Jan 21 07:52:34 crc kubenswrapper[4893]: I0121 07:52:34.817583 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="56f5a2de-2748-474d-8b7f-40e94067673b" containerName="registry-server" Jan 21 07:52:34 crc kubenswrapper[4893]: I0121 07:52:34.819051 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kzzb2"
Jan 21 07:52:34 crc kubenswrapper[4893]: I0121 07:52:34.832583 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kzzb2"]
Jan 21 07:52:34 crc kubenswrapper[4893]: I0121 07:52:34.974141 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjjwp\" (UniqueName: \"kubernetes.io/projected/4e256274-18b7-47d4-97ca-73668f51579e-kube-api-access-jjjwp\") pod \"redhat-marketplace-kzzb2\" (UID: \"4e256274-18b7-47d4-97ca-73668f51579e\") " pod="openshift-marketplace/redhat-marketplace-kzzb2"
Jan 21 07:52:34 crc kubenswrapper[4893]: I0121 07:52:34.974457 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e256274-18b7-47d4-97ca-73668f51579e-utilities\") pod \"redhat-marketplace-kzzb2\" (UID: \"4e256274-18b7-47d4-97ca-73668f51579e\") " pod="openshift-marketplace/redhat-marketplace-kzzb2"
Jan 21 07:52:34 crc kubenswrapper[4893]: I0121 07:52:34.974504 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e256274-18b7-47d4-97ca-73668f51579e-catalog-content\") pod \"redhat-marketplace-kzzb2\" (UID: \"4e256274-18b7-47d4-97ca-73668f51579e\") " pod="openshift-marketplace/redhat-marketplace-kzzb2"
Jan 21 07:52:35 crc kubenswrapper[4893]: I0121 07:52:35.075958 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjjwp\" (UniqueName: \"kubernetes.io/projected/4e256274-18b7-47d4-97ca-73668f51579e-kube-api-access-jjjwp\") pod \"redhat-marketplace-kzzb2\" (UID: \"4e256274-18b7-47d4-97ca-73668f51579e\") " pod="openshift-marketplace/redhat-marketplace-kzzb2"
Jan 21 07:52:35 crc kubenswrapper[4893]: I0121 07:52:35.076024 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e256274-18b7-47d4-97ca-73668f51579e-utilities\") pod \"redhat-marketplace-kzzb2\" (UID: \"4e256274-18b7-47d4-97ca-73668f51579e\") " pod="openshift-marketplace/redhat-marketplace-kzzb2"
Jan 21 07:52:35 crc kubenswrapper[4893]: I0121 07:52:35.076059 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e256274-18b7-47d4-97ca-73668f51579e-catalog-content\") pod \"redhat-marketplace-kzzb2\" (UID: \"4e256274-18b7-47d4-97ca-73668f51579e\") " pod="openshift-marketplace/redhat-marketplace-kzzb2"
Jan 21 07:52:35 crc kubenswrapper[4893]: I0121 07:52:35.076711 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e256274-18b7-47d4-97ca-73668f51579e-catalog-content\") pod \"redhat-marketplace-kzzb2\" (UID: \"4e256274-18b7-47d4-97ca-73668f51579e\") " pod="openshift-marketplace/redhat-marketplace-kzzb2"
Jan 21 07:52:35 crc kubenswrapper[4893]: I0121 07:52:35.077147 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e256274-18b7-47d4-97ca-73668f51579e-utilities\") pod \"redhat-marketplace-kzzb2\" (UID: \"4e256274-18b7-47d4-97ca-73668f51579e\") " pod="openshift-marketplace/redhat-marketplace-kzzb2"
Jan 21 07:52:35 crc kubenswrapper[4893]: I0121 07:52:35.098015 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjjwp\" (UniqueName: \"kubernetes.io/projected/4e256274-18b7-47d4-97ca-73668f51579e-kube-api-access-jjjwp\") pod \"redhat-marketplace-kzzb2\" (UID: \"4e256274-18b7-47d4-97ca-73668f51579e\") " pod="openshift-marketplace/redhat-marketplace-kzzb2"
Jan 21 07:52:35 crc kubenswrapper[4893]: I0121 07:52:35.150787 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kzzb2"
Jan 21 07:52:35 crc kubenswrapper[4893]: I0121 07:52:35.674741 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kzzb2"]
Jan 21 07:52:36 crc kubenswrapper[4893]: I0121 07:52:36.536531 4893 generic.go:334] "Generic (PLEG): container finished" podID="4e256274-18b7-47d4-97ca-73668f51579e" containerID="66fdd50359d4c4a90ff1998d627b7a554a70d9ecab0b65a7544528d79d99ea64" exitCode=0
Jan 21 07:52:36 crc kubenswrapper[4893]: I0121 07:52:36.536593 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzzb2" event={"ID":"4e256274-18b7-47d4-97ca-73668f51579e","Type":"ContainerDied","Data":"66fdd50359d4c4a90ff1998d627b7a554a70d9ecab0b65a7544528d79d99ea64"}
Jan 21 07:52:36 crc kubenswrapper[4893]: I0121 07:52:36.536636 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzzb2" event={"ID":"4e256274-18b7-47d4-97ca-73668f51579e","Type":"ContainerStarted","Data":"27104aef79e27d1d8490f019935845fba49f37c8eb07a2950509fa3b03f02e97"}
Jan 21 07:52:36 crc kubenswrapper[4893]: I0121 07:52:36.540977 4893 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 07:52:38 crc kubenswrapper[4893]: I0121 07:52:38.580332 4893 generic.go:334] "Generic (PLEG): container finished" podID="4e256274-18b7-47d4-97ca-73668f51579e" containerID="a47d93a94bdd60d1e424341aa9ec2b3e4ba6c366dbbfe52c2d400801c8e8f149" exitCode=0
Jan 21 07:52:38 crc kubenswrapper[4893]: I0121 07:52:38.580408 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzzb2" event={"ID":"4e256274-18b7-47d4-97ca-73668f51579e","Type":"ContainerDied","Data":"a47d93a94bdd60d1e424341aa9ec2b3e4ba6c366dbbfe52c2d400801c8e8f149"}
Jan 21 07:52:39 crc kubenswrapper[4893]: I0121 07:52:39.606937 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzzb2" event={"ID":"4e256274-18b7-47d4-97ca-73668f51579e","Type":"ContainerStarted","Data":"357c5e10ca146f6b92c54277213dcd6c7dde90f4eb441e1fe2e75eaa5f1c41af"}
Jan 21 07:52:39 crc kubenswrapper[4893]: I0121 07:52:39.625886 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kzzb2" podStartSLOduration=2.804903931 podStartE2EDuration="5.625848167s" podCreationTimestamp="2026-01-21 07:52:34 +0000 UTC" firstStartedPulling="2026-01-21 07:52:36.540324074 +0000 UTC m=+3497.770669976" lastFinishedPulling="2026-01-21 07:52:39.36126831 +0000 UTC m=+3500.591614212" observedRunningTime="2026-01-21 07:52:39.624158049 +0000 UTC m=+3500.854503961" watchObservedRunningTime="2026-01-21 07:52:39.625848167 +0000 UTC m=+3500.856194069"
Jan 21 07:52:45 crc kubenswrapper[4893]: I0121 07:52:45.151893 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kzzb2"
Jan 21 07:52:45 crc kubenswrapper[4893]: I0121 07:52:45.153564 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kzzb2"
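The pod_startup_latency_tracker entry above gives enough to reconstruct how its two durations relate: podStartSLOduration appears to be the end-to-end figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling), which the monotonic m=+ offsets let you verify. A minimal check in Go, with the constants copied from the entry (the subtraction rule is an inference from these numbers, not something the log states):

```go
package main

import "fmt"

func main() {
	// Monotonic clock offsets (m=+...) copied from the log entry above.
	firstStartedPulling := 3497.770669976
	lastFinishedPulling := 3500.591614212
	e2e := 5.625848167 // podStartE2EDuration in seconds

	pull := lastFinishedPulling - firstStartedPulling
	fmt.Printf("image pull took %.9fs\n", pull)     // 2.820944236s
	fmt.Printf("SLO duration    %.9fs\n", e2e-pull) // 2.804903931s
}
```

The pull window is 2.820944236s, and 5.625848167s minus 2.820944236s is 2.804903931s, matching podStartSLOduration exactly.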
probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kzzb2" Jan 21 07:52:45 crc kubenswrapper[4893]: I0121 07:52:45.214742 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kzzb2" Jan 21 07:52:45 crc kubenswrapper[4893]: I0121 07:52:45.688299 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kzzb2" Jan 21 07:52:45 crc kubenswrapper[4893]: I0121 07:52:45.759009 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kzzb2"] Jan 21 07:52:47 crc kubenswrapper[4893]: I0121 07:52:47.663365 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kzzb2" podUID="4e256274-18b7-47d4-97ca-73668f51579e" containerName="registry-server" containerID="cri-o://357c5e10ca146f6b92c54277213dcd6c7dde90f4eb441e1fe2e75eaa5f1c41af" gracePeriod=2 Jan 21 07:52:48 crc kubenswrapper[4893]: I0121 07:52:48.255448 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kzzb2" Jan 21 07:52:48 crc kubenswrapper[4893]: I0121 07:52:48.286516 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e256274-18b7-47d4-97ca-73668f51579e-utilities\") pod \"4e256274-18b7-47d4-97ca-73668f51579e\" (UID: \"4e256274-18b7-47d4-97ca-73668f51579e\") " Jan 21 07:52:48 crc kubenswrapper[4893]: I0121 07:52:48.286574 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjjwp\" (UniqueName: \"kubernetes.io/projected/4e256274-18b7-47d4-97ca-73668f51579e-kube-api-access-jjjwp\") pod \"4e256274-18b7-47d4-97ca-73668f51579e\" (UID: \"4e256274-18b7-47d4-97ca-73668f51579e\") " Jan 21 07:52:48 crc kubenswrapper[4893]: I0121 07:52:48.286630 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e256274-18b7-47d4-97ca-73668f51579e-catalog-content\") pod \"4e256274-18b7-47d4-97ca-73668f51579e\" (UID: \"4e256274-18b7-47d4-97ca-73668f51579e\") " Jan 21 07:52:48 crc kubenswrapper[4893]: I0121 07:52:48.288618 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e256274-18b7-47d4-97ca-73668f51579e-utilities" (OuterVolumeSpecName: "utilities") pod "4e256274-18b7-47d4-97ca-73668f51579e" (UID: "4e256274-18b7-47d4-97ca-73668f51579e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:52:48 crc kubenswrapper[4893]: I0121 07:52:48.312106 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e256274-18b7-47d4-97ca-73668f51579e-kube-api-access-jjjwp" (OuterVolumeSpecName: "kube-api-access-jjjwp") pod "4e256274-18b7-47d4-97ca-73668f51579e" (UID: "4e256274-18b7-47d4-97ca-73668f51579e"). InnerVolumeSpecName "kube-api-access-jjjwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:52:48 crc kubenswrapper[4893]: I0121 07:52:48.315274 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e256274-18b7-47d4-97ca-73668f51579e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4e256274-18b7-47d4-97ca-73668f51579e" (UID: "4e256274-18b7-47d4-97ca-73668f51579e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:52:48 crc kubenswrapper[4893]: I0121 07:52:48.388269 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e256274-18b7-47d4-97ca-73668f51579e-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 07:52:48 crc kubenswrapper[4893]: I0121 07:52:48.388331 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjjwp\" (UniqueName: \"kubernetes.io/projected/4e256274-18b7-47d4-97ca-73668f51579e-kube-api-access-jjjwp\") on node \"crc\" DevicePath \"\"" Jan 21 07:52:48 crc kubenswrapper[4893]: I0121 07:52:48.388356 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e256274-18b7-47d4-97ca-73668f51579e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 07:52:48 crc kubenswrapper[4893]: I0121 07:52:48.671385 4893 generic.go:334] "Generic (PLEG): container finished" podID="4e256274-18b7-47d4-97ca-73668f51579e" containerID="357c5e10ca146f6b92c54277213dcd6c7dde90f4eb441e1fe2e75eaa5f1c41af" exitCode=0 Jan 21 07:52:48 crc kubenswrapper[4893]: I0121 07:52:48.671445 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzzb2" event={"ID":"4e256274-18b7-47d4-97ca-73668f51579e","Type":"ContainerDied","Data":"357c5e10ca146f6b92c54277213dcd6c7dde90f4eb441e1fe2e75eaa5f1c41af"} Jan 21 07:52:48 crc kubenswrapper[4893]: I0121 07:52:48.671481 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzzb2" event={"ID":"4e256274-18b7-47d4-97ca-73668f51579e","Type":"ContainerDied","Data":"27104aef79e27d1d8490f019935845fba49f37c8eb07a2950509fa3b03f02e97"} Jan 21 07:52:48 crc kubenswrapper[4893]: I0121 07:52:48.671523 4893 scope.go:117] "RemoveContainer" containerID="357c5e10ca146f6b92c54277213dcd6c7dde90f4eb441e1fe2e75eaa5f1c41af" Jan 21 07:52:48 crc kubenswrapper[4893]: I0121 07:52:48.671700 4893 util.go:48] "No ready sandbox for pod can be found. 
Jan 21 07:52:48 crc kubenswrapper[4893]: I0121 07:52:48.696819 4893 scope.go:117] "RemoveContainer" containerID="a47d93a94bdd60d1e424341aa9ec2b3e4ba6c366dbbfe52c2d400801c8e8f149"
Jan 21 07:52:48 crc kubenswrapper[4893]: I0121 07:52:48.706715 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kzzb2"]
Jan 21 07:52:48 crc kubenswrapper[4893]: I0121 07:52:48.713520 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kzzb2"]
Jan 21 07:52:48 crc kubenswrapper[4893]: I0121 07:52:48.726264 4893 scope.go:117] "RemoveContainer" containerID="66fdd50359d4c4a90ff1998d627b7a554a70d9ecab0b65a7544528d79d99ea64"
Jan 21 07:52:48 crc kubenswrapper[4893]: I0121 07:52:48.749840 4893 scope.go:117] "RemoveContainer" containerID="357c5e10ca146f6b92c54277213dcd6c7dde90f4eb441e1fe2e75eaa5f1c41af"
Jan 21 07:52:48 crc kubenswrapper[4893]: E0121 07:52:48.750354 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"357c5e10ca146f6b92c54277213dcd6c7dde90f4eb441e1fe2e75eaa5f1c41af\": container with ID starting with 357c5e10ca146f6b92c54277213dcd6c7dde90f4eb441e1fe2e75eaa5f1c41af not found: ID does not exist" containerID="357c5e10ca146f6b92c54277213dcd6c7dde90f4eb441e1fe2e75eaa5f1c41af"
Jan 21 07:52:48 crc kubenswrapper[4893]: I0121 07:52:48.750406 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"357c5e10ca146f6b92c54277213dcd6c7dde90f4eb441e1fe2e75eaa5f1c41af"} err="failed to get container status \"357c5e10ca146f6b92c54277213dcd6c7dde90f4eb441e1fe2e75eaa5f1c41af\": rpc error: code = NotFound desc = could not find container \"357c5e10ca146f6b92c54277213dcd6c7dde90f4eb441e1fe2e75eaa5f1c41af\": container with ID starting with 357c5e10ca146f6b92c54277213dcd6c7dde90f4eb441e1fe2e75eaa5f1c41af not found: ID does not exist"
Jan 21 07:52:48 crc kubenswrapper[4893]: I0121 07:52:48.750435 4893 scope.go:117] "RemoveContainer" containerID="a47d93a94bdd60d1e424341aa9ec2b3e4ba6c366dbbfe52c2d400801c8e8f149"
Jan 21 07:52:48 crc kubenswrapper[4893]: E0121 07:52:48.750716 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a47d93a94bdd60d1e424341aa9ec2b3e4ba6c366dbbfe52c2d400801c8e8f149\": container with ID starting with a47d93a94bdd60d1e424341aa9ec2b3e4ba6c366dbbfe52c2d400801c8e8f149 not found: ID does not exist" containerID="a47d93a94bdd60d1e424341aa9ec2b3e4ba6c366dbbfe52c2d400801c8e8f149"
Jan 21 07:52:48 crc kubenswrapper[4893]: I0121 07:52:48.750753 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a47d93a94bdd60d1e424341aa9ec2b3e4ba6c366dbbfe52c2d400801c8e8f149"} err="failed to get container status \"a47d93a94bdd60d1e424341aa9ec2b3e4ba6c366dbbfe52c2d400801c8e8f149\": rpc error: code = NotFound desc = could not find container \"a47d93a94bdd60d1e424341aa9ec2b3e4ba6c366dbbfe52c2d400801c8e8f149\": container with ID starting with a47d93a94bdd60d1e424341aa9ec2b3e4ba6c366dbbfe52c2d400801c8e8f149 not found: ID does not exist"
Jan 21 07:52:48 crc kubenswrapper[4893]: I0121 07:52:48.750771 4893 scope.go:117] "RemoveContainer" containerID="66fdd50359d4c4a90ff1998d627b7a554a70d9ecab0b65a7544528d79d99ea64"
Jan 21 07:52:48 crc kubenswrapper[4893]: E0121 07:52:48.751013 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66fdd50359d4c4a90ff1998d627b7a554a70d9ecab0b65a7544528d79d99ea64\": container with ID starting with 66fdd50359d4c4a90ff1998d627b7a554a70d9ecab0b65a7544528d79d99ea64 not found: ID does not exist" containerID="66fdd50359d4c4a90ff1998d627b7a554a70d9ecab0b65a7544528d79d99ea64"
Jan 21 07:52:48 crc kubenswrapper[4893]: I0121 07:52:48.751037 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66fdd50359d4c4a90ff1998d627b7a554a70d9ecab0b65a7544528d79d99ea64"} err="failed to get container status \"66fdd50359d4c4a90ff1998d627b7a554a70d9ecab0b65a7544528d79d99ea64\": rpc error: code = NotFound desc = could not find container \"66fdd50359d4c4a90ff1998d627b7a554a70d9ecab0b65a7544528d79d99ea64\": container with ID starting with 66fdd50359d4c4a90ff1998d627b7a554a70d9ecab0b65a7544528d79d99ea64 not found: ID does not exist"
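The error-level ContainerStatus lines above are benign: the kubelet retries removal of containers that CRI-O already deleted along with the sandbox, so the second pass finds nothing. Cleanup like this is normally written to be idempotent, treating NotFound as success; a generic sketch of the pattern (errNotFound and the helper names are illustrative, not the CRI client API):

```go
package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for the gRPC NotFound status seen in the log.
var errNotFound = errors.New("not found")

func containerStatus(id string) error {
	return fmt.Errorf("could not find container %q: %w", id, errNotFound)
}

// removeContainer is idempotent: a missing container means the desired
// state (gone) already holds, so NotFound is not treated as a failure.
func removeContainer(id string) error {
	if err := containerStatus(id); err != nil {
		if errors.Is(err, errNotFound) {
			return nil
		}
		return err
	}
	// ... would stop and delete the container here ...
	return nil
}

func main() {
	fmt.Println(removeContainer("357c5e10ca14")) // <nil>: already gone
}
```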
failed" err="rpc error: code = NotFound desc = could not find container \"66fdd50359d4c4a90ff1998d627b7a554a70d9ecab0b65a7544528d79d99ea64\": container with ID starting with 66fdd50359d4c4a90ff1998d627b7a554a70d9ecab0b65a7544528d79d99ea64 not found: ID does not exist" containerID="66fdd50359d4c4a90ff1998d627b7a554a70d9ecab0b65a7544528d79d99ea64" Jan 21 07:52:48 crc kubenswrapper[4893]: I0121 07:52:48.751037 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66fdd50359d4c4a90ff1998d627b7a554a70d9ecab0b65a7544528d79d99ea64"} err="failed to get container status \"66fdd50359d4c4a90ff1998d627b7a554a70d9ecab0b65a7544528d79d99ea64\": rpc error: code = NotFound desc = could not find container \"66fdd50359d4c4a90ff1998d627b7a554a70d9ecab0b65a7544528d79d99ea64\": container with ID starting with 66fdd50359d4c4a90ff1998d627b7a554a70d9ecab0b65a7544528d79d99ea64 not found: ID does not exist" Jan 21 07:52:49 crc kubenswrapper[4893]: I0121 07:52:49.589428 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e256274-18b7-47d4-97ca-73668f51579e" path="/var/lib/kubelet/pods/4e256274-18b7-47d4-97ca-73668f51579e/volumes" Jan 21 07:53:28 crc kubenswrapper[4893]: I0121 07:53:28.656835 4893 patch_prober.go:28] interesting pod/machine-config-daemon-hg78p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 07:53:28 crc kubenswrapper[4893]: I0121 07:53:28.657571 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 07:53:58 crc kubenswrapper[4893]: I0121 07:53:58.661935 4893 patch_prober.go:28] interesting pod/machine-config-daemon-hg78p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 07:53:58 crc kubenswrapper[4893]: I0121 07:53:58.662563 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 07:54:28 crc kubenswrapper[4893]: I0121 07:54:28.894892 4893 patch_prober.go:28] interesting pod/machine-config-daemon-hg78p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 07:54:28 crc kubenswrapper[4893]: I0121 07:54:28.895481 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 07:54:28 crc kubenswrapper[4893]: I0121 07:54:28.895554 4893 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" Jan 21 07:54:28 crc kubenswrapper[4893]: I0121 07:54:28.896436 4893 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c769ecc0927f4f9a28ef7ba50fc01160f6908ee021f090e1daefc20ab4d334e4"} pod="openshift-machine-config-operator/machine-config-daemon-hg78p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 07:54:28 crc kubenswrapper[4893]: I0121 07:54:28.896544 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" containerID="cri-o://c769ecc0927f4f9a28ef7ba50fc01160f6908ee021f090e1daefc20ab4d334e4" gracePeriod=600 Jan 21 07:54:29 crc kubenswrapper[4893]: I0121 07:54:29.923169 4893 generic.go:334] "Generic (PLEG): container finished" podID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerID="c769ecc0927f4f9a28ef7ba50fc01160f6908ee021f090e1daefc20ab4d334e4" exitCode=0 Jan 21 07:54:29 crc kubenswrapper[4893]: I0121 07:54:29.923856 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" event={"ID":"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a","Type":"ContainerDied","Data":"c769ecc0927f4f9a28ef7ba50fc01160f6908ee021f090e1daefc20ab4d334e4"} Jan 21 07:54:29 crc kubenswrapper[4893]: I0121 07:54:29.923902 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" event={"ID":"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a","Type":"ContainerStarted","Data":"3a29a42815f4797738f166cd1b041d77da5e111165ef9f7740649e18b4757494"} Jan 21 07:54:29 crc kubenswrapper[4893]: I0121 07:54:29.923964 4893 scope.go:117] "RemoveContainer" containerID="7b5a1b1e5e5a61a7b83e0fd59b4e53c46188f9465cc3c4cd1c7706d0df8ead7e" Jan 21 07:56:58 crc kubenswrapper[4893]: I0121 07:56:58.657034 4893 patch_prober.go:28] interesting pod/machine-config-daemon-hg78p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 07:56:58 crc kubenswrapper[4893]: I0121 07:56:58.657555 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 07:57:28 crc kubenswrapper[4893]: I0121 07:57:28.656853 4893 patch_prober.go:28] interesting pod/machine-config-daemon-hg78p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 07:57:28 crc kubenswrapper[4893]: I0121 07:57:28.658716 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 07:57:33 crc 
Jan 21 07:57:33 crc kubenswrapper[4893]: I0121 07:57:33.271910 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ksr5r"]
Jan 21 07:57:33 crc kubenswrapper[4893]: E0121 07:57:33.272849 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e256274-18b7-47d4-97ca-73668f51579e" containerName="extract-content"
Jan 21 07:57:33 crc kubenswrapper[4893]: I0121 07:57:33.272895 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e256274-18b7-47d4-97ca-73668f51579e" containerName="extract-content"
Jan 21 07:57:33 crc kubenswrapper[4893]: E0121 07:57:33.272933 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e256274-18b7-47d4-97ca-73668f51579e" containerName="registry-server"
Jan 21 07:57:33 crc kubenswrapper[4893]: I0121 07:57:33.272944 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e256274-18b7-47d4-97ca-73668f51579e" containerName="registry-server"
Jan 21 07:57:33 crc kubenswrapper[4893]: E0121 07:57:33.272983 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e256274-18b7-47d4-97ca-73668f51579e" containerName="extract-utilities"
Jan 21 07:57:33 crc kubenswrapper[4893]: I0121 07:57:33.272995 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e256274-18b7-47d4-97ca-73668f51579e" containerName="extract-utilities"
Jan 21 07:57:33 crc kubenswrapper[4893]: I0121 07:57:33.273352 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e256274-18b7-47d4-97ca-73668f51579e" containerName="registry-server"
Jan 21 07:57:33 crc kubenswrapper[4893]: I0121 07:57:33.275314 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ksr5r"
Jan 21 07:57:33 crc kubenswrapper[4893]: I0121 07:57:33.299720 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ksr5r"]
Jan 21 07:57:33 crc kubenswrapper[4893]: I0121 07:57:33.431630 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dea59ed6-c812-4c2d-b3c4-d024a2c5e481-utilities\") pod \"redhat-operators-ksr5r\" (UID: \"dea59ed6-c812-4c2d-b3c4-d024a2c5e481\") " pod="openshift-marketplace/redhat-operators-ksr5r"
Jan 21 07:57:33 crc kubenswrapper[4893]: I0121 07:57:33.431721 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dea59ed6-c812-4c2d-b3c4-d024a2c5e481-catalog-content\") pod \"redhat-operators-ksr5r\" (UID: \"dea59ed6-c812-4c2d-b3c4-d024a2c5e481\") " pod="openshift-marketplace/redhat-operators-ksr5r"
Jan 21 07:57:33 crc kubenswrapper[4893]: I0121 07:57:33.431751 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlplx\" (UniqueName: \"kubernetes.io/projected/dea59ed6-c812-4c2d-b3c4-d024a2c5e481-kube-api-access-jlplx\") pod \"redhat-operators-ksr5r\" (UID: \"dea59ed6-c812-4c2d-b3c4-d024a2c5e481\") " pod="openshift-marketplace/redhat-operators-ksr5r"
Jan 21 07:57:33 crc kubenswrapper[4893]: I0121 07:57:33.533250 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dea59ed6-c812-4c2d-b3c4-d024a2c5e481-utilities\") pod \"redhat-operators-ksr5r\" (UID: \"dea59ed6-c812-4c2d-b3c4-d024a2c5e481\") " pod="openshift-marketplace/redhat-operators-ksr5r"
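The cpu_manager/state_mem/memory_manager burst at 07:57:33 is admission-time housekeeping: as the new pod arrives, the resource managers drop per-container assignments that still reference the kzzb2 pod deleted at 07:52:48 (UID 4e256274-...). The E-level severity is cosmetic; removing a stale entry is the intended outcome. The pattern, as a runnable sketch (the map layout is illustrative, not the kubelet's state format):

```go
package main

import "fmt"

// key mirrors how assignments are tracked per pod UID and container name.
type key struct{ podUID, container string }

// removeStaleState drops assignments for pods the kubelet no longer runs,
// as the cpu_manager/memory_manager entries above do on pod admission.
func removeStaleState(assignments map[key]string, active map[string]bool) {
	for k := range assignments {
		if !active[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container %q of pod %s\n", k.container, k.podUID)
			delete(assignments, k)
		}
	}
}

func main() {
	assignments := map[key]string{
		{"4e256274", "registry-server"}: "cpuset 0-1", // stale: pod deleted at 07:52:48
	}
	removeStaleState(assignments, map[string]bool{"dea59ed6": true})
	fmt.Println(len(assignments), "assignments left")
}
```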
Jan 21 07:57:33 crc kubenswrapper[4893]: I0121 07:57:33.533328 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dea59ed6-c812-4c2d-b3c4-d024a2c5e481-catalog-content\") pod \"redhat-operators-ksr5r\" (UID: \"dea59ed6-c812-4c2d-b3c4-d024a2c5e481\") " pod="openshift-marketplace/redhat-operators-ksr5r"
Jan 21 07:57:33 crc kubenswrapper[4893]: I0121 07:57:33.533355 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlplx\" (UniqueName: \"kubernetes.io/projected/dea59ed6-c812-4c2d-b3c4-d024a2c5e481-kube-api-access-jlplx\") pod \"redhat-operators-ksr5r\" (UID: \"dea59ed6-c812-4c2d-b3c4-d024a2c5e481\") " pod="openshift-marketplace/redhat-operators-ksr5r"
Jan 21 07:57:33 crc kubenswrapper[4893]: I0121 07:57:33.534238 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dea59ed6-c812-4c2d-b3c4-d024a2c5e481-utilities\") pod \"redhat-operators-ksr5r\" (UID: \"dea59ed6-c812-4c2d-b3c4-d024a2c5e481\") " pod="openshift-marketplace/redhat-operators-ksr5r"
Jan 21 07:57:33 crc kubenswrapper[4893]: I0121 07:57:33.534324 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dea59ed6-c812-4c2d-b3c4-d024a2c5e481-catalog-content\") pod \"redhat-operators-ksr5r\" (UID: \"dea59ed6-c812-4c2d-b3c4-d024a2c5e481\") " pod="openshift-marketplace/redhat-operators-ksr5r"
Jan 21 07:57:33 crc kubenswrapper[4893]: I0121 07:57:33.563015 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlplx\" (UniqueName: \"kubernetes.io/projected/dea59ed6-c812-4c2d-b3c4-d024a2c5e481-kube-api-access-jlplx\") pod \"redhat-operators-ksr5r\" (UID: \"dea59ed6-c812-4c2d-b3c4-d024a2c5e481\") " pod="openshift-marketplace/redhat-operators-ksr5r"
Jan 21 07:57:33 crc kubenswrapper[4893]: I0121 07:57:33.602111 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ksr5r"
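Each catalog pod repeats the same three-stage volume sequence: VerifyControllerAttachedVolume (the volume is recorded as attached in the actual state of world), then MountVolume started, then MountVolume.SetUp succeeded. A stripped-down sketch of that desired-state/actual-state reconciliation loop (schematic only; the real logic lives in the kubelet's volume manager and runs the stages as separate passes):

```go
package main

import "fmt"

type volState int

const (
	attached volState = iota
	mounted
)

// reconcile walks desired volumes and advances each one a step toward
// mounted, logging the same milestones the kubelet entries above show.
func reconcile(desired []string, actual map[string]volState) {
	for _, v := range desired {
		switch st, ok := actual[v]; {
		case !ok:
			fmt.Printf("VerifyControllerAttachedVolume started for volume %q\n", v)
			actual[v] = attached
		case st == attached:
			fmt.Printf("MountVolume started for volume %q\n", v)
			fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", v)
			actual[v] = mounted
		}
	}
}

func main() {
	actual := map[string]volState{}
	vols := []string{"utilities", "catalog-content", "kube-api-access-jlplx"}
	reconcile(vols, actual) // verify pass
	reconcile(vols, actual) // mount pass
}
```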
Jan 21 07:57:34 crc kubenswrapper[4893]: I0121 07:57:34.149611 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ksr5r"]
Jan 21 07:57:35 crc kubenswrapper[4893]: I0121 07:57:35.020316 4893 generic.go:334] "Generic (PLEG): container finished" podID="dea59ed6-c812-4c2d-b3c4-d024a2c5e481" containerID="05640a1eccb25f22deb868f7de728ca8bb3cee36d9ab41154268dcf860c6f746" exitCode=0
Jan 21 07:57:35 crc kubenswrapper[4893]: I0121 07:57:35.020543 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ksr5r" event={"ID":"dea59ed6-c812-4c2d-b3c4-d024a2c5e481","Type":"ContainerDied","Data":"05640a1eccb25f22deb868f7de728ca8bb3cee36d9ab41154268dcf860c6f746"}
Jan 21 07:57:35 crc kubenswrapper[4893]: I0121 07:57:35.020644 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ksr5r" event={"ID":"dea59ed6-c812-4c2d-b3c4-d024a2c5e481","Type":"ContainerStarted","Data":"90afa97e09795113545d47b69341ad7c7444fd3172988bf94c315c978198fbbe"}
Jan 21 07:57:37 crc kubenswrapper[4893]: I0121 07:57:37.039087 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ksr5r" event={"ID":"dea59ed6-c812-4c2d-b3c4-d024a2c5e481","Type":"ContainerStarted","Data":"03a46901f5bb25e80b2f2f046d73b9c5174539bc069381ab2516c8168a6f470c"}
Jan 21 07:57:38 crc kubenswrapper[4893]: I0121 07:57:38.047264 4893 generic.go:334] "Generic (PLEG): container finished" podID="dea59ed6-c812-4c2d-b3c4-d024a2c5e481" containerID="03a46901f5bb25e80b2f2f046d73b9c5174539bc069381ab2516c8168a6f470c" exitCode=0
Jan 21 07:57:38 crc kubenswrapper[4893]: I0121 07:57:38.047451 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ksr5r" event={"ID":"dea59ed6-c812-4c2d-b3c4-d024a2c5e481","Type":"ContainerDied","Data":"03a46901f5bb25e80b2f2f046d73b9c5174539bc069381ab2516c8168a6f470c"}
Jan 21 07:57:38 crc kubenswrapper[4893]: I0121 07:57:38.049847 4893 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 07:57:39 crc kubenswrapper[4893]: I0121 07:57:39.071230 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ksr5r" event={"ID":"dea59ed6-c812-4c2d-b3c4-d024a2c5e481","Type":"ContainerStarted","Data":"5772b8015b0b3a5855751007d23db8e9b0a4ab6d81a801e3301f5b1a719ab8f4"}
Jan 21 07:57:39 crc kubenswrapper[4893]: I0121 07:57:39.107664 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ksr5r" podStartSLOduration=2.7193483880000002 podStartE2EDuration="6.107572729s" podCreationTimestamp="2026-01-21 07:57:33 +0000 UTC" firstStartedPulling="2026-01-21 07:57:35.022704974 +0000 UTC m=+3796.253050876" lastFinishedPulling="2026-01-21 07:57:38.410929315 +0000 UTC m=+3799.641275217" observedRunningTime="2026-01-21 07:57:39.104013698 +0000 UTC m=+3800.334359600" watchObservedRunningTime="2026-01-21 07:57:39.107572729 +0000 UTC m=+3800.337918651"
Jan 21 07:57:43 crc kubenswrapper[4893]: I0121 07:57:43.608119 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ksr5r"
Jan 21 07:57:43 crc kubenswrapper[4893]: I0121 07:57:43.608685 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ksr5r"
Jan 21 07:57:45 crc kubenswrapper[4893]: I0121 07:57:44.658865 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ksr5r" podUID="dea59ed6-c812-4c2d-b3c4-d024a2c5e481" containerName="registry-server" probeResult="failure" output=<
Jan 21 07:57:45 crc kubenswrapper[4893]: timeout: failed to connect service ":50051" within 1s
Jan 21 07:57:45 crc kubenswrapper[4893]: >
Jan 21 07:57:53 crc kubenswrapper[4893]: I0121 07:57:53.685332 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ksr5r"
Jan 21 07:57:53 crc kubenswrapper[4893]: I0121 07:57:53.914515 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ksr5r"
Jan 21 07:57:53 crc kubenswrapper[4893]: I0121 07:57:53.959690 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ksr5r"]
Jan 21 07:57:55 crc kubenswrapper[4893]: I0121 07:57:55.663990 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ksr5r" podUID="dea59ed6-c812-4c2d-b3c4-d024a2c5e481" containerName="registry-server" containerID="cri-o://5772b8015b0b3a5855751007d23db8e9b0a4ab6d81a801e3301f5b1a719ab8f4" gracePeriod=2
Jan 21 07:57:56 crc kubenswrapper[4893]: I0121 07:57:56.225182 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ksr5r"
Jan 21 07:57:56 crc kubenswrapper[4893]: I0121 07:57:56.386706 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlplx\" (UniqueName: \"kubernetes.io/projected/dea59ed6-c812-4c2d-b3c4-d024a2c5e481-kube-api-access-jlplx\") pod \"dea59ed6-c812-4c2d-b3c4-d024a2c5e481\" (UID: \"dea59ed6-c812-4c2d-b3c4-d024a2c5e481\") "
Jan 21 07:57:56 crc kubenswrapper[4893]: I0121 07:57:56.387254 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dea59ed6-c812-4c2d-b3c4-d024a2c5e481-catalog-content\") pod \"dea59ed6-c812-4c2d-b3c4-d024a2c5e481\" (UID: \"dea59ed6-c812-4c2d-b3c4-d024a2c5e481\") "
Jan 21 07:57:56 crc kubenswrapper[4893]: I0121 07:57:56.387288 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dea59ed6-c812-4c2d-b3c4-d024a2c5e481-utilities\") pod \"dea59ed6-c812-4c2d-b3c4-d024a2c5e481\" (UID: \"dea59ed6-c812-4c2d-b3c4-d024a2c5e481\") "
Jan 21 07:57:56 crc kubenswrapper[4893]: I0121 07:57:56.389264 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dea59ed6-c812-4c2d-b3c4-d024a2c5e481-utilities" (OuterVolumeSpecName: "utilities") pod "dea59ed6-c812-4c2d-b3c4-d024a2c5e481" (UID: "dea59ed6-c812-4c2d-b3c4-d024a2c5e481"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 07:57:56 crc kubenswrapper[4893]: I0121 07:57:56.394947 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dea59ed6-c812-4c2d-b3c4-d024a2c5e481-kube-api-access-jlplx" (OuterVolumeSpecName: "kube-api-access-jlplx") pod "dea59ed6-c812-4c2d-b3c4-d024a2c5e481" (UID: "dea59ed6-c812-4c2d-b3c4-d024a2c5e481"). InnerVolumeSpecName "kube-api-access-jlplx". PluginName "kubernetes.io/projected", VolumeGidValue ""
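The probe failure at the top of this window shows what the registry-server startup probe actually does: reach the catalog's gRPC endpoint on :50051 within a 1s budget, the "timeout: failed to connect service" text being typical grpc-health-probe output. An illustrative client-side equivalent against the standard gRPC health service (address and timeout come from the log; the probe binary inside the image may differ):

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	conn, err := grpc.Dial("127.0.0.1:50051", grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		fmt.Println("probe failure:", err)
		return
	}
	defer conn.Close()

	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
	if err != nil {
		// Corresponds to the log's: timeout: failed to connect service ":50051" within 1s
		fmt.Println("probe failure:", err)
		return
	}
	fmt.Println("probe status:", resp.Status) // SERVING once the catalog has loaded
}
```

The pod passes this probe at 07:57:53, about 14s after the registry-server container started, which is consistent with the catalog needing several seconds to load before serving.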
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:57:56 crc kubenswrapper[4893]: I0121 07:57:56.488914 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dea59ed6-c812-4c2d-b3c4-d024a2c5e481-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 07:57:56 crc kubenswrapper[4893]: I0121 07:57:56.489251 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlplx\" (UniqueName: \"kubernetes.io/projected/dea59ed6-c812-4c2d-b3c4-d024a2c5e481-kube-api-access-jlplx\") on node \"crc\" DevicePath \"\"" Jan 21 07:57:56 crc kubenswrapper[4893]: I0121 07:57:56.518811 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dea59ed6-c812-4c2d-b3c4-d024a2c5e481-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dea59ed6-c812-4c2d-b3c4-d024a2c5e481" (UID: "dea59ed6-c812-4c2d-b3c4-d024a2c5e481"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:57:56 crc kubenswrapper[4893]: I0121 07:57:56.590158 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dea59ed6-c812-4c2d-b3c4-d024a2c5e481-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 07:57:56 crc kubenswrapper[4893]: I0121 07:57:56.676475 4893 generic.go:334] "Generic (PLEG): container finished" podID="dea59ed6-c812-4c2d-b3c4-d024a2c5e481" containerID="5772b8015b0b3a5855751007d23db8e9b0a4ab6d81a801e3301f5b1a719ab8f4" exitCode=0 Jan 21 07:57:56 crc kubenswrapper[4893]: I0121 07:57:56.676540 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ksr5r" event={"ID":"dea59ed6-c812-4c2d-b3c4-d024a2c5e481","Type":"ContainerDied","Data":"5772b8015b0b3a5855751007d23db8e9b0a4ab6d81a801e3301f5b1a719ab8f4"} Jan 21 07:57:56 crc kubenswrapper[4893]: I0121 07:57:56.676579 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ksr5r" event={"ID":"dea59ed6-c812-4c2d-b3c4-d024a2c5e481","Type":"ContainerDied","Data":"90afa97e09795113545d47b69341ad7c7444fd3172988bf94c315c978198fbbe"} Jan 21 07:57:56 crc kubenswrapper[4893]: I0121 07:57:56.676644 4893 scope.go:117] "RemoveContainer" containerID="5772b8015b0b3a5855751007d23db8e9b0a4ab6d81a801e3301f5b1a719ab8f4" Jan 21 07:57:56 crc kubenswrapper[4893]: I0121 07:57:56.676918 4893 util.go:48] "No ready sandbox for pod can be found. 
Jan 21 07:57:56 crc kubenswrapper[4893]: I0121 07:57:56.701740 4893 scope.go:117] "RemoveContainer" containerID="03a46901f5bb25e80b2f2f046d73b9c5174539bc069381ab2516c8168a6f470c"
Jan 21 07:57:56 crc kubenswrapper[4893]: I0121 07:57:56.722710 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ksr5r"]
Jan 21 07:57:56 crc kubenswrapper[4893]: I0121 07:57:56.728228 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ksr5r"]
Jan 21 07:57:56 crc kubenswrapper[4893]: I0121 07:57:56.742059 4893 scope.go:117] "RemoveContainer" containerID="05640a1eccb25f22deb868f7de728ca8bb3cee36d9ab41154268dcf860c6f746"
Jan 21 07:57:56 crc kubenswrapper[4893]: I0121 07:57:56.766649 4893 scope.go:117] "RemoveContainer" containerID="5772b8015b0b3a5855751007d23db8e9b0a4ab6d81a801e3301f5b1a719ab8f4"
Jan 21 07:57:56 crc kubenswrapper[4893]: E0121 07:57:56.767275 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5772b8015b0b3a5855751007d23db8e9b0a4ab6d81a801e3301f5b1a719ab8f4\": container with ID starting with 5772b8015b0b3a5855751007d23db8e9b0a4ab6d81a801e3301f5b1a719ab8f4 not found: ID does not exist" containerID="5772b8015b0b3a5855751007d23db8e9b0a4ab6d81a801e3301f5b1a719ab8f4"
Jan 21 07:57:56 crc kubenswrapper[4893]: I0121 07:57:56.767325 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5772b8015b0b3a5855751007d23db8e9b0a4ab6d81a801e3301f5b1a719ab8f4"} err="failed to get container status \"5772b8015b0b3a5855751007d23db8e9b0a4ab6d81a801e3301f5b1a719ab8f4\": rpc error: code = NotFound desc = could not find container \"5772b8015b0b3a5855751007d23db8e9b0a4ab6d81a801e3301f5b1a719ab8f4\": container with ID starting with 5772b8015b0b3a5855751007d23db8e9b0a4ab6d81a801e3301f5b1a719ab8f4 not found: ID does not exist"
Jan 21 07:57:56 crc kubenswrapper[4893]: I0121 07:57:56.767350 4893 scope.go:117] "RemoveContainer" containerID="03a46901f5bb25e80b2f2f046d73b9c5174539bc069381ab2516c8168a6f470c"
Jan 21 07:57:56 crc kubenswrapper[4893]: E0121 07:57:56.767724 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03a46901f5bb25e80b2f2f046d73b9c5174539bc069381ab2516c8168a6f470c\": container with ID starting with 03a46901f5bb25e80b2f2f046d73b9c5174539bc069381ab2516c8168a6f470c not found: ID does not exist" containerID="03a46901f5bb25e80b2f2f046d73b9c5174539bc069381ab2516c8168a6f470c"
Jan 21 07:57:56 crc kubenswrapper[4893]: I0121 07:57:56.767748 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03a46901f5bb25e80b2f2f046d73b9c5174539bc069381ab2516c8168a6f470c"} err="failed to get container status \"03a46901f5bb25e80b2f2f046d73b9c5174539bc069381ab2516c8168a6f470c\": rpc error: code = NotFound desc = could not find container \"03a46901f5bb25e80b2f2f046d73b9c5174539bc069381ab2516c8168a6f470c\": container with ID starting with 03a46901f5bb25e80b2f2f046d73b9c5174539bc069381ab2516c8168a6f470c not found: ID does not exist"
Jan 21 07:57:56 crc kubenswrapper[4893]: I0121 07:57:56.767898 4893 scope.go:117] "RemoveContainer" containerID="05640a1eccb25f22deb868f7de728ca8bb3cee36d9ab41154268dcf860c6f746"
Jan 21 07:57:56 crc kubenswrapper[4893]: E0121 07:57:56.768132 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05640a1eccb25f22deb868f7de728ca8bb3cee36d9ab41154268dcf860c6f746\": container with ID starting with 05640a1eccb25f22deb868f7de728ca8bb3cee36d9ab41154268dcf860c6f746 not found: ID does not exist" containerID="05640a1eccb25f22deb868f7de728ca8bb3cee36d9ab41154268dcf860c6f746"
Jan 21 07:57:56 crc kubenswrapper[4893]: I0121 07:57:56.768156 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05640a1eccb25f22deb868f7de728ca8bb3cee36d9ab41154268dcf860c6f746"} err="failed to get container status \"05640a1eccb25f22deb868f7de728ca8bb3cee36d9ab41154268dcf860c6f746\": rpc error: code = NotFound desc = could not find container \"05640a1eccb25f22deb868f7de728ca8bb3cee36d9ab41154268dcf860c6f746\": container with ID starting with 05640a1eccb25f22deb868f7de728ca8bb3cee36d9ab41154268dcf860c6f746 not found: ID does not exist"
err="rpc error: code = NotFound desc = could not find container \"05640a1eccb25f22deb868f7de728ca8bb3cee36d9ab41154268dcf860c6f746\": container with ID starting with 05640a1eccb25f22deb868f7de728ca8bb3cee36d9ab41154268dcf860c6f746 not found: ID does not exist" containerID="05640a1eccb25f22deb868f7de728ca8bb3cee36d9ab41154268dcf860c6f746" Jan 21 07:57:56 crc kubenswrapper[4893]: I0121 07:57:56.768156 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05640a1eccb25f22deb868f7de728ca8bb3cee36d9ab41154268dcf860c6f746"} err="failed to get container status \"05640a1eccb25f22deb868f7de728ca8bb3cee36d9ab41154268dcf860c6f746\": rpc error: code = NotFound desc = could not find container \"05640a1eccb25f22deb868f7de728ca8bb3cee36d9ab41154268dcf860c6f746\": container with ID starting with 05640a1eccb25f22deb868f7de728ca8bb3cee36d9ab41154268dcf860c6f746 not found: ID does not exist" Jan 21 07:57:57 crc kubenswrapper[4893]: I0121 07:57:57.590969 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dea59ed6-c812-4c2d-b3c4-d024a2c5e481" path="/var/lib/kubelet/pods/dea59ed6-c812-4c2d-b3c4-d024a2c5e481/volumes" Jan 21 07:57:58 crc kubenswrapper[4893]: I0121 07:57:58.656725 4893 patch_prober.go:28] interesting pod/machine-config-daemon-hg78p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 07:57:58 crc kubenswrapper[4893]: I0121 07:57:58.657105 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 07:57:58 crc kubenswrapper[4893]: I0121 07:57:58.657160 4893 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" Jan 21 07:57:58 crc kubenswrapper[4893]: I0121 07:57:58.657974 4893 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3a29a42815f4797738f166cd1b041d77da5e111165ef9f7740649e18b4757494"} pod="openshift-machine-config-operator/machine-config-daemon-hg78p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 07:57:58 crc kubenswrapper[4893]: I0121 07:57:58.658042 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" containerID="cri-o://3a29a42815f4797738f166cd1b041d77da5e111165ef9f7740649e18b4757494" gracePeriod=600 Jan 21 07:57:59 crc kubenswrapper[4893]: I0121 07:57:59.710421 4893 generic.go:334] "Generic (PLEG): container finished" podID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerID="3a29a42815f4797738f166cd1b041d77da5e111165ef9f7740649e18b4757494" exitCode=0 Jan 21 07:57:59 crc kubenswrapper[4893]: I0121 07:57:59.710494 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" event={"ID":"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a","Type":"ContainerDied","Data":"3a29a42815f4797738f166cd1b041d77da5e111165ef9f7740649e18b4757494"} 
Jan 21 07:57:59 crc kubenswrapper[4893]: I0121 07:57:59.710547 4893 scope.go:117] "RemoveContainer" containerID="c769ecc0927f4f9a28ef7ba50fc01160f6908ee021f090e1daefc20ab4d334e4"
Jan 21 07:58:00 crc kubenswrapper[4893]: E0121 07:58:00.830982 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a"
Jan 21 07:58:01 crc kubenswrapper[4893]: I0121 07:58:01.744623 4893 scope.go:117] "RemoveContainer" containerID="3a29a42815f4797738f166cd1b041d77da5e111165ef9f7740649e18b4757494"
Jan 21 07:58:01 crc kubenswrapper[4893]: E0121 07:58:01.746425 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a"
Jan 21 07:58:16 crc kubenswrapper[4893]: I0121 07:58:16.580763 4893 scope.go:117] "RemoveContainer" containerID="3a29a42815f4797738f166cd1b041d77da5e111165ef9f7740649e18b4757494"
Jan 21 07:58:16 crc kubenswrapper[4893]: E0121 07:58:16.581840 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a"
Jan 21 07:58:29 crc kubenswrapper[4893]: I0121 07:58:29.588135 4893 scope.go:117] "RemoveContainer" containerID="3a29a42815f4797738f166cd1b041d77da5e111165ef9f7740649e18b4757494"
Jan 21 07:58:29 crc kubenswrapper[4893]: E0121 07:58:29.589380 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a"
Jan 21 07:58:41 crc kubenswrapper[4893]: I0121 07:58:41.590688 4893 scope.go:117] "RemoveContainer" containerID="3a29a42815f4797738f166cd1b041d77da5e111165ef9f7740649e18b4757494"
Jan 21 07:58:41 crc kubenswrapper[4893]: E0121 07:58:41.591301 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a"
Jan 21 07:58:52 crc kubenswrapper[4893]: I0121 07:58:52.581390 4893 scope.go:117] "RemoveContainer" containerID="3a29a42815f4797738f166cd1b041d77da5e111165ef9f7740649e18b4757494"
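"back-off 5m0s" is the ceiling of the kubelet's per-container restart backoff; to my knowledge the delay starts at 10s and doubles per restart until it reaches that 5m cap, and the RemoveContainer/error pairs in this stretch are periodic sync retries landing inside the backoff window rather than the schedule itself. A sketch of the assumed schedule:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed kubelet defaults: 10s initial delay, doubling, 5m cap.
	delay, max := 10*time.Second, 5*time.Minute
	for i := 1; i <= 7; i++ {
		fmt.Printf("restart %d: wait %v\n", i, delay)
		if delay *= 2; delay > max {
			delay = max
		}
	}
	// restarts 1..7 wait: 10s 20s 40s 1m20s 2m40s 5m0s 5m0s
}
```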
Jan 21 07:58:52 crc kubenswrapper[4893]: E0121 07:58:52.582723 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a"
Jan 21 07:59:05 crc kubenswrapper[4893]: I0121 07:59:05.581893 4893 scope.go:117] "RemoveContainer" containerID="3a29a42815f4797738f166cd1b041d77da5e111165ef9f7740649e18b4757494"
Jan 21 07:59:05 crc kubenswrapper[4893]: E0121 07:59:05.582731 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a"
Jan 21 07:59:19 crc kubenswrapper[4893]: I0121 07:59:19.590036 4893 scope.go:117] "RemoveContainer" containerID="3a29a42815f4797738f166cd1b041d77da5e111165ef9f7740649e18b4757494"
Jan 21 07:59:19 crc kubenswrapper[4893]: E0121 07:59:19.591349 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a"
Jan 21 07:59:33 crc kubenswrapper[4893]: I0121 07:59:33.582006 4893 scope.go:117] "RemoveContainer" containerID="3a29a42815f4797738f166cd1b041d77da5e111165ef9f7740649e18b4757494"
Jan 21 07:59:33 crc kubenswrapper[4893]: E0121 07:59:33.582973 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a"
Jan 21 07:59:45 crc kubenswrapper[4893]: I0121 07:59:45.581170 4893 scope.go:117] "RemoveContainer" containerID="3a29a42815f4797738f166cd1b041d77da5e111165ef9f7740649e18b4757494"
Jan 21 07:59:45 crc kubenswrapper[4893]: E0121 07:59:45.582001 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a"
Jan 21 07:59:49 crc kubenswrapper[4893]: I0121 07:59:49.787069 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xzd9v"]
Jan 21 07:59:49 crc kubenswrapper[4893]: E0121 07:59:49.787821 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dea59ed6-c812-4c2d-b3c4-d024a2c5e481" containerName="registry-server"
Jan 21 07:59:49 crc kubenswrapper[4893]: I0121 07:59:49.787849 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="dea59ed6-c812-4c2d-b3c4-d024a2c5e481" containerName="registry-server"
Jan 21 07:59:49 crc kubenswrapper[4893]: E0121 07:59:49.787881 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dea59ed6-c812-4c2d-b3c4-d024a2c5e481" containerName="extract-content"
Jan 21 07:59:49 crc kubenswrapper[4893]: I0121 07:59:49.787887 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="dea59ed6-c812-4c2d-b3c4-d024a2c5e481" containerName="extract-content"
Jan 21 07:59:49 crc kubenswrapper[4893]: E0121 07:59:49.787900 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dea59ed6-c812-4c2d-b3c4-d024a2c5e481" containerName="extract-utilities"
Jan 21 07:59:49 crc kubenswrapper[4893]: I0121 07:59:49.787934 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="dea59ed6-c812-4c2d-b3c4-d024a2c5e481" containerName="extract-utilities"
Jan 21 07:59:49 crc kubenswrapper[4893]: I0121 07:59:49.788141 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="dea59ed6-c812-4c2d-b3c4-d024a2c5e481" containerName="registry-server"
Jan 21 07:59:49 crc kubenswrapper[4893]: I0121 07:59:49.789324 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xzd9v"
Jan 21 07:59:49 crc kubenswrapper[4893]: I0121 07:59:49.827524 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xzd9v"]
Jan 21 07:59:49 crc kubenswrapper[4893]: I0121 07:59:49.873980 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvxg2\" (UniqueName: \"kubernetes.io/projected/3b58cec0-88b4-4456-976b-97578116710c-kube-api-access-lvxg2\") pod \"certified-operators-xzd9v\" (UID: \"3b58cec0-88b4-4456-976b-97578116710c\") " pod="openshift-marketplace/certified-operators-xzd9v"
Jan 21 07:59:49 crc kubenswrapper[4893]: I0121 07:59:49.874074 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b58cec0-88b4-4456-976b-97578116710c-utilities\") pod \"certified-operators-xzd9v\" (UID: \"3b58cec0-88b4-4456-976b-97578116710c\") " pod="openshift-marketplace/certified-operators-xzd9v"
Jan 21 07:59:49 crc kubenswrapper[4893]: I0121 07:59:49.874119 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b58cec0-88b4-4456-976b-97578116710c-catalog-content\") pod \"certified-operators-xzd9v\" (UID: \"3b58cec0-88b4-4456-976b-97578116710c\") " pod="openshift-marketplace/certified-operators-xzd9v"
Jan 21 07:59:49 crc kubenswrapper[4893]: I0121 07:59:49.976175 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvxg2\" (UniqueName: \"kubernetes.io/projected/3b58cec0-88b4-4456-976b-97578116710c-kube-api-access-lvxg2\") pod \"certified-operators-xzd9v\" (UID: \"3b58cec0-88b4-4456-976b-97578116710c\") " pod="openshift-marketplace/certified-operators-xzd9v"
Jan 21 07:59:49 crc kubenswrapper[4893]: I0121 07:59:49.976270 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b58cec0-88b4-4456-976b-97578116710c-utilities\") pod \"certified-operators-xzd9v\" (UID: \"3b58cec0-88b4-4456-976b-97578116710c\") " pod="openshift-marketplace/certified-operators-xzd9v"
pod="openshift-marketplace/certified-operators-xzd9v" Jan 21 07:59:49 crc kubenswrapper[4893]: I0121 07:59:49.976296 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b58cec0-88b4-4456-976b-97578116710c-catalog-content\") pod \"certified-operators-xzd9v\" (UID: \"3b58cec0-88b4-4456-976b-97578116710c\") " pod="openshift-marketplace/certified-operators-xzd9v" Jan 21 07:59:49 crc kubenswrapper[4893]: I0121 07:59:49.976957 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b58cec0-88b4-4456-976b-97578116710c-catalog-content\") pod \"certified-operators-xzd9v\" (UID: \"3b58cec0-88b4-4456-976b-97578116710c\") " pod="openshift-marketplace/certified-operators-xzd9v" Jan 21 07:59:49 crc kubenswrapper[4893]: I0121 07:59:49.976999 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b58cec0-88b4-4456-976b-97578116710c-utilities\") pod \"certified-operators-xzd9v\" (UID: \"3b58cec0-88b4-4456-976b-97578116710c\") " pod="openshift-marketplace/certified-operators-xzd9v" Jan 21 07:59:56 crc kubenswrapper[4893]: I0121 07:59:56.706375 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvxg2\" (UniqueName: \"kubernetes.io/projected/3b58cec0-88b4-4456-976b-97578116710c-kube-api-access-lvxg2\") pod \"certified-operators-xzd9v\" (UID: \"3b58cec0-88b4-4456-976b-97578116710c\") " pod="openshift-marketplace/certified-operators-xzd9v" Jan 21 07:59:56 crc kubenswrapper[4893]: I0121 07:59:56.715796 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xzd9v" Jan 21 07:59:57 crc kubenswrapper[4893]: I0121 07:59:57.208515 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xzd9v"] Jan 21 07:59:57 crc kubenswrapper[4893]: I0121 07:59:57.501267 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xzd9v" event={"ID":"3b58cec0-88b4-4456-976b-97578116710c","Type":"ContainerStarted","Data":"55abd569443822d082ca076f596ef36966864f963e4d2c8b6f4cd318249eede7"} Jan 21 07:59:58 crc kubenswrapper[4893]: I0121 07:59:58.512196 4893 generic.go:334] "Generic (PLEG): container finished" podID="3b58cec0-88b4-4456-976b-97578116710c" containerID="e9ee8293cf7e90bcf1cb2db3f0ce1badaf406c6002da2231c2f279045a56f449" exitCode=0 Jan 21 07:59:58 crc kubenswrapper[4893]: I0121 07:59:58.512254 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xzd9v" event={"ID":"3b58cec0-88b4-4456-976b-97578116710c","Type":"ContainerDied","Data":"e9ee8293cf7e90bcf1cb2db3f0ce1badaf406c6002da2231c2f279045a56f449"} Jan 21 07:59:59 crc kubenswrapper[4893]: I0121 07:59:59.524410 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xzd9v" event={"ID":"3b58cec0-88b4-4456-976b-97578116710c","Type":"ContainerStarted","Data":"2718c32deac00814bd94c4c8cca974674138a88e5916e27b448e752b4d049bf2"} Jan 21 07:59:59 crc kubenswrapper[4893]: I0121 07:59:59.586463 4893 scope.go:117] "RemoveContainer" containerID="3a29a42815f4797738f166cd1b041d77da5e111165ef9f7740649e18b4757494" Jan 21 07:59:59 crc kubenswrapper[4893]: E0121 07:59:59.587057 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 08:00:00 crc kubenswrapper[4893]: I0121 08:00:00.192641 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483040-gjhdn"] Jan 21 08:00:00 crc kubenswrapper[4893]: I0121 08:00:00.193977 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483040-gjhdn" Jan 21 08:00:00 crc kubenswrapper[4893]: I0121 08:00:00.196299 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 08:00:00 crc kubenswrapper[4893]: I0121 08:00:00.196971 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 08:00:00 crc kubenswrapper[4893]: I0121 08:00:00.206291 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483040-gjhdn"] Jan 21 08:00:00 crc kubenswrapper[4893]: I0121 08:00:00.337318 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/38fde911-3e04-4872-92e6-abbc9753c887-secret-volume\") pod \"collect-profiles-29483040-gjhdn\" (UID: \"38fde911-3e04-4872-92e6-abbc9753c887\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483040-gjhdn" Jan 21 08:00:00 crc kubenswrapper[4893]: I0121 08:00:00.337894 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85hjh\" (UniqueName: \"kubernetes.io/projected/38fde911-3e04-4872-92e6-abbc9753c887-kube-api-access-85hjh\") pod \"collect-profiles-29483040-gjhdn\" (UID: \"38fde911-3e04-4872-92e6-abbc9753c887\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483040-gjhdn" Jan 21 08:00:00 crc kubenswrapper[4893]: I0121 08:00:00.338029 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/38fde911-3e04-4872-92e6-abbc9753c887-config-volume\") pod \"collect-profiles-29483040-gjhdn\" (UID: \"38fde911-3e04-4872-92e6-abbc9753c887\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483040-gjhdn" Jan 21 08:00:00 crc kubenswrapper[4893]: I0121 08:00:00.438965 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/38fde911-3e04-4872-92e6-abbc9753c887-secret-volume\") pod \"collect-profiles-29483040-gjhdn\" (UID: \"38fde911-3e04-4872-92e6-abbc9753c887\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483040-gjhdn" Jan 21 08:00:00 crc kubenswrapper[4893]: I0121 08:00:00.440131 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85hjh\" (UniqueName: \"kubernetes.io/projected/38fde911-3e04-4872-92e6-abbc9753c887-kube-api-access-85hjh\") pod \"collect-profiles-29483040-gjhdn\" (UID: \"38fde911-3e04-4872-92e6-abbc9753c887\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483040-gjhdn" Jan 21 08:00:00 crc 
kubenswrapper[4893]: I0121 08:00:00.440270 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/38fde911-3e04-4872-92e6-abbc9753c887-config-volume\") pod \"collect-profiles-29483040-gjhdn\" (UID: \"38fde911-3e04-4872-92e6-abbc9753c887\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483040-gjhdn" Jan 21 08:00:00 crc kubenswrapper[4893]: I0121 08:00:00.441356 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/38fde911-3e04-4872-92e6-abbc9753c887-config-volume\") pod \"collect-profiles-29483040-gjhdn\" (UID: \"38fde911-3e04-4872-92e6-abbc9753c887\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483040-gjhdn" Jan 21 08:00:00 crc kubenswrapper[4893]: I0121 08:00:00.457037 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/38fde911-3e04-4872-92e6-abbc9753c887-secret-volume\") pod \"collect-profiles-29483040-gjhdn\" (UID: \"38fde911-3e04-4872-92e6-abbc9753c887\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483040-gjhdn" Jan 21 08:00:00 crc kubenswrapper[4893]: I0121 08:00:00.476108 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85hjh\" (UniqueName: \"kubernetes.io/projected/38fde911-3e04-4872-92e6-abbc9753c887-kube-api-access-85hjh\") pod \"collect-profiles-29483040-gjhdn\" (UID: \"38fde911-3e04-4872-92e6-abbc9753c887\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483040-gjhdn" Jan 21 08:00:00 crc kubenswrapper[4893]: I0121 08:00:00.522719 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483040-gjhdn" Jan 21 08:00:00 crc kubenswrapper[4893]: I0121 08:00:00.532761 4893 generic.go:334] "Generic (PLEG): container finished" podID="3b58cec0-88b4-4456-976b-97578116710c" containerID="2718c32deac00814bd94c4c8cca974674138a88e5916e27b448e752b4d049bf2" exitCode=0 Jan 21 08:00:00 crc kubenswrapper[4893]: I0121 08:00:00.532977 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xzd9v" event={"ID":"3b58cec0-88b4-4456-976b-97578116710c","Type":"ContainerDied","Data":"2718c32deac00814bd94c4c8cca974674138a88e5916e27b448e752b4d049bf2"} Jan 21 08:00:00 crc kubenswrapper[4893]: I0121 08:00:00.945232 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483040-gjhdn"] Jan 21 08:00:00 crc kubenswrapper[4893]: W0121 08:00:00.947076 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38fde911_3e04_4872_92e6_abbc9753c887.slice/crio-d8aae2e5363f31ceb61ac7923e551bcfc5a37af9b8c75a698e85e99ca02c206d WatchSource:0}: Error finding container d8aae2e5363f31ceb61ac7923e551bcfc5a37af9b8c75a698e85e99ca02c206d: Status 404 returned error can't find the container with id d8aae2e5363f31ceb61ac7923e551bcfc5a37af9b8c75a698e85e99ca02c206d Jan 21 08:00:01 crc kubenswrapper[4893]: I0121 08:00:01.542565 4893 generic.go:334] "Generic (PLEG): container finished" podID="38fde911-3e04-4872-92e6-abbc9753c887" containerID="324743a2c279939b8756cf4d93e2f4c282e77898b5d6d7cda86a72c3e1144576" exitCode=0 Jan 21 08:00:01 crc kubenswrapper[4893]: I0121 08:00:01.542718 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29483040-gjhdn" event={"ID":"38fde911-3e04-4872-92e6-abbc9753c887","Type":"ContainerDied","Data":"324743a2c279939b8756cf4d93e2f4c282e77898b5d6d7cda86a72c3e1144576"} Jan 21 08:00:01 crc kubenswrapper[4893]: I0121 08:00:01.543015 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483040-gjhdn" event={"ID":"38fde911-3e04-4872-92e6-abbc9753c887","Type":"ContainerStarted","Data":"d8aae2e5363f31ceb61ac7923e551bcfc5a37af9b8c75a698e85e99ca02c206d"} Jan 21 08:00:01 crc kubenswrapper[4893]: I0121 08:00:01.547589 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xzd9v" event={"ID":"3b58cec0-88b4-4456-976b-97578116710c","Type":"ContainerStarted","Data":"e21313f5f483ca6773c925cb3b59e37985dccad8c384e0e29c25ea51b622bd56"} Jan 21 08:00:01 crc kubenswrapper[4893]: I0121 08:00:01.595542 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xzd9v" podStartSLOduration=9.901834261 podStartE2EDuration="12.595508778s" podCreationTimestamp="2026-01-21 07:59:49 +0000 UTC" firstStartedPulling="2026-01-21 07:59:58.517530756 +0000 UTC m=+3939.747876658" lastFinishedPulling="2026-01-21 08:00:01.211205273 +0000 UTC m=+3942.441551175" observedRunningTime="2026-01-21 08:00:01.590565748 +0000 UTC m=+3942.820911650" watchObservedRunningTime="2026-01-21 08:00:01.595508778 +0000 UTC m=+3942.825854680" Jan 21 08:00:02 crc kubenswrapper[4893]: I0121 08:00:02.859731 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483040-gjhdn" Jan 21 08:00:03 crc kubenswrapper[4893]: I0121 08:00:03.017568 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/38fde911-3e04-4872-92e6-abbc9753c887-config-volume\") pod \"38fde911-3e04-4872-92e6-abbc9753c887\" (UID: \"38fde911-3e04-4872-92e6-abbc9753c887\") " Jan 21 08:00:03 crc kubenswrapper[4893]: I0121 08:00:03.017628 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85hjh\" (UniqueName: \"kubernetes.io/projected/38fde911-3e04-4872-92e6-abbc9753c887-kube-api-access-85hjh\") pod \"38fde911-3e04-4872-92e6-abbc9753c887\" (UID: \"38fde911-3e04-4872-92e6-abbc9753c887\") " Jan 21 08:00:03 crc kubenswrapper[4893]: I0121 08:00:03.017746 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/38fde911-3e04-4872-92e6-abbc9753c887-secret-volume\") pod \"38fde911-3e04-4872-92e6-abbc9753c887\" (UID: \"38fde911-3e04-4872-92e6-abbc9753c887\") " Jan 21 08:00:03 crc kubenswrapper[4893]: I0121 08:00:03.018779 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38fde911-3e04-4872-92e6-abbc9753c887-config-volume" (OuterVolumeSpecName: "config-volume") pod "38fde911-3e04-4872-92e6-abbc9753c887" (UID: "38fde911-3e04-4872-92e6-abbc9753c887"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 08:00:03 crc kubenswrapper[4893]: I0121 08:00:03.025507 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38fde911-3e04-4872-92e6-abbc9753c887-kube-api-access-85hjh" (OuterVolumeSpecName: "kube-api-access-85hjh") pod "38fde911-3e04-4872-92e6-abbc9753c887" (UID: "38fde911-3e04-4872-92e6-abbc9753c887"). InnerVolumeSpecName "kube-api-access-85hjh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 08:00:03 crc kubenswrapper[4893]: I0121 08:00:03.043057 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38fde911-3e04-4872-92e6-abbc9753c887-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "38fde911-3e04-4872-92e6-abbc9753c887" (UID: "38fde911-3e04-4872-92e6-abbc9753c887"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 08:00:03 crc kubenswrapper[4893]: I0121 08:00:03.120601 4893 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/38fde911-3e04-4872-92e6-abbc9753c887-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 08:00:03 crc kubenswrapper[4893]: I0121 08:00:03.120650 4893 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/38fde911-3e04-4872-92e6-abbc9753c887-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 08:00:03 crc kubenswrapper[4893]: I0121 08:00:03.120698 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85hjh\" (UniqueName: \"kubernetes.io/projected/38fde911-3e04-4872-92e6-abbc9753c887-kube-api-access-85hjh\") on node \"crc\" DevicePath \"\"" Jan 21 08:00:03 crc kubenswrapper[4893]: I0121 08:00:03.565597 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483040-gjhdn" event={"ID":"38fde911-3e04-4872-92e6-abbc9753c887","Type":"ContainerDied","Data":"d8aae2e5363f31ceb61ac7923e551bcfc5a37af9b8c75a698e85e99ca02c206d"} Jan 21 08:00:03 crc kubenswrapper[4893]: I0121 08:00:03.565651 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483040-gjhdn" Jan 21 08:00:03 crc kubenswrapper[4893]: I0121 08:00:03.565753 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8aae2e5363f31ceb61ac7923e551bcfc5a37af9b8c75a698e85e99ca02c206d" Jan 21 08:00:03 crc kubenswrapper[4893]: I0121 08:00:03.981382 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482995-kdkr9"] Jan 21 08:00:03 crc kubenswrapper[4893]: I0121 08:00:03.987611 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482995-kdkr9"] Jan 21 08:00:05 crc kubenswrapper[4893]: I0121 08:00:05.591551 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43a75791-1765-4b28-81d7-9baddda40b7c" path="/var/lib/kubelet/pods/43a75791-1765-4b28-81d7-9baddda40b7c/volumes" Jan 21 08:00:06 crc kubenswrapper[4893]: I0121 08:00:06.717400 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xzd9v" Jan 21 08:00:06 crc kubenswrapper[4893]: I0121 08:00:06.717619 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xzd9v" Jan 21 08:00:06 crc kubenswrapper[4893]: I0121 08:00:06.777814 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xzd9v" Jan 21 08:00:07 crc kubenswrapper[4893]: I0121 08:00:07.950121 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xzd9v" Jan 21 08:00:08 crc kubenswrapper[4893]: I0121 08:00:08.002568 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xzd9v"] Jan 21 08:00:09 crc kubenswrapper[4893]: I0121 08:00:09.630411 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xzd9v" podUID="3b58cec0-88b4-4456-976b-97578116710c" containerName="registry-server" containerID="cri-o://e21313f5f483ca6773c925cb3b59e37985dccad8c384e0e29c25ea51b622bd56" gracePeriod=2 Jan 21 08:00:12 crc kubenswrapper[4893]: I0121 08:00:12.662924 4893 generic.go:334] "Generic (PLEG): container finished" podID="3b58cec0-88b4-4456-976b-97578116710c" containerID="e21313f5f483ca6773c925cb3b59e37985dccad8c384e0e29c25ea51b622bd56" exitCode=0 Jan 21 08:00:12 crc kubenswrapper[4893]: I0121 08:00:12.663005 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xzd9v" event={"ID":"3b58cec0-88b4-4456-976b-97578116710c","Type":"ContainerDied","Data":"e21313f5f483ca6773c925cb3b59e37985dccad8c384e0e29c25ea51b622bd56"} Jan 21 08:00:12 crc kubenswrapper[4893]: I0121 08:00:12.795906 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xzd9v" Jan 21 08:00:12 crc kubenswrapper[4893]: I0121 08:00:12.887259 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b58cec0-88b4-4456-976b-97578116710c-catalog-content\") pod \"3b58cec0-88b4-4456-976b-97578116710c\" (UID: \"3b58cec0-88b4-4456-976b-97578116710c\") " Jan 21 08:00:12 crc kubenswrapper[4893]: I0121 08:00:12.887341 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvxg2\" (UniqueName: \"kubernetes.io/projected/3b58cec0-88b4-4456-976b-97578116710c-kube-api-access-lvxg2\") pod \"3b58cec0-88b4-4456-976b-97578116710c\" (UID: \"3b58cec0-88b4-4456-976b-97578116710c\") " Jan 21 08:00:12 crc kubenswrapper[4893]: I0121 08:00:12.887505 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b58cec0-88b4-4456-976b-97578116710c-utilities\") pod \"3b58cec0-88b4-4456-976b-97578116710c\" (UID: \"3b58cec0-88b4-4456-976b-97578116710c\") " Jan 21 08:00:12 crc kubenswrapper[4893]: I0121 08:00:12.888883 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b58cec0-88b4-4456-976b-97578116710c-utilities" (OuterVolumeSpecName: "utilities") pod "3b58cec0-88b4-4456-976b-97578116710c" (UID: "3b58cec0-88b4-4456-976b-97578116710c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 08:00:12 crc kubenswrapper[4893]: I0121 08:00:12.893660 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b58cec0-88b4-4456-976b-97578116710c-kube-api-access-lvxg2" (OuterVolumeSpecName: "kube-api-access-lvxg2") pod "3b58cec0-88b4-4456-976b-97578116710c" (UID: "3b58cec0-88b4-4456-976b-97578116710c"). InnerVolumeSpecName "kube-api-access-lvxg2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 08:00:12 crc kubenswrapper[4893]: I0121 08:00:12.932184 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b58cec0-88b4-4456-976b-97578116710c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3b58cec0-88b4-4456-976b-97578116710c" (UID: "3b58cec0-88b4-4456-976b-97578116710c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 08:00:12 crc kubenswrapper[4893]: I0121 08:00:12.989064 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b58cec0-88b4-4456-976b-97578116710c-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 08:00:12 crc kubenswrapper[4893]: I0121 08:00:12.989107 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b58cec0-88b4-4456-976b-97578116710c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 08:00:12 crc kubenswrapper[4893]: I0121 08:00:12.989123 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvxg2\" (UniqueName: \"kubernetes.io/projected/3b58cec0-88b4-4456-976b-97578116710c-kube-api-access-lvxg2\") on node \"crc\" DevicePath \"\"" Jan 21 08:00:13 crc kubenswrapper[4893]: I0121 08:00:13.675340 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xzd9v" event={"ID":"3b58cec0-88b4-4456-976b-97578116710c","Type":"ContainerDied","Data":"55abd569443822d082ca076f596ef36966864f963e4d2c8b6f4cd318249eede7"} Jan 21 08:00:13 crc kubenswrapper[4893]: I0121 08:00:13.675606 4893 scope.go:117] "RemoveContainer" containerID="e21313f5f483ca6773c925cb3b59e37985dccad8c384e0e29c25ea51b622bd56" Jan 21 08:00:13 crc kubenswrapper[4893]: I0121 08:00:13.675496 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xzd9v" Jan 21 08:00:13 crc kubenswrapper[4893]: I0121 08:00:13.707205 4893 scope.go:117] "RemoveContainer" containerID="2718c32deac00814bd94c4c8cca974674138a88e5916e27b448e752b4d049bf2" Jan 21 08:00:13 crc kubenswrapper[4893]: I0121 08:00:13.733165 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xzd9v"] Jan 21 08:00:13 crc kubenswrapper[4893]: I0121 08:00:13.738616 4893 scope.go:117] "RemoveContainer" containerID="e9ee8293cf7e90bcf1cb2db3f0ce1badaf406c6002da2231c2f279045a56f449" Jan 21 08:00:13 crc kubenswrapper[4893]: I0121 08:00:13.744733 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xzd9v"] Jan 21 08:00:13 crc kubenswrapper[4893]: E0121 08:00:13.763751 4893 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b58cec0_88b4_4456_976b_97578116710c.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b58cec0_88b4_4456_976b_97578116710c.slice/crio-55abd569443822d082ca076f596ef36966864f963e4d2c8b6f4cd318249eede7\": RecentStats: unable to find data in memory cache]" Jan 21 08:00:14 crc kubenswrapper[4893]: I0121 08:00:14.582177 4893 scope.go:117] "RemoveContainer" containerID="3a29a42815f4797738f166cd1b041d77da5e111165ef9f7740649e18b4757494" Jan 21 08:00:14 crc kubenswrapper[4893]: E0121 08:00:14.582650 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 08:00:15 crc kubenswrapper[4893]: 
I0121 08:00:15.598471 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b58cec0-88b4-4456-976b-97578116710c" path="/var/lib/kubelet/pods/3b58cec0-88b4-4456-976b-97578116710c/volumes" Jan 21 08:00:29 crc kubenswrapper[4893]: I0121 08:00:29.586577 4893 scope.go:117] "RemoveContainer" containerID="3a29a42815f4797738f166cd1b041d77da5e111165ef9f7740649e18b4757494" Jan 21 08:00:29 crc kubenswrapper[4893]: E0121 08:00:29.587409 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 08:00:36 crc kubenswrapper[4893]: I0121 08:00:36.550777 4893 scope.go:117] "RemoveContainer" containerID="92dc6e4c20c00232792f3a8eb0d27902f12d85fb95fbcb1feac828c5bceb0925" Jan 21 08:00:42 crc kubenswrapper[4893]: I0121 08:00:42.581318 4893 scope.go:117] "RemoveContainer" containerID="3a29a42815f4797738f166cd1b041d77da5e111165ef9f7740649e18b4757494" Jan 21 08:00:42 crc kubenswrapper[4893]: E0121 08:00:42.582040 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 08:00:55 crc kubenswrapper[4893]: I0121 08:00:55.581467 4893 scope.go:117] "RemoveContainer" containerID="3a29a42815f4797738f166cd1b041d77da5e111165ef9f7740649e18b4757494" Jan 21 08:00:55 crc kubenswrapper[4893]: E0121 08:00:55.582217 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 08:01:07 crc kubenswrapper[4893]: I0121 08:01:07.581425 4893 scope.go:117] "RemoveContainer" containerID="3a29a42815f4797738f166cd1b041d77da5e111165ef9f7740649e18b4757494" Jan 21 08:01:07 crc kubenswrapper[4893]: E0121 08:01:07.582297 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 08:01:20 crc kubenswrapper[4893]: I0121 08:01:20.581047 4893 scope.go:117] "RemoveContainer" containerID="3a29a42815f4797738f166cd1b041d77da5e111165ef9f7740649e18b4757494" Jan 21 08:01:20 crc kubenswrapper[4893]: E0121 08:01:20.581922 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 08:01:31 crc kubenswrapper[4893]: I0121 08:01:31.582081 4893 scope.go:117] "RemoveContainer" containerID="3a29a42815f4797738f166cd1b041d77da5e111165ef9f7740649e18b4757494" Jan 21 08:01:31 crc kubenswrapper[4893]: E0121 08:01:31.582830 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 08:01:34 crc kubenswrapper[4893]: I0121 08:01:34.065887 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-snjbv"] Jan 21 08:01:34 crc kubenswrapper[4893]: E0121 08:01:34.066361 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b58cec0-88b4-4456-976b-97578116710c" containerName="extract-content" Jan 21 08:01:34 crc kubenswrapper[4893]: I0121 08:01:34.066380 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b58cec0-88b4-4456-976b-97578116710c" containerName="extract-content" Jan 21 08:01:34 crc kubenswrapper[4893]: E0121 08:01:34.066406 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b58cec0-88b4-4456-976b-97578116710c" containerName="registry-server" Jan 21 08:01:34 crc kubenswrapper[4893]: I0121 08:01:34.066416 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b58cec0-88b4-4456-976b-97578116710c" containerName="registry-server" Jan 21 08:01:34 crc kubenswrapper[4893]: E0121 08:01:34.066438 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38fde911-3e04-4872-92e6-abbc9753c887" containerName="collect-profiles" Jan 21 08:01:34 crc kubenswrapper[4893]: I0121 08:01:34.066450 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="38fde911-3e04-4872-92e6-abbc9753c887" containerName="collect-profiles" Jan 21 08:01:34 crc kubenswrapper[4893]: E0121 08:01:34.066470 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b58cec0-88b4-4456-976b-97578116710c" containerName="extract-utilities" Jan 21 08:01:34 crc kubenswrapper[4893]: I0121 08:01:34.066479 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b58cec0-88b4-4456-976b-97578116710c" containerName="extract-utilities" Jan 21 08:01:34 crc kubenswrapper[4893]: I0121 08:01:34.066652 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b58cec0-88b4-4456-976b-97578116710c" containerName="registry-server" Jan 21 08:01:34 crc kubenswrapper[4893]: I0121 08:01:34.066702 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="38fde911-3e04-4872-92e6-abbc9753c887" containerName="collect-profiles" Jan 21 08:01:34 crc kubenswrapper[4893]: I0121 08:01:34.068005 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-snjbv" Jan 21 08:01:34 crc kubenswrapper[4893]: I0121 08:01:34.080895 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-snjbv"] Jan 21 08:01:34 crc kubenswrapper[4893]: I0121 08:01:34.155374 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/643747e3-f11a-4471-9b6a-c6a7bc2fc94f-utilities\") pod \"community-operators-snjbv\" (UID: \"643747e3-f11a-4471-9b6a-c6a7bc2fc94f\") " pod="openshift-marketplace/community-operators-snjbv" Jan 21 08:01:34 crc kubenswrapper[4893]: I0121 08:01:34.155456 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/643747e3-f11a-4471-9b6a-c6a7bc2fc94f-catalog-content\") pod \"community-operators-snjbv\" (UID: \"643747e3-f11a-4471-9b6a-c6a7bc2fc94f\") " pod="openshift-marketplace/community-operators-snjbv" Jan 21 08:01:34 crc kubenswrapper[4893]: I0121 08:01:34.155506 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q82nw\" (UniqueName: \"kubernetes.io/projected/643747e3-f11a-4471-9b6a-c6a7bc2fc94f-kube-api-access-q82nw\") pod \"community-operators-snjbv\" (UID: \"643747e3-f11a-4471-9b6a-c6a7bc2fc94f\") " pod="openshift-marketplace/community-operators-snjbv" Jan 21 08:01:34 crc kubenswrapper[4893]: I0121 08:01:34.257016 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/643747e3-f11a-4471-9b6a-c6a7bc2fc94f-utilities\") pod \"community-operators-snjbv\" (UID: \"643747e3-f11a-4471-9b6a-c6a7bc2fc94f\") " pod="openshift-marketplace/community-operators-snjbv" Jan 21 08:01:34 crc kubenswrapper[4893]: I0121 08:01:34.257128 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/643747e3-f11a-4471-9b6a-c6a7bc2fc94f-catalog-content\") pod \"community-operators-snjbv\" (UID: \"643747e3-f11a-4471-9b6a-c6a7bc2fc94f\") " pod="openshift-marketplace/community-operators-snjbv" Jan 21 08:01:34 crc kubenswrapper[4893]: I0121 08:01:34.257178 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q82nw\" (UniqueName: \"kubernetes.io/projected/643747e3-f11a-4471-9b6a-c6a7bc2fc94f-kube-api-access-q82nw\") pod \"community-operators-snjbv\" (UID: \"643747e3-f11a-4471-9b6a-c6a7bc2fc94f\") " pod="openshift-marketplace/community-operators-snjbv" Jan 21 08:01:34 crc kubenswrapper[4893]: I0121 08:01:34.257706 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/643747e3-f11a-4471-9b6a-c6a7bc2fc94f-utilities\") pod \"community-operators-snjbv\" (UID: \"643747e3-f11a-4471-9b6a-c6a7bc2fc94f\") " pod="openshift-marketplace/community-operators-snjbv" Jan 21 08:01:34 crc kubenswrapper[4893]: I0121 08:01:34.257757 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/643747e3-f11a-4471-9b6a-c6a7bc2fc94f-catalog-content\") pod \"community-operators-snjbv\" (UID: \"643747e3-f11a-4471-9b6a-c6a7bc2fc94f\") " pod="openshift-marketplace/community-operators-snjbv" Jan 21 08:01:34 crc kubenswrapper[4893]: I0121 08:01:34.278290 4893 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-q82nw\" (UniqueName: \"kubernetes.io/projected/643747e3-f11a-4471-9b6a-c6a7bc2fc94f-kube-api-access-q82nw\") pod \"community-operators-snjbv\" (UID: \"643747e3-f11a-4471-9b6a-c6a7bc2fc94f\") " pod="openshift-marketplace/community-operators-snjbv" Jan 21 08:01:34 crc kubenswrapper[4893]: I0121 08:01:34.390149 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-snjbv" Jan 21 08:01:34 crc kubenswrapper[4893]: I0121 08:01:34.806037 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-snjbv"] Jan 21 08:01:35 crc kubenswrapper[4893]: E0121 08:01:35.463445 4893 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod643747e3_f11a_4471_9b6a_c6a7bc2fc94f.slice/crio-675be1dd5c28560ac880823c022e01dc1a67a9be898939b0f967e84afdca7e07.scope\": RecentStats: unable to find data in memory cache]" Jan 21 08:01:35 crc kubenswrapper[4893]: I0121 08:01:35.644537 4893 generic.go:334] "Generic (PLEG): container finished" podID="643747e3-f11a-4471-9b6a-c6a7bc2fc94f" containerID="675be1dd5c28560ac880823c022e01dc1a67a9be898939b0f967e84afdca7e07" exitCode=0 Jan 21 08:01:35 crc kubenswrapper[4893]: I0121 08:01:35.644601 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-snjbv" event={"ID":"643747e3-f11a-4471-9b6a-c6a7bc2fc94f","Type":"ContainerDied","Data":"675be1dd5c28560ac880823c022e01dc1a67a9be898939b0f967e84afdca7e07"} Jan 21 08:01:35 crc kubenswrapper[4893]: I0121 08:01:35.644636 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-snjbv" event={"ID":"643747e3-f11a-4471-9b6a-c6a7bc2fc94f","Type":"ContainerStarted","Data":"64e67f904e59f9bc64199f7e32b00848328365143f41fa98bb07d9fc62ab72ea"} Jan 21 08:01:38 crc kubenswrapper[4893]: I0121 08:01:38.666654 4893 generic.go:334] "Generic (PLEG): container finished" podID="643747e3-f11a-4471-9b6a-c6a7bc2fc94f" containerID="696fd878c58166ccdecca8db12b82d524887624171a64f867f6200bc3c7319a9" exitCode=0 Jan 21 08:01:38 crc kubenswrapper[4893]: I0121 08:01:38.666906 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-snjbv" event={"ID":"643747e3-f11a-4471-9b6a-c6a7bc2fc94f","Type":"ContainerDied","Data":"696fd878c58166ccdecca8db12b82d524887624171a64f867f6200bc3c7319a9"} Jan 21 08:01:39 crc kubenswrapper[4893]: I0121 08:01:39.676315 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-snjbv" event={"ID":"643747e3-f11a-4471-9b6a-c6a7bc2fc94f","Type":"ContainerStarted","Data":"a8ebc5c7cf32d6e01341c7283fdb5ad6edc07fd4d4fa53d6f15b8d19e46b2e72"} Jan 21 08:01:44 crc kubenswrapper[4893]: I0121 08:01:44.390755 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-snjbv" Jan 21 08:01:44 crc kubenswrapper[4893]: I0121 08:01:44.391377 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-snjbv" Jan 21 08:01:44 crc kubenswrapper[4893]: I0121 08:01:44.610136 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-snjbv" Jan 21 08:01:44 crc kubenswrapper[4893]: I0121 08:01:44.660911 4893 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-snjbv" podStartSLOduration=7.202036244 podStartE2EDuration="10.660881433s" podCreationTimestamp="2026-01-21 08:01:34 +0000 UTC" firstStartedPulling="2026-01-21 08:01:35.646987713 +0000 UTC m=+4036.877333615" lastFinishedPulling="2026-01-21 08:01:39.105832902 +0000 UTC m=+4040.336178804" observedRunningTime="2026-01-21 08:01:39.699433387 +0000 UTC m=+4040.929779289" watchObservedRunningTime="2026-01-21 08:01:44.660881433 +0000 UTC m=+4045.891227335" Jan 21 08:01:44 crc kubenswrapper[4893]: I0121 08:01:44.979418 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-snjbv" Jan 21 08:01:45 crc kubenswrapper[4893]: I0121 08:01:45.027086 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-snjbv"] Jan 21 08:01:46 crc kubenswrapper[4893]: I0121 08:01:46.581655 4893 scope.go:117] "RemoveContainer" containerID="3a29a42815f4797738f166cd1b041d77da5e111165ef9f7740649e18b4757494" Jan 21 08:01:46 crc kubenswrapper[4893]: E0121 08:01:46.581976 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 08:01:46 crc kubenswrapper[4893]: I0121 08:01:46.936402 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-snjbv" podUID="643747e3-f11a-4471-9b6a-c6a7bc2fc94f" containerName="registry-server" containerID="cri-o://a8ebc5c7cf32d6e01341c7283fdb5ad6edc07fd4d4fa53d6f15b8d19e46b2e72" gracePeriod=2 Jan 21 08:01:47 crc kubenswrapper[4893]: I0121 08:01:47.950992 4893 generic.go:334] "Generic (PLEG): container finished" podID="643747e3-f11a-4471-9b6a-c6a7bc2fc94f" containerID="a8ebc5c7cf32d6e01341c7283fdb5ad6edc07fd4d4fa53d6f15b8d19e46b2e72" exitCode=0 Jan 21 08:01:47 crc kubenswrapper[4893]: I0121 08:01:47.951059 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-snjbv" event={"ID":"643747e3-f11a-4471-9b6a-c6a7bc2fc94f","Type":"ContainerDied","Data":"a8ebc5c7cf32d6e01341c7283fdb5ad6edc07fd4d4fa53d6f15b8d19e46b2e72"} Jan 21 08:01:48 crc kubenswrapper[4893]: I0121 08:01:48.060183 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-snjbv" Jan 21 08:01:48 crc kubenswrapper[4893]: I0121 08:01:48.135096 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/643747e3-f11a-4471-9b6a-c6a7bc2fc94f-catalog-content\") pod \"643747e3-f11a-4471-9b6a-c6a7bc2fc94f\" (UID: \"643747e3-f11a-4471-9b6a-c6a7bc2fc94f\") " Jan 21 08:01:48 crc kubenswrapper[4893]: I0121 08:01:48.135182 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q82nw\" (UniqueName: \"kubernetes.io/projected/643747e3-f11a-4471-9b6a-c6a7bc2fc94f-kube-api-access-q82nw\") pod \"643747e3-f11a-4471-9b6a-c6a7bc2fc94f\" (UID: \"643747e3-f11a-4471-9b6a-c6a7bc2fc94f\") " Jan 21 08:01:48 crc kubenswrapper[4893]: I0121 08:01:48.135225 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/643747e3-f11a-4471-9b6a-c6a7bc2fc94f-utilities\") pod \"643747e3-f11a-4471-9b6a-c6a7bc2fc94f\" (UID: \"643747e3-f11a-4471-9b6a-c6a7bc2fc94f\") " Jan 21 08:01:48 crc kubenswrapper[4893]: I0121 08:01:48.136222 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/643747e3-f11a-4471-9b6a-c6a7bc2fc94f-utilities" (OuterVolumeSpecName: "utilities") pod "643747e3-f11a-4471-9b6a-c6a7bc2fc94f" (UID: "643747e3-f11a-4471-9b6a-c6a7bc2fc94f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 08:01:48 crc kubenswrapper[4893]: I0121 08:01:48.146020 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/643747e3-f11a-4471-9b6a-c6a7bc2fc94f-kube-api-access-q82nw" (OuterVolumeSpecName: "kube-api-access-q82nw") pod "643747e3-f11a-4471-9b6a-c6a7bc2fc94f" (UID: "643747e3-f11a-4471-9b6a-c6a7bc2fc94f"). InnerVolumeSpecName "kube-api-access-q82nw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 08:01:48 crc kubenswrapper[4893]: I0121 08:01:48.191275 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/643747e3-f11a-4471-9b6a-c6a7bc2fc94f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "643747e3-f11a-4471-9b6a-c6a7bc2fc94f" (UID: "643747e3-f11a-4471-9b6a-c6a7bc2fc94f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 08:01:48 crc kubenswrapper[4893]: I0121 08:01:48.237973 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/643747e3-f11a-4471-9b6a-c6a7bc2fc94f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 08:01:48 crc kubenswrapper[4893]: I0121 08:01:48.238042 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q82nw\" (UniqueName: \"kubernetes.io/projected/643747e3-f11a-4471-9b6a-c6a7bc2fc94f-kube-api-access-q82nw\") on node \"crc\" DevicePath \"\"" Jan 21 08:01:48 crc kubenswrapper[4893]: I0121 08:01:48.238078 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/643747e3-f11a-4471-9b6a-c6a7bc2fc94f-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 08:01:48 crc kubenswrapper[4893]: I0121 08:01:48.963367 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-snjbv" event={"ID":"643747e3-f11a-4471-9b6a-c6a7bc2fc94f","Type":"ContainerDied","Data":"64e67f904e59f9bc64199f7e32b00848328365143f41fa98bb07d9fc62ab72ea"} Jan 21 08:01:48 crc kubenswrapper[4893]: I0121 08:01:48.964500 4893 scope.go:117] "RemoveContainer" containerID="a8ebc5c7cf32d6e01341c7283fdb5ad6edc07fd4d4fa53d6f15b8d19e46b2e72" Jan 21 08:01:48 crc kubenswrapper[4893]: I0121 08:01:48.963459 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-snjbv" Jan 21 08:01:48 crc kubenswrapper[4893]: I0121 08:01:48.991937 4893 scope.go:117] "RemoveContainer" containerID="696fd878c58166ccdecca8db12b82d524887624171a64f867f6200bc3c7319a9" Jan 21 08:01:49 crc kubenswrapper[4893]: I0121 08:01:49.017305 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-snjbv"] Jan 21 08:01:49 crc kubenswrapper[4893]: I0121 08:01:49.025019 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-snjbv"] Jan 21 08:01:49 crc kubenswrapper[4893]: I0121 08:01:49.321603 4893 scope.go:117] "RemoveContainer" containerID="675be1dd5c28560ac880823c022e01dc1a67a9be898939b0f967e84afdca7e07" Jan 21 08:01:49 crc kubenswrapper[4893]: I0121 08:01:49.595246 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="643747e3-f11a-4471-9b6a-c6a7bc2fc94f" path="/var/lib/kubelet/pods/643747e3-f11a-4471-9b6a-c6a7bc2fc94f/volumes" Jan 21 08:02:01 crc kubenswrapper[4893]: I0121 08:02:01.580855 4893 scope.go:117] "RemoveContainer" containerID="3a29a42815f4797738f166cd1b041d77da5e111165ef9f7740649e18b4757494" Jan 21 08:02:01 crc kubenswrapper[4893]: E0121 08:02:01.581371 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 08:02:12 crc kubenswrapper[4893]: I0121 08:02:12.580704 4893 scope.go:117] "RemoveContainer" containerID="3a29a42815f4797738f166cd1b041d77da5e111165ef9f7740649e18b4757494" Jan 21 08:02:12 crc kubenswrapper[4893]: E0121 08:02:12.581387 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 08:02:23 crc kubenswrapper[4893]: I0121 08:02:23.581827 4893 scope.go:117] "RemoveContainer" containerID="3a29a42815f4797738f166cd1b041d77da5e111165ef9f7740649e18b4757494" Jan 21 08:02:23 crc kubenswrapper[4893]: E0121 08:02:23.582811 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 08:02:38 crc kubenswrapper[4893]: I0121 08:02:38.615930 4893 scope.go:117] "RemoveContainer" containerID="3a29a42815f4797738f166cd1b041d77da5e111165ef9f7740649e18b4757494" Jan 21 08:02:38 crc kubenswrapper[4893]: E0121 08:02:38.616657 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 08:02:51 crc kubenswrapper[4893]: I0121 08:02:51.583529 4893 scope.go:117] "RemoveContainer" containerID="3a29a42815f4797738f166cd1b041d77da5e111165ef9f7740649e18b4757494" Jan 21 08:02:51 crc kubenswrapper[4893]: E0121 08:02:51.584899 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" Jan 21 08:03:05 crc kubenswrapper[4893]: I0121 08:03:05.584203 4893 scope.go:117] "RemoveContainer" containerID="3a29a42815f4797738f166cd1b041d77da5e111165ef9f7740649e18b4757494" Jan 21 08:03:06 crc kubenswrapper[4893]: I0121 08:03:06.287560 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" event={"ID":"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a","Type":"ContainerStarted","Data":"0479fac520ff870acbdf68359843274c2821e0b4438b7a32d13a8f0aa760359b"} Jan 21 08:03:34 crc kubenswrapper[4893]: I0121 08:03:34.128446 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9qsr4"] Jan 21 08:03:34 crc kubenswrapper[4893]: E0121 08:03:34.129405 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="643747e3-f11a-4471-9b6a-c6a7bc2fc94f" containerName="extract-utilities" Jan 21 08:03:34 crc kubenswrapper[4893]: I0121 08:03:34.129445 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="643747e3-f11a-4471-9b6a-c6a7bc2fc94f" containerName="extract-utilities" Jan 21 08:03:34 crc kubenswrapper[4893]: E0121 08:03:34.129485 4893 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="643747e3-f11a-4471-9b6a-c6a7bc2fc94f" containerName="registry-server" Jan 21 08:03:34 crc kubenswrapper[4893]: I0121 08:03:34.129494 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="643747e3-f11a-4471-9b6a-c6a7bc2fc94f" containerName="registry-server" Jan 21 08:03:34 crc kubenswrapper[4893]: E0121 08:03:34.129511 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="643747e3-f11a-4471-9b6a-c6a7bc2fc94f" containerName="extract-content" Jan 21 08:03:34 crc kubenswrapper[4893]: I0121 08:03:34.129519 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="643747e3-f11a-4471-9b6a-c6a7bc2fc94f" containerName="extract-content" Jan 21 08:03:34 crc kubenswrapper[4893]: I0121 08:03:34.129775 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="643747e3-f11a-4471-9b6a-c6a7bc2fc94f" containerName="registry-server" Jan 21 08:03:34 crc kubenswrapper[4893]: I0121 08:03:34.131137 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9qsr4" Jan 21 08:03:34 crc kubenswrapper[4893]: I0121 08:03:34.163082 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9qsr4"] Jan 21 08:03:34 crc kubenswrapper[4893]: I0121 08:03:34.234100 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg72c\" (UniqueName: \"kubernetes.io/projected/f1e51116-de23-4129-95fd-06c3b0eac154-kube-api-access-mg72c\") pod \"redhat-marketplace-9qsr4\" (UID: \"f1e51116-de23-4129-95fd-06c3b0eac154\") " pod="openshift-marketplace/redhat-marketplace-9qsr4" Jan 21 08:03:34 crc kubenswrapper[4893]: I0121 08:03:34.234200 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1e51116-de23-4129-95fd-06c3b0eac154-catalog-content\") pod \"redhat-marketplace-9qsr4\" (UID: \"f1e51116-de23-4129-95fd-06c3b0eac154\") " pod="openshift-marketplace/redhat-marketplace-9qsr4" Jan 21 08:03:34 crc kubenswrapper[4893]: I0121 08:03:34.234308 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1e51116-de23-4129-95fd-06c3b0eac154-utilities\") pod \"redhat-marketplace-9qsr4\" (UID: \"f1e51116-de23-4129-95fd-06c3b0eac154\") " pod="openshift-marketplace/redhat-marketplace-9qsr4" Jan 21 08:03:34 crc kubenswrapper[4893]: I0121 08:03:34.336811 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mg72c\" (UniqueName: \"kubernetes.io/projected/f1e51116-de23-4129-95fd-06c3b0eac154-kube-api-access-mg72c\") pod \"redhat-marketplace-9qsr4\" (UID: \"f1e51116-de23-4129-95fd-06c3b0eac154\") " pod="openshift-marketplace/redhat-marketplace-9qsr4" Jan 21 08:03:34 crc kubenswrapper[4893]: I0121 08:03:34.336913 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1e51116-de23-4129-95fd-06c3b0eac154-catalog-content\") pod \"redhat-marketplace-9qsr4\" (UID: \"f1e51116-de23-4129-95fd-06c3b0eac154\") " pod="openshift-marketplace/redhat-marketplace-9qsr4" Jan 21 08:03:34 crc kubenswrapper[4893]: I0121 08:03:34.336969 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1e51116-de23-4129-95fd-06c3b0eac154-utilities\") pod 
\"redhat-marketplace-9qsr4\" (UID: \"f1e51116-de23-4129-95fd-06c3b0eac154\") " pod="openshift-marketplace/redhat-marketplace-9qsr4" Jan 21 08:03:34 crc kubenswrapper[4893]: I0121 08:03:34.337715 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1e51116-de23-4129-95fd-06c3b0eac154-utilities\") pod \"redhat-marketplace-9qsr4\" (UID: \"f1e51116-de23-4129-95fd-06c3b0eac154\") " pod="openshift-marketplace/redhat-marketplace-9qsr4" Jan 21 08:03:34 crc kubenswrapper[4893]: I0121 08:03:34.338291 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1e51116-de23-4129-95fd-06c3b0eac154-catalog-content\") pod \"redhat-marketplace-9qsr4\" (UID: \"f1e51116-de23-4129-95fd-06c3b0eac154\") " pod="openshift-marketplace/redhat-marketplace-9qsr4" Jan 21 08:03:34 crc kubenswrapper[4893]: I0121 08:03:34.396291 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mg72c\" (UniqueName: \"kubernetes.io/projected/f1e51116-de23-4129-95fd-06c3b0eac154-kube-api-access-mg72c\") pod \"redhat-marketplace-9qsr4\" (UID: \"f1e51116-de23-4129-95fd-06c3b0eac154\") " pod="openshift-marketplace/redhat-marketplace-9qsr4" Jan 21 08:03:34 crc kubenswrapper[4893]: I0121 08:03:34.459553 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9qsr4" Jan 21 08:03:34 crc kubenswrapper[4893]: I0121 08:03:34.736585 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9qsr4"] Jan 21 08:03:34 crc kubenswrapper[4893]: W0121 08:03:34.743432 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1e51116_de23_4129_95fd_06c3b0eac154.slice/crio-05c2b7225d30931e74d725efd52cbadd63f41687f84bb7d7bd166f840a035ba3 WatchSource:0}: Error finding container 05c2b7225d30931e74d725efd52cbadd63f41687f84bb7d7bd166f840a035ba3: Status 404 returned error can't find the container with id 05c2b7225d30931e74d725efd52cbadd63f41687f84bb7d7bd166f840a035ba3 Jan 21 08:03:35 crc kubenswrapper[4893]: I0121 08:03:35.594776 4893 generic.go:334] "Generic (PLEG): container finished" podID="f1e51116-de23-4129-95fd-06c3b0eac154" containerID="a69c56239f911522a1e89525260361f51a966658acd5a1b08f0e59bf1de11237" exitCode=0 Jan 21 08:03:35 crc kubenswrapper[4893]: I0121 08:03:35.596869 4893 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 08:03:35 crc kubenswrapper[4893]: I0121 08:03:35.602326 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9qsr4" event={"ID":"f1e51116-de23-4129-95fd-06c3b0eac154","Type":"ContainerDied","Data":"a69c56239f911522a1e89525260361f51a966658acd5a1b08f0e59bf1de11237"} Jan 21 08:03:35 crc kubenswrapper[4893]: I0121 08:03:35.602395 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9qsr4" event={"ID":"f1e51116-de23-4129-95fd-06c3b0eac154","Type":"ContainerStarted","Data":"05c2b7225d30931e74d725efd52cbadd63f41687f84bb7d7bd166f840a035ba3"} Jan 21 08:03:36 crc kubenswrapper[4893]: I0121 08:03:36.602191 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9qsr4" 
event={"ID":"f1e51116-de23-4129-95fd-06c3b0eac154","Type":"ContainerStarted","Data":"21e5e723596f4404d1c61166b300334d44278949ac304512619e401fb6729d43"} Jan 21 08:03:37 crc kubenswrapper[4893]: I0121 08:03:37.705107 4893 generic.go:334] "Generic (PLEG): container finished" podID="f1e51116-de23-4129-95fd-06c3b0eac154" containerID="21e5e723596f4404d1c61166b300334d44278949ac304512619e401fb6729d43" exitCode=0 Jan 21 08:03:37 crc kubenswrapper[4893]: I0121 08:03:37.705171 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9qsr4" event={"ID":"f1e51116-de23-4129-95fd-06c3b0eac154","Type":"ContainerDied","Data":"21e5e723596f4404d1c61166b300334d44278949ac304512619e401fb6729d43"} Jan 21 08:03:38 crc kubenswrapper[4893]: I0121 08:03:38.713446 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9qsr4" event={"ID":"f1e51116-de23-4129-95fd-06c3b0eac154","Type":"ContainerStarted","Data":"f90141e0a40b0f49724f2975d9c8f54896497af55c299730a8b3898d594137b7"} Jan 21 08:03:38 crc kubenswrapper[4893]: I0121 08:03:38.855041 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9qsr4" podStartSLOduration=2.213301485 podStartE2EDuration="4.855016371s" podCreationTimestamp="2026-01-21 08:03:34 +0000 UTC" firstStartedPulling="2026-01-21 08:03:35.596478933 +0000 UTC m=+4156.826824835" lastFinishedPulling="2026-01-21 08:03:38.238193819 +0000 UTC m=+4159.468539721" observedRunningTime="2026-01-21 08:03:38.852589393 +0000 UTC m=+4160.082935295" watchObservedRunningTime="2026-01-21 08:03:38.855016371 +0000 UTC m=+4160.085362273" Jan 21 08:03:44 crc kubenswrapper[4893]: I0121 08:03:44.459855 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9qsr4" Jan 21 08:03:44 crc kubenswrapper[4893]: I0121 08:03:44.460387 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9qsr4" Jan 21 08:03:44 crc kubenswrapper[4893]: I0121 08:03:44.516967 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9qsr4" Jan 21 08:03:44 crc kubenswrapper[4893]: I0121 08:03:44.823359 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9qsr4" Jan 21 08:03:44 crc kubenswrapper[4893]: I0121 08:03:44.889484 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9qsr4"] Jan 21 08:03:46 crc kubenswrapper[4893]: I0121 08:03:46.782952 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9qsr4" podUID="f1e51116-de23-4129-95fd-06c3b0eac154" containerName="registry-server" containerID="cri-o://f90141e0a40b0f49724f2975d9c8f54896497af55c299730a8b3898d594137b7" gracePeriod=2 Jan 21 08:03:47 crc kubenswrapper[4893]: I0121 08:03:47.210501 4893 util.go:48] "No ready sandbox for pod can be found. 
Jan 21 08:03:47 crc kubenswrapper[4893]: I0121 08:03:47.326582 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1e51116-de23-4129-95fd-06c3b0eac154-utilities\") pod \"f1e51116-de23-4129-95fd-06c3b0eac154\" (UID: \"f1e51116-de23-4129-95fd-06c3b0eac154\") "
Jan 21 08:03:47 crc kubenswrapper[4893]: I0121 08:03:47.326719 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg72c\" (UniqueName: \"kubernetes.io/projected/f1e51116-de23-4129-95fd-06c3b0eac154-kube-api-access-mg72c\") pod \"f1e51116-de23-4129-95fd-06c3b0eac154\" (UID: \"f1e51116-de23-4129-95fd-06c3b0eac154\") "
Jan 21 08:03:47 crc kubenswrapper[4893]: I0121 08:03:47.326808 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1e51116-de23-4129-95fd-06c3b0eac154-catalog-content\") pod \"f1e51116-de23-4129-95fd-06c3b0eac154\" (UID: \"f1e51116-de23-4129-95fd-06c3b0eac154\") "
Jan 21 08:03:47 crc kubenswrapper[4893]: I0121 08:03:47.420958 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1e51116-de23-4129-95fd-06c3b0eac154-utilities" (OuterVolumeSpecName: "utilities") pod "f1e51116-de23-4129-95fd-06c3b0eac154" (UID: "f1e51116-de23-4129-95fd-06c3b0eac154"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 08:03:47 crc kubenswrapper[4893]: I0121 08:03:47.421077 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1e51116-de23-4129-95fd-06c3b0eac154-kube-api-access-mg72c" (OuterVolumeSpecName: "kube-api-access-mg72c") pod "f1e51116-de23-4129-95fd-06c3b0eac154" (UID: "f1e51116-de23-4129-95fd-06c3b0eac154"). InnerVolumeSpecName "kube-api-access-mg72c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 08:03:47 crc kubenswrapper[4893]: I0121 08:03:47.428384 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg72c\" (UniqueName: \"kubernetes.io/projected/f1e51116-de23-4129-95fd-06c3b0eac154-kube-api-access-mg72c\") on node \"crc\" DevicePath \"\""
Jan 21 08:03:47 crc kubenswrapper[4893]: I0121 08:03:47.428420 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1e51116-de23-4129-95fd-06c3b0eac154-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 08:03:47 crc kubenswrapper[4893]: I0121 08:03:47.441797 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1e51116-de23-4129-95fd-06c3b0eac154-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f1e51116-de23-4129-95fd-06c3b0eac154" (UID: "f1e51116-de23-4129-95fd-06c3b0eac154"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 08:03:47 crc kubenswrapper[4893]: I0121 08:03:47.529983 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1e51116-de23-4129-95fd-06c3b0eac154-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 08:03:47 crc kubenswrapper[4893]: I0121 08:03:47.794790 4893 generic.go:334] "Generic (PLEG): container finished" podID="f1e51116-de23-4129-95fd-06c3b0eac154" containerID="f90141e0a40b0f49724f2975d9c8f54896497af55c299730a8b3898d594137b7" exitCode=0
Jan 21 08:03:47 crc kubenswrapper[4893]: I0121 08:03:47.794856 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9qsr4" event={"ID":"f1e51116-de23-4129-95fd-06c3b0eac154","Type":"ContainerDied","Data":"f90141e0a40b0f49724f2975d9c8f54896497af55c299730a8b3898d594137b7"}
Jan 21 08:03:47 crc kubenswrapper[4893]: I0121 08:03:47.794908 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9qsr4" event={"ID":"f1e51116-de23-4129-95fd-06c3b0eac154","Type":"ContainerDied","Data":"05c2b7225d30931e74d725efd52cbadd63f41687f84bb7d7bd166f840a035ba3"}
Jan 21 08:03:47 crc kubenswrapper[4893]: I0121 08:03:47.794911 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9qsr4"
Jan 21 08:03:47 crc kubenswrapper[4893]: I0121 08:03:47.794936 4893 scope.go:117] "RemoveContainer" containerID="f90141e0a40b0f49724f2975d9c8f54896497af55c299730a8b3898d594137b7"
Jan 21 08:03:47 crc kubenswrapper[4893]: I0121 08:03:47.823547 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9qsr4"]
Jan 21 08:03:47 crc kubenswrapper[4893]: I0121 08:03:47.829182 4893 scope.go:117] "RemoveContainer" containerID="21e5e723596f4404d1c61166b300334d44278949ac304512619e401fb6729d43"
Jan 21 08:03:47 crc kubenswrapper[4893]: I0121 08:03:47.834175 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9qsr4"]
Jan 21 08:03:47 crc kubenswrapper[4893]: I0121 08:03:47.845660 4893 scope.go:117] "RemoveContainer" containerID="a69c56239f911522a1e89525260361f51a966658acd5a1b08f0e59bf1de11237"
Jan 21 08:03:47 crc kubenswrapper[4893]: I0121 08:03:47.883794 4893 scope.go:117] "RemoveContainer" containerID="f90141e0a40b0f49724f2975d9c8f54896497af55c299730a8b3898d594137b7"
Jan 21 08:03:47 crc kubenswrapper[4893]: E0121 08:03:47.884505 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f90141e0a40b0f49724f2975d9c8f54896497af55c299730a8b3898d594137b7\": container with ID starting with f90141e0a40b0f49724f2975d9c8f54896497af55c299730a8b3898d594137b7 not found: ID does not exist" containerID="f90141e0a40b0f49724f2975d9c8f54896497af55c299730a8b3898d594137b7"
Jan 21 08:03:47 crc kubenswrapper[4893]: I0121 08:03:47.884585 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f90141e0a40b0f49724f2975d9c8f54896497af55c299730a8b3898d594137b7"} err="failed to get container status \"f90141e0a40b0f49724f2975d9c8f54896497af55c299730a8b3898d594137b7\": rpc error: code = NotFound desc = could not find container \"f90141e0a40b0f49724f2975d9c8f54896497af55c299730a8b3898d594137b7\": container with ID starting with f90141e0a40b0f49724f2975d9c8f54896497af55c299730a8b3898d594137b7 not found: ID does not exist"
Jan 21 08:03:47 crc kubenswrapper[4893]: I0121 08:03:47.884631 4893 scope.go:117] "RemoveContainer" containerID="21e5e723596f4404d1c61166b300334d44278949ac304512619e401fb6729d43"
Jan 21 08:03:47 crc kubenswrapper[4893]: E0121 08:03:47.885157 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21e5e723596f4404d1c61166b300334d44278949ac304512619e401fb6729d43\": container with ID starting with 21e5e723596f4404d1c61166b300334d44278949ac304512619e401fb6729d43 not found: ID does not exist" containerID="21e5e723596f4404d1c61166b300334d44278949ac304512619e401fb6729d43"
Jan 21 08:03:47 crc kubenswrapper[4893]: I0121 08:03:47.885196 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21e5e723596f4404d1c61166b300334d44278949ac304512619e401fb6729d43"} err="failed to get container status \"21e5e723596f4404d1c61166b300334d44278949ac304512619e401fb6729d43\": rpc error: code = NotFound desc = could not find container \"21e5e723596f4404d1c61166b300334d44278949ac304512619e401fb6729d43\": container with ID starting with 21e5e723596f4404d1c61166b300334d44278949ac304512619e401fb6729d43 not found: ID does not exist"
Jan 21 08:03:47 crc kubenswrapper[4893]: I0121 08:03:47.885231 4893 scope.go:117] "RemoveContainer" containerID="a69c56239f911522a1e89525260361f51a966658acd5a1b08f0e59bf1de11237"
Jan 21 08:03:47 crc kubenswrapper[4893]: E0121 08:03:47.885541 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a69c56239f911522a1e89525260361f51a966658acd5a1b08f0e59bf1de11237\": container with ID starting with a69c56239f911522a1e89525260361f51a966658acd5a1b08f0e59bf1de11237 not found: ID does not exist" containerID="a69c56239f911522a1e89525260361f51a966658acd5a1b08f0e59bf1de11237"
Jan 21 08:03:47 crc kubenswrapper[4893]: I0121 08:03:47.885564 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a69c56239f911522a1e89525260361f51a966658acd5a1b08f0e59bf1de11237"} err="failed to get container status \"a69c56239f911522a1e89525260361f51a966658acd5a1b08f0e59bf1de11237\": rpc error: code = NotFound desc = could not find container \"a69c56239f911522a1e89525260361f51a966658acd5a1b08f0e59bf1de11237\": container with ID starting with a69c56239f911522a1e89525260361f51a966658acd5a1b08f0e59bf1de11237 not found: ID does not exist"
Jan 21 08:03:49 crc kubenswrapper[4893]: I0121 08:03:49.595704 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1e51116-de23-4129-95fd-06c3b0eac154" path="/var/lib/kubelet/pods/f1e51116-de23-4129-95fd-06c3b0eac154/volumes"
Jan 21 08:05:28 crc kubenswrapper[4893]: I0121 08:05:28.656910 4893 patch_prober.go:28] interesting pod/machine-config-daemon-hg78p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 08:05:28 crc kubenswrapper[4893]: I0121 08:05:28.657430 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 08:05:58 crc kubenswrapper[4893]: I0121 08:05:58.656521 4893 patch_prober.go:28] interesting pod/machine-config-daemon-hg78p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 08:05:58 crc kubenswrapper[4893]: I0121 08:05:58.657615 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 08:06:28 crc kubenswrapper[4893]: I0121 08:06:28.656895 4893 patch_prober.go:28] interesting pod/machine-config-daemon-hg78p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 08:06:28 crc kubenswrapper[4893]: I0121 08:06:28.657587 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 08:06:28 crc kubenswrapper[4893]: I0121 08:06:28.657699 4893 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hg78p"
Jan 21 08:06:28 crc kubenswrapper[4893]: I0121 08:06:28.658482 4893 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0479fac520ff870acbdf68359843274c2821e0b4438b7a32d13a8f0aa760359b"} pod="openshift-machine-config-operator/machine-config-daemon-hg78p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 08:06:28 crc kubenswrapper[4893]: I0121 08:06:28.658560 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" containerID="cri-o://0479fac520ff870acbdf68359843274c2821e0b4438b7a32d13a8f0aa760359b" gracePeriod=600
Jan 21 08:06:28 crc kubenswrapper[4893]: I0121 08:06:28.816261 4893 generic.go:334] "Generic (PLEG): container finished" podID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerID="0479fac520ff870acbdf68359843274c2821e0b4438b7a32d13a8f0aa760359b" exitCode=0
Jan 21 08:06:28 crc kubenswrapper[4893]: I0121 08:06:28.816621 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" event={"ID":"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a","Type":"ContainerDied","Data":"0479fac520ff870acbdf68359843274c2821e0b4438b7a32d13a8f0aa760359b"}
Jan 21 08:06:28 crc kubenswrapper[4893]: I0121 08:06:28.816814 4893 scope.go:117] "RemoveContainer" containerID="3a29a42815f4797738f166cd1b041d77da5e111165ef9f7740649e18b4757494"
Jan 21 08:06:29 crc kubenswrapper[4893]: I0121 08:06:29.827240 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" event={"ID":"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a","Type":"ContainerStarted","Data":"0052d1f7e58f44004abee737ec7caeac44cddf9ae6d519b81384ca77057b0dc9"}
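[Editor's note] The prober.go entries above all fail the same way: connection refused on the daemon's health endpoint, with the third consecutive failure (08:05:28, 08:05:58, 08:06:28) followed by a restart, which is consistent with the default liveness failureThreshold of 3. A minimal Python sketch of the equivalent check from the node, assuming only the URL the kubelet itself logs (illustrative, not part of the log):

    import urllib.request

    # The endpoint the kubelet's liveness probe is hitting, per the entries above.
    URL = "http://127.0.0.1:8798/health"

    try:
        with urllib.request.urlopen(URL, timeout=1) as resp:
            print(resp.status)  # an HTTP probe succeeds on any 2xx/3xx status
    except OSError as exc:      # "connection refused" surfaces here
        print(f"probe failed: {exc}")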
Jan 21 08:08:58 crc kubenswrapper[4893]: I0121 08:08:58.657331 4893 patch_prober.go:28] interesting pod/machine-config-daemon-hg78p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 08:08:58 crc kubenswrapper[4893]: I0121 08:08:58.657996 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 08:09:28 crc kubenswrapper[4893]: I0121 08:09:28.656654 4893 patch_prober.go:28] interesting pod/machine-config-daemon-hg78p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 08:09:28 crc kubenswrapper[4893]: I0121 08:09:28.657359 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 08:09:48 crc kubenswrapper[4893]: I0121 08:09:48.875432 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fbkn7"]
Jan 21 08:09:48 crc kubenswrapper[4893]: E0121 08:09:48.876339 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1e51116-de23-4129-95fd-06c3b0eac154" containerName="extract-content"
Jan 21 08:09:48 crc kubenswrapper[4893]: I0121 08:09:48.876352 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1e51116-de23-4129-95fd-06c3b0eac154" containerName="extract-content"
Jan 21 08:09:48 crc kubenswrapper[4893]: E0121 08:09:48.876386 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1e51116-de23-4129-95fd-06c3b0eac154" containerName="extract-utilities"
Jan 21 08:09:48 crc kubenswrapper[4893]: I0121 08:09:48.876392 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1e51116-de23-4129-95fd-06c3b0eac154" containerName="extract-utilities"
Jan 21 08:09:48 crc kubenswrapper[4893]: E0121 08:09:48.876400 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1e51116-de23-4129-95fd-06c3b0eac154" containerName="registry-server"
Jan 21 08:09:48 crc kubenswrapper[4893]: I0121 08:09:48.876407 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1e51116-de23-4129-95fd-06c3b0eac154" containerName="registry-server"
Jan 21 08:09:48 crc kubenswrapper[4893]: I0121 08:09:48.876628 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1e51116-de23-4129-95fd-06c3b0eac154" containerName="registry-server"
Jan 21 08:09:48 crc kubenswrapper[4893]: I0121 08:09:48.878171 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fbkn7"
Jan 21 08:09:48 crc kubenswrapper[4893]: I0121 08:09:48.884980 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fbkn7"]
Jan 21 08:09:48 crc kubenswrapper[4893]: I0121 08:09:48.973756 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdadeb7e-f245-4e20-a49e-a598d23cbcb1-utilities\") pod \"redhat-operators-fbkn7\" (UID: \"bdadeb7e-f245-4e20-a49e-a598d23cbcb1\") " pod="openshift-marketplace/redhat-operators-fbkn7"
Jan 21 08:09:48 crc kubenswrapper[4893]: I0121 08:09:48.973817 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vztv\" (UniqueName: \"kubernetes.io/projected/bdadeb7e-f245-4e20-a49e-a598d23cbcb1-kube-api-access-4vztv\") pod \"redhat-operators-fbkn7\" (UID: \"bdadeb7e-f245-4e20-a49e-a598d23cbcb1\") " pod="openshift-marketplace/redhat-operators-fbkn7"
Jan 21 08:09:48 crc kubenswrapper[4893]: I0121 08:09:48.973929 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdadeb7e-f245-4e20-a49e-a598d23cbcb1-catalog-content\") pod \"redhat-operators-fbkn7\" (UID: \"bdadeb7e-f245-4e20-a49e-a598d23cbcb1\") " pod="openshift-marketplace/redhat-operators-fbkn7"
Jan 21 08:09:49 crc kubenswrapper[4893]: I0121 08:09:49.075088 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdadeb7e-f245-4e20-a49e-a598d23cbcb1-utilities\") pod \"redhat-operators-fbkn7\" (UID: \"bdadeb7e-f245-4e20-a49e-a598d23cbcb1\") " pod="openshift-marketplace/redhat-operators-fbkn7"
Jan 21 08:09:49 crc kubenswrapper[4893]: I0121 08:09:49.075151 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vztv\" (UniqueName: \"kubernetes.io/projected/bdadeb7e-f245-4e20-a49e-a598d23cbcb1-kube-api-access-4vztv\") pod \"redhat-operators-fbkn7\" (UID: \"bdadeb7e-f245-4e20-a49e-a598d23cbcb1\") " pod="openshift-marketplace/redhat-operators-fbkn7"
Jan 21 08:09:49 crc kubenswrapper[4893]: I0121 08:09:49.075244 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdadeb7e-f245-4e20-a49e-a598d23cbcb1-catalog-content\") pod \"redhat-operators-fbkn7\" (UID: \"bdadeb7e-f245-4e20-a49e-a598d23cbcb1\") " pod="openshift-marketplace/redhat-operators-fbkn7"
Jan 21 08:09:49 crc kubenswrapper[4893]: I0121 08:09:49.075727 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdadeb7e-f245-4e20-a49e-a598d23cbcb1-utilities\") pod \"redhat-operators-fbkn7\" (UID: \"bdadeb7e-f245-4e20-a49e-a598d23cbcb1\") " pod="openshift-marketplace/redhat-operators-fbkn7"
Jan 21 08:09:49 crc kubenswrapper[4893]: I0121 08:09:49.075783 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdadeb7e-f245-4e20-a49e-a598d23cbcb1-catalog-content\") pod \"redhat-operators-fbkn7\" (UID: \"bdadeb7e-f245-4e20-a49e-a598d23cbcb1\") " pod="openshift-marketplace/redhat-operators-fbkn7"
Jan 21 08:09:49 crc kubenswrapper[4893]: I0121 08:09:49.101925 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vztv\" (UniqueName: \"kubernetes.io/projected/bdadeb7e-f245-4e20-a49e-a598d23cbcb1-kube-api-access-4vztv\") pod \"redhat-operators-fbkn7\" (UID: \"bdadeb7e-f245-4e20-a49e-a598d23cbcb1\") " pod="openshift-marketplace/redhat-operators-fbkn7"
Jan 21 08:09:49 crc kubenswrapper[4893]: I0121 08:09:49.219010 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fbkn7"
Jan 21 08:09:49 crc kubenswrapper[4893]: I0121 08:09:49.463851 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fbkn7"]
Jan 21 08:09:50 crc kubenswrapper[4893]: I0121 08:09:50.364857 4893 generic.go:334] "Generic (PLEG): container finished" podID="bdadeb7e-f245-4e20-a49e-a598d23cbcb1" containerID="dd0f32bb49b867103bb255084eb7e561a8e749188431b0b55073587cdf604026" exitCode=0
Jan 21 08:09:50 crc kubenswrapper[4893]: I0121 08:09:50.365189 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbkn7" event={"ID":"bdadeb7e-f245-4e20-a49e-a598d23cbcb1","Type":"ContainerDied","Data":"dd0f32bb49b867103bb255084eb7e561a8e749188431b0b55073587cdf604026"}
Jan 21 08:09:50 crc kubenswrapper[4893]: I0121 08:09:50.365222 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbkn7" event={"ID":"bdadeb7e-f245-4e20-a49e-a598d23cbcb1","Type":"ContainerStarted","Data":"b32e550c7c0173d7df58d9dd08004285050a21259eed2e56cdbbd963ba98703a"}
Jan 21 08:09:50 crc kubenswrapper[4893]: I0121 08:09:50.367380 4893 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 08:09:51 crc kubenswrapper[4893]: I0121 08:09:51.375431 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbkn7" event={"ID":"bdadeb7e-f245-4e20-a49e-a598d23cbcb1","Type":"ContainerStarted","Data":"3df9bffdcfaf15717cde292bc8a32ca9cc0cae3be67eb7d0cd81c00d93bb2465"}
Jan 21 08:09:52 crc kubenswrapper[4893]: I0121 08:09:52.384733 4893 generic.go:334] "Generic (PLEG): container finished" podID="bdadeb7e-f245-4e20-a49e-a598d23cbcb1" containerID="3df9bffdcfaf15717cde292bc8a32ca9cc0cae3be67eb7d0cd81c00d93bb2465" exitCode=0
Jan 21 08:09:52 crc kubenswrapper[4893]: I0121 08:09:52.384799 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbkn7" event={"ID":"bdadeb7e-f245-4e20-a49e-a598d23cbcb1","Type":"ContainerDied","Data":"3df9bffdcfaf15717cde292bc8a32ca9cc0cae3be67eb7d0cd81c00d93bb2465"}
Jan 21 08:09:53 crc kubenswrapper[4893]: I0121 08:09:53.398364 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbkn7" event={"ID":"bdadeb7e-f245-4e20-a49e-a598d23cbcb1","Type":"ContainerStarted","Data":"5713f7b76c4ba69debf088d97fe187aadb7b854598f82ee06faa46ca0804c6ca"}
Jan 21 08:09:53 crc kubenswrapper[4893]: I0121 08:09:53.418385 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fbkn7" podStartSLOduration=2.798791272 podStartE2EDuration="5.418363419s" podCreationTimestamp="2026-01-21 08:09:48 +0000 UTC" firstStartedPulling="2026-01-21 08:09:50.367131416 +0000 UTC m=+4531.597477308" lastFinishedPulling="2026-01-21 08:09:52.986703553 +0000 UTC m=+4534.217049455" observedRunningTime="2026-01-21 08:09:53.414039748 +0000 UTC m=+4534.644385660" watchObservedRunningTime="2026-01-21 08:09:53.418363419 +0000 UTC m=+4534.648709321"
Jan 21 08:09:58 crc kubenswrapper[4893]: I0121 08:09:58.656558 4893 patch_prober.go:28] interesting pod/machine-config-daemon-hg78p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 08:09:58 crc kubenswrapper[4893]: I0121 08:09:58.657107 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 08:09:58 crc kubenswrapper[4893]: I0121 08:09:58.657173 4893 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hg78p"
Jan 21 08:09:58 crc kubenswrapper[4893]: I0121 08:09:58.658013 4893 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0052d1f7e58f44004abee737ec7caeac44cddf9ae6d519b81384ca77057b0dc9"} pod="openshift-machine-config-operator/machine-config-daemon-hg78p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 08:09:58 crc kubenswrapper[4893]: I0121 08:09:58.658121 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerName="machine-config-daemon" containerID="cri-o://0052d1f7e58f44004abee737ec7caeac44cddf9ae6d519b81384ca77057b0dc9" gracePeriod=600
Jan 21 08:09:58 crc kubenswrapper[4893]: E0121 08:09:58.797584 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a"
Jan 21 08:09:59 crc kubenswrapper[4893]: I0121 08:09:59.219949 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fbkn7"
Jan 21 08:09:59 crc kubenswrapper[4893]: I0121 08:09:59.220062 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fbkn7"
Jan 21 08:09:59 crc kubenswrapper[4893]: I0121 08:09:59.274888 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fbkn7"
Jan 21 08:09:59 crc kubenswrapper[4893]: I0121 08:09:59.448615 4893 generic.go:334] "Generic (PLEG): container finished" podID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a" containerID="0052d1f7e58f44004abee737ec7caeac44cddf9ae6d519b81384ca77057b0dc9" exitCode=0
Jan 21 08:09:59 crc kubenswrapper[4893]: I0121 08:09:59.448724 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" event={"ID":"ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a","Type":"ContainerDied","Data":"0052d1f7e58f44004abee737ec7caeac44cddf9ae6d519b81384ca77057b0dc9"}
Jan 21 08:09:59 crc kubenswrapper[4893]: I0121 08:09:59.449037 4893 scope.go:117] "RemoveContainer" containerID="0479fac520ff870acbdf68359843274c2821e0b4438b7a32d13a8f0aa760359b"
Jan 21 08:09:59 crc kubenswrapper[4893]: I0121 08:09:59.450227 4893 scope.go:117] "RemoveContainer" containerID="0052d1f7e58f44004abee737ec7caeac44cddf9ae6d519b81384ca77057b0dc9"
Jan 21 08:09:59 crc kubenswrapper[4893]: E0121 08:09:59.450508 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a"
Jan 21 08:09:59 crc kubenswrapper[4893]: I0121 08:09:59.525367 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fbkn7"
Jan 21 08:09:59 crc kubenswrapper[4893]: I0121 08:09:59.589320 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fbkn7"]
Jan 21 08:10:01 crc kubenswrapper[4893]: I0121 08:10:01.472741 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fbkn7" podUID="bdadeb7e-f245-4e20-a49e-a598d23cbcb1" containerName="registry-server" containerID="cri-o://5713f7b76c4ba69debf088d97fe187aadb7b854598f82ee06faa46ca0804c6ca" gracePeriod=2
Jan 21 08:10:01 crc kubenswrapper[4893]: I0121 08:10:01.879272 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fbkn7"
Jan 21 08:10:02 crc kubenswrapper[4893]: I0121 08:10:02.041116 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vztv\" (UniqueName: \"kubernetes.io/projected/bdadeb7e-f245-4e20-a49e-a598d23cbcb1-kube-api-access-4vztv\") pod \"bdadeb7e-f245-4e20-a49e-a598d23cbcb1\" (UID: \"bdadeb7e-f245-4e20-a49e-a598d23cbcb1\") "
Jan 21 08:10:02 crc kubenswrapper[4893]: I0121 08:10:02.041187 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdadeb7e-f245-4e20-a49e-a598d23cbcb1-catalog-content\") pod \"bdadeb7e-f245-4e20-a49e-a598d23cbcb1\" (UID: \"bdadeb7e-f245-4e20-a49e-a598d23cbcb1\") "
Jan 21 08:10:02 crc kubenswrapper[4893]: I0121 08:10:02.041244 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdadeb7e-f245-4e20-a49e-a598d23cbcb1-utilities\") pod \"bdadeb7e-f245-4e20-a49e-a598d23cbcb1\" (UID: \"bdadeb7e-f245-4e20-a49e-a598d23cbcb1\") "
Jan 21 08:10:02 crc kubenswrapper[4893]: I0121 08:10:02.046617 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdadeb7e-f245-4e20-a49e-a598d23cbcb1-utilities" (OuterVolumeSpecName: "utilities") pod "bdadeb7e-f245-4e20-a49e-a598d23cbcb1" (UID: "bdadeb7e-f245-4e20-a49e-a598d23cbcb1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 08:10:02 crc kubenswrapper[4893]: I0121 08:10:02.047988 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdadeb7e-f245-4e20-a49e-a598d23cbcb1-kube-api-access-4vztv" (OuterVolumeSpecName: "kube-api-access-4vztv") pod "bdadeb7e-f245-4e20-a49e-a598d23cbcb1" (UID: "bdadeb7e-f245-4e20-a49e-a598d23cbcb1"). InnerVolumeSpecName "kube-api-access-4vztv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 08:10:02 crc kubenswrapper[4893]: I0121 08:10:02.143427 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vztv\" (UniqueName: \"kubernetes.io/projected/bdadeb7e-f245-4e20-a49e-a598d23cbcb1-kube-api-access-4vztv\") on node \"crc\" DevicePath \"\""
Jan 21 08:10:02 crc kubenswrapper[4893]: I0121 08:10:02.143481 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdadeb7e-f245-4e20-a49e-a598d23cbcb1-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 08:10:02 crc kubenswrapper[4893]: I0121 08:10:02.483935 4893 generic.go:334] "Generic (PLEG): container finished" podID="bdadeb7e-f245-4e20-a49e-a598d23cbcb1" containerID="5713f7b76c4ba69debf088d97fe187aadb7b854598f82ee06faa46ca0804c6ca" exitCode=0
Jan 21 08:10:02 crc kubenswrapper[4893]: I0121 08:10:02.484001 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbkn7" event={"ID":"bdadeb7e-f245-4e20-a49e-a598d23cbcb1","Type":"ContainerDied","Data":"5713f7b76c4ba69debf088d97fe187aadb7b854598f82ee06faa46ca0804c6ca"}
Jan 21 08:10:02 crc kubenswrapper[4893]: I0121 08:10:02.484331 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbkn7" event={"ID":"bdadeb7e-f245-4e20-a49e-a598d23cbcb1","Type":"ContainerDied","Data":"b32e550c7c0173d7df58d9dd08004285050a21259eed2e56cdbbd963ba98703a"}
Jan 21 08:10:02 crc kubenswrapper[4893]: I0121 08:10:02.484371 4893 scope.go:117] "RemoveContainer" containerID="5713f7b76c4ba69debf088d97fe187aadb7b854598f82ee06faa46ca0804c6ca"
Jan 21 08:10:02 crc kubenswrapper[4893]: I0121 08:10:02.484019 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fbkn7"
Jan 21 08:10:02 crc kubenswrapper[4893]: I0121 08:10:02.509243 4893 scope.go:117] "RemoveContainer" containerID="3df9bffdcfaf15717cde292bc8a32ca9cc0cae3be67eb7d0cd81c00d93bb2465"
Jan 21 08:10:02 crc kubenswrapper[4893]: I0121 08:10:02.537718 4893 scope.go:117] "RemoveContainer" containerID="dd0f32bb49b867103bb255084eb7e561a8e749188431b0b55073587cdf604026"
Jan 21 08:10:02 crc kubenswrapper[4893]: I0121 08:10:02.583466 4893 scope.go:117] "RemoveContainer" containerID="5713f7b76c4ba69debf088d97fe187aadb7b854598f82ee06faa46ca0804c6ca"
Jan 21 08:10:02 crc kubenswrapper[4893]: E0121 08:10:02.584165 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5713f7b76c4ba69debf088d97fe187aadb7b854598f82ee06faa46ca0804c6ca\": container with ID starting with 5713f7b76c4ba69debf088d97fe187aadb7b854598f82ee06faa46ca0804c6ca not found: ID does not exist" containerID="5713f7b76c4ba69debf088d97fe187aadb7b854598f82ee06faa46ca0804c6ca"
Jan 21 08:10:02 crc kubenswrapper[4893]: I0121 08:10:02.584210 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5713f7b76c4ba69debf088d97fe187aadb7b854598f82ee06faa46ca0804c6ca"} err="failed to get container status \"5713f7b76c4ba69debf088d97fe187aadb7b854598f82ee06faa46ca0804c6ca\": rpc error: code = NotFound desc = could not find container \"5713f7b76c4ba69debf088d97fe187aadb7b854598f82ee06faa46ca0804c6ca\": container with ID starting with 5713f7b76c4ba69debf088d97fe187aadb7b854598f82ee06faa46ca0804c6ca not found: ID does not exist"
Jan 21 08:10:02 crc kubenswrapper[4893]: I0121 08:10:02.584240 4893 scope.go:117] "RemoveContainer" containerID="3df9bffdcfaf15717cde292bc8a32ca9cc0cae3be67eb7d0cd81c00d93bb2465"
Jan 21 08:10:02 crc kubenswrapper[4893]: E0121 08:10:02.584734 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3df9bffdcfaf15717cde292bc8a32ca9cc0cae3be67eb7d0cd81c00d93bb2465\": container with ID starting with 3df9bffdcfaf15717cde292bc8a32ca9cc0cae3be67eb7d0cd81c00d93bb2465 not found: ID does not exist" containerID="3df9bffdcfaf15717cde292bc8a32ca9cc0cae3be67eb7d0cd81c00d93bb2465"
Jan 21 08:10:02 crc kubenswrapper[4893]: I0121 08:10:02.584757 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3df9bffdcfaf15717cde292bc8a32ca9cc0cae3be67eb7d0cd81c00d93bb2465"} err="failed to get container status \"3df9bffdcfaf15717cde292bc8a32ca9cc0cae3be67eb7d0cd81c00d93bb2465\": rpc error: code = NotFound desc = could not find container \"3df9bffdcfaf15717cde292bc8a32ca9cc0cae3be67eb7d0cd81c00d93bb2465\": container with ID starting with 3df9bffdcfaf15717cde292bc8a32ca9cc0cae3be67eb7d0cd81c00d93bb2465 not found: ID does not exist"
Jan 21 08:10:02 crc kubenswrapper[4893]: I0121 08:10:02.584771 4893 scope.go:117] "RemoveContainer" containerID="dd0f32bb49b867103bb255084eb7e561a8e749188431b0b55073587cdf604026"
Jan 21 08:10:02 crc kubenswrapper[4893]: E0121 08:10:02.585114 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd0f32bb49b867103bb255084eb7e561a8e749188431b0b55073587cdf604026\": container with ID starting with dd0f32bb49b867103bb255084eb7e561a8e749188431b0b55073587cdf604026 not found: ID does not exist" containerID="dd0f32bb49b867103bb255084eb7e561a8e749188431b0b55073587cdf604026"
Jan 21 08:10:02 crc kubenswrapper[4893]: I0121 08:10:02.585241 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd0f32bb49b867103bb255084eb7e561a8e749188431b0b55073587cdf604026"} err="failed to get container status \"dd0f32bb49b867103bb255084eb7e561a8e749188431b0b55073587cdf604026\": rpc error: code = NotFound desc = could not find container \"dd0f32bb49b867103bb255084eb7e561a8e749188431b0b55073587cdf604026\": container with ID starting with dd0f32bb49b867103bb255084eb7e561a8e749188431b0b55073587cdf604026 not found: ID does not exist"
Jan 21 08:10:03 crc kubenswrapper[4893]: I0121 08:10:03.081078 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdadeb7e-f245-4e20-a49e-a598d23cbcb1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bdadeb7e-f245-4e20-a49e-a598d23cbcb1" (UID: "bdadeb7e-f245-4e20-a49e-a598d23cbcb1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 08:10:03 crc kubenswrapper[4893]: I0121 08:10:03.118005 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fbkn7"]
Jan 21 08:10:03 crc kubenswrapper[4893]: I0121 08:10:03.123090 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fbkn7"]
Jan 21 08:10:03 crc kubenswrapper[4893]: I0121 08:10:03.144696 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdadeb7e-f245-4e20-a49e-a598d23cbcb1-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 08:10:03 crc kubenswrapper[4893]: I0121 08:10:03.590307 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdadeb7e-f245-4e20-a49e-a598d23cbcb1" path="/var/lib/kubelet/pods/bdadeb7e-f245-4e20-a49e-a598d23cbcb1/volumes"
Jan 21 08:10:14 crc kubenswrapper[4893]: I0121 08:10:14.581130 4893 scope.go:117] "RemoveContainer" containerID="0052d1f7e58f44004abee737ec7caeac44cddf9ae6d519b81384ca77057b0dc9"
Jan 21 08:10:14 crc kubenswrapper[4893]: E0121 08:10:14.582172 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a"
Jan 21 08:10:17 crc kubenswrapper[4893]: I0121 08:10:17.619986 4893 generic.go:334] "Generic (PLEG): container finished" podID="f7abf9af-0ec4-4b2c-aa9d-4d37babfb5bc" containerID="7574bd9c2a80da512156d60871d9091948555fd2f6ff1ace6109a668e3ba14ab" exitCode=0
Jan 21 08:10:17 crc kubenswrapper[4893]: I0121 08:10:17.620221 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rvg8p/must-gather-rvzxg" event={"ID":"f7abf9af-0ec4-4b2c-aa9d-4d37babfb5bc","Type":"ContainerDied","Data":"7574bd9c2a80da512156d60871d9091948555fd2f6ff1ace6109a668e3ba14ab"}
Jan 21 08:10:17 crc kubenswrapper[4893]: I0121 08:10:17.621126 4893 scope.go:117] "RemoveContainer" containerID="7574bd9c2a80da512156d60871d9091948555fd2f6ff1ace6109a668e3ba14ab"
Jan 21 08:10:18 crc kubenswrapper[4893]: I0121 08:10:18.320943 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rvg8p_must-gather-rvzxg_f7abf9af-0ec4-4b2c-aa9d-4d37babfb5bc/gather/0.log"
Jan 21 08:10:26 crc kubenswrapper[4893]: I0121 08:10:26.479499 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rvg8p/must-gather-rvzxg"]
Jan 21 08:10:26 crc kubenswrapper[4893]: I0121 08:10:26.480173 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-rvg8p/must-gather-rvzxg" podUID="f7abf9af-0ec4-4b2c-aa9d-4d37babfb5bc" containerName="copy" containerID="cri-o://14c9613f22c58e05f1be22173b7c6c87a6ceb45d5e11fb7a2571c21ac3564120" gracePeriod=2
Jan 21 08:10:26 crc kubenswrapper[4893]: I0121 08:10:26.487767 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rvg8p/must-gather-rvzxg"]
Jan 21 08:10:26 crc kubenswrapper[4893]: E0121 08:10:26.717015 4893 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7abf9af_0ec4_4b2c_aa9d_4d37babfb5bc.slice/crio-conmon-14c9613f22c58e05f1be22173b7c6c87a6ceb45d5e11fb7a2571c21ac3564120.scope\": RecentStats: unable to find data in memory cache]"
Jan 21 08:10:26 crc kubenswrapper[4893]: I0121 08:10:26.728588 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rvg8p_must-gather-rvzxg_f7abf9af-0ec4-4b2c-aa9d-4d37babfb5bc/copy/0.log"
Jan 21 08:10:26 crc kubenswrapper[4893]: I0121 08:10:26.730198 4893 generic.go:334] "Generic (PLEG): container finished" podID="f7abf9af-0ec4-4b2c-aa9d-4d37babfb5bc" containerID="14c9613f22c58e05f1be22173b7c6c87a6ceb45d5e11fb7a2571c21ac3564120" exitCode=143
Jan 21 08:10:26 crc kubenswrapper[4893]: I0121 08:10:26.854356 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rvg8p_must-gather-rvzxg_f7abf9af-0ec4-4b2c-aa9d-4d37babfb5bc/copy/0.log"
Jan 21 08:10:26 crc kubenswrapper[4893]: I0121 08:10:26.854772 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rvg8p/must-gather-rvzxg"
Jan 21 08:10:27 crc kubenswrapper[4893]: I0121 08:10:27.028664 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7hvk\" (UniqueName: \"kubernetes.io/projected/f7abf9af-0ec4-4b2c-aa9d-4d37babfb5bc-kube-api-access-g7hvk\") pod \"f7abf9af-0ec4-4b2c-aa9d-4d37babfb5bc\" (UID: \"f7abf9af-0ec4-4b2c-aa9d-4d37babfb5bc\") "
Jan 21 08:10:27 crc kubenswrapper[4893]: I0121 08:10:27.029763 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f7abf9af-0ec4-4b2c-aa9d-4d37babfb5bc-must-gather-output\") pod \"f7abf9af-0ec4-4b2c-aa9d-4d37babfb5bc\" (UID: \"f7abf9af-0ec4-4b2c-aa9d-4d37babfb5bc\") "
Jan 21 08:10:27 crc kubenswrapper[4893]: I0121 08:10:27.037930 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7abf9af-0ec4-4b2c-aa9d-4d37babfb5bc-kube-api-access-g7hvk" (OuterVolumeSpecName: "kube-api-access-g7hvk") pod "f7abf9af-0ec4-4b2c-aa9d-4d37babfb5bc" (UID: "f7abf9af-0ec4-4b2c-aa9d-4d37babfb5bc"). InnerVolumeSpecName "kube-api-access-g7hvk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 08:10:27 crc kubenswrapper[4893]: I0121 08:10:27.049495 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7hvk\" (UniqueName: \"kubernetes.io/projected/f7abf9af-0ec4-4b2c-aa9d-4d37babfb5bc-kube-api-access-g7hvk\") on node \"crc\" DevicePath \"\""
Jan 21 08:10:27 crc kubenswrapper[4893]: I0121 08:10:27.154381 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7abf9af-0ec4-4b2c-aa9d-4d37babfb5bc-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "f7abf9af-0ec4-4b2c-aa9d-4d37babfb5bc" (UID: "f7abf9af-0ec4-4b2c-aa9d-4d37babfb5bc"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 08:10:27 crc kubenswrapper[4893]: I0121 08:10:27.253421 4893 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f7abf9af-0ec4-4b2c-aa9d-4d37babfb5bc-must-gather-output\") on node \"crc\" DevicePath \"\""
Jan 21 08:10:27 crc kubenswrapper[4893]: I0121 08:10:27.581540 4893 scope.go:117] "RemoveContainer" containerID="0052d1f7e58f44004abee737ec7caeac44cddf9ae6d519b81384ca77057b0dc9"
Jan 21 08:10:27 crc kubenswrapper[4893]: E0121 08:10:27.582145 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a"
Jan 21 08:10:27 crc kubenswrapper[4893]: I0121 08:10:27.592878 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7abf9af-0ec4-4b2c-aa9d-4d37babfb5bc" path="/var/lib/kubelet/pods/f7abf9af-0ec4-4b2c-aa9d-4d37babfb5bc/volumes"
Jan 21 08:10:27 crc kubenswrapper[4893]: I0121 08:10:27.738552 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rvg8p_must-gather-rvzxg_f7abf9af-0ec4-4b2c-aa9d-4d37babfb5bc/copy/0.log"
Jan 21 08:10:27 crc kubenswrapper[4893]: I0121 08:10:27.738985 4893 scope.go:117] "RemoveContainer" containerID="14c9613f22c58e05f1be22173b7c6c87a6ceb45d5e11fb7a2571c21ac3564120"
Jan 21 08:10:27 crc kubenswrapper[4893]: I0121 08:10:27.739104 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rvg8p/must-gather-rvzxg"
Jan 21 08:10:27 crc kubenswrapper[4893]: I0121 08:10:27.761496 4893 scope.go:117] "RemoveContainer" containerID="7574bd9c2a80da512156d60871d9091948555fd2f6ff1ace6109a668e3ba14ab"
Jan 21 08:10:34 crc kubenswrapper[4893]: I0121 08:10:34.218607 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rhpp2"]
Jan 21 08:10:34 crc kubenswrapper[4893]: E0121 08:10:34.219465 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdadeb7e-f245-4e20-a49e-a598d23cbcb1" containerName="extract-content"
Jan 21 08:10:34 crc kubenswrapper[4893]: I0121 08:10:34.219479 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdadeb7e-f245-4e20-a49e-a598d23cbcb1" containerName="extract-content"
Jan 21 08:10:34 crc kubenswrapper[4893]: E0121 08:10:34.219494 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdadeb7e-f245-4e20-a49e-a598d23cbcb1" containerName="registry-server"
Jan 21 08:10:34 crc kubenswrapper[4893]: I0121 08:10:34.219500 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdadeb7e-f245-4e20-a49e-a598d23cbcb1" containerName="registry-server"
Jan 21 08:10:34 crc kubenswrapper[4893]: E0121 08:10:34.219508 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7abf9af-0ec4-4b2c-aa9d-4d37babfb5bc" containerName="copy"
Jan 21 08:10:34 crc kubenswrapper[4893]: I0121 08:10:34.219514 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7abf9af-0ec4-4b2c-aa9d-4d37babfb5bc" containerName="copy"
Jan 21 08:10:34 crc kubenswrapper[4893]: E0121 08:10:34.219526 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7abf9af-0ec4-4b2c-aa9d-4d37babfb5bc" containerName="gather"
Jan 21 08:10:34 crc kubenswrapper[4893]: I0121 08:10:34.219531 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7abf9af-0ec4-4b2c-aa9d-4d37babfb5bc" containerName="gather"
Jan 21 08:10:34 crc kubenswrapper[4893]: E0121 08:10:34.219556 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdadeb7e-f245-4e20-a49e-a598d23cbcb1" containerName="extract-utilities"
Jan 21 08:10:34 crc kubenswrapper[4893]: I0121 08:10:34.219564 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdadeb7e-f245-4e20-a49e-a598d23cbcb1" containerName="extract-utilities"
Jan 21 08:10:34 crc kubenswrapper[4893]: I0121 08:10:34.219730 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7abf9af-0ec4-4b2c-aa9d-4d37babfb5bc" containerName="gather"
Jan 21 08:10:34 crc kubenswrapper[4893]: I0121 08:10:34.219743 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7abf9af-0ec4-4b2c-aa9d-4d37babfb5bc" containerName="copy"
Jan 21 08:10:34 crc kubenswrapper[4893]: I0121 08:10:34.219758 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdadeb7e-f245-4e20-a49e-a598d23cbcb1" containerName="registry-server"
Jan 21 08:10:34 crc kubenswrapper[4893]: I0121 08:10:34.220944 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rhpp2"
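[Editor's note] One detail worth decoding from the must-gather teardown above: the copy container finished with exitCode=143. By the usual convention for signal-terminated processes (128 + signal number), 143 is a SIGTERM exit, which fits the preceding "Killing container with a grace period" entry rather than a crash. A small illustrative check:

    import signal

    # Exit status of a signal-terminated process is 128 + signo,
    # so exitCode=143 above is 128 + SIGTERM (15).
    assert 128 + signal.SIGTERM.value == 143
    print("exitCode 143 == terminated by SIGTERM")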
Jan 21 08:10:34 crc kubenswrapper[4893]: I0121 08:10:34.266620 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rhpp2"]
Jan 21 08:10:34 crc kubenswrapper[4893]: I0121 08:10:34.362029 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/669b74d8-0631-4ed2-8fb4-207a92178348-catalog-content\") pod \"certified-operators-rhpp2\" (UID: \"669b74d8-0631-4ed2-8fb4-207a92178348\") " pod="openshift-marketplace/certified-operators-rhpp2"
Jan 21 08:10:34 crc kubenswrapper[4893]: I0121 08:10:34.362388 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/669b74d8-0631-4ed2-8fb4-207a92178348-utilities\") pod \"certified-operators-rhpp2\" (UID: \"669b74d8-0631-4ed2-8fb4-207a92178348\") " pod="openshift-marketplace/certified-operators-rhpp2"
Jan 21 08:10:34 crc kubenswrapper[4893]: I0121 08:10:34.362628 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccd49\" (UniqueName: \"kubernetes.io/projected/669b74d8-0631-4ed2-8fb4-207a92178348-kube-api-access-ccd49\") pod \"certified-operators-rhpp2\" (UID: \"669b74d8-0631-4ed2-8fb4-207a92178348\") " pod="openshift-marketplace/certified-operators-rhpp2"
Jan 21 08:10:34 crc kubenswrapper[4893]: I0121 08:10:34.464262 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/669b74d8-0631-4ed2-8fb4-207a92178348-catalog-content\") pod \"certified-operators-rhpp2\" (UID: \"669b74d8-0631-4ed2-8fb4-207a92178348\") " pod="openshift-marketplace/certified-operators-rhpp2"
Jan 21 08:10:34 crc kubenswrapper[4893]: I0121 08:10:34.464908 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/669b74d8-0631-4ed2-8fb4-207a92178348-utilities\") pod \"certified-operators-rhpp2\" (UID: \"669b74d8-0631-4ed2-8fb4-207a92178348\") " pod="openshift-marketplace/certified-operators-rhpp2"
Jan 21 08:10:34 crc kubenswrapper[4893]: I0121 08:10:34.465595 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/669b74d8-0631-4ed2-8fb4-207a92178348-catalog-content\") pod \"certified-operators-rhpp2\" (UID: \"669b74d8-0631-4ed2-8fb4-207a92178348\") " pod="openshift-marketplace/certified-operators-rhpp2"
Jan 21 08:10:34 crc kubenswrapper[4893]: I0121 08:10:34.465583 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/669b74d8-0631-4ed2-8fb4-207a92178348-utilities\") pod \"certified-operators-rhpp2\" (UID: \"669b74d8-0631-4ed2-8fb4-207a92178348\") " pod="openshift-marketplace/certified-operators-rhpp2"
Jan 21 08:10:34 crc kubenswrapper[4893]: I0121 08:10:34.465931 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccd49\" (UniqueName: \"kubernetes.io/projected/669b74d8-0631-4ed2-8fb4-207a92178348-kube-api-access-ccd49\") pod \"certified-operators-rhpp2\" (UID: \"669b74d8-0631-4ed2-8fb4-207a92178348\") " pod="openshift-marketplace/certified-operators-rhpp2"
Jan 21 08:10:34 crc kubenswrapper[4893]: I0121 08:10:34.485193 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccd49\" (UniqueName: \"kubernetes.io/projected/669b74d8-0631-4ed2-8fb4-207a92178348-kube-api-access-ccd49\") pod \"certified-operators-rhpp2\" (UID: \"669b74d8-0631-4ed2-8fb4-207a92178348\") " pod="openshift-marketplace/certified-operators-rhpp2"
Jan 21 08:10:34 crc kubenswrapper[4893]: I0121 08:10:34.538151 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rhpp2"
Jan 21 08:10:35 crc kubenswrapper[4893]: I0121 08:10:35.009091 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rhpp2"]
Jan 21 08:10:35 crc kubenswrapper[4893]: I0121 08:10:35.806419 4893 generic.go:334] "Generic (PLEG): container finished" podID="669b74d8-0631-4ed2-8fb4-207a92178348" containerID="93091eab2e311ee5ab3eaea1e3376e10ccf2f08163ec80edb98bea560caf0624" exitCode=0
Jan 21 08:10:35 crc kubenswrapper[4893]: I0121 08:10:35.806519 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhpp2" event={"ID":"669b74d8-0631-4ed2-8fb4-207a92178348","Type":"ContainerDied","Data":"93091eab2e311ee5ab3eaea1e3376e10ccf2f08163ec80edb98bea560caf0624"}
Jan 21 08:10:35 crc kubenswrapper[4893]: I0121 08:10:35.806777 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhpp2" event={"ID":"669b74d8-0631-4ed2-8fb4-207a92178348","Type":"ContainerStarted","Data":"37caa8c2bcc995f994e23aed756a3b9ef9e2c82daba74e2ea789336f78a84632"}
Jan 21 08:10:36 crc kubenswrapper[4893]: I0121 08:10:36.827767 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhpp2" event={"ID":"669b74d8-0631-4ed2-8fb4-207a92178348","Type":"ContainerStarted","Data":"c0a0b6f2d1482c93378cdcfa88ec40e92429e75e679e52d8f30dabbfebeae2bc"}
Jan 21 08:10:37 crc kubenswrapper[4893]: I0121 08:10:37.838388 4893 generic.go:334] "Generic (PLEG): container finished" podID="669b74d8-0631-4ed2-8fb4-207a92178348" containerID="c0a0b6f2d1482c93378cdcfa88ec40e92429e75e679e52d8f30dabbfebeae2bc" exitCode=0
Jan 21 08:10:37 crc kubenswrapper[4893]: I0121 08:10:37.838477 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhpp2" event={"ID":"669b74d8-0631-4ed2-8fb4-207a92178348","Type":"ContainerDied","Data":"c0a0b6f2d1482c93378cdcfa88ec40e92429e75e679e52d8f30dabbfebeae2bc"}
Jan 21 08:10:38 crc kubenswrapper[4893]: I0121 08:10:38.580787 4893 scope.go:117] "RemoveContainer" containerID="0052d1f7e58f44004abee737ec7caeac44cddf9ae6d519b81384ca77057b0dc9"
Jan 21 08:10:38 crc kubenswrapper[4893]: E0121 08:10:38.581581 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a"
Jan 21 08:10:38 crc kubenswrapper[4893]: I0121 08:10:38.848448 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhpp2" event={"ID":"669b74d8-0631-4ed2-8fb4-207a92178348","Type":"ContainerStarted","Data":"49fbcc0d8fcb834d5bcb1fbe5616107b2db4914b5f287d305ab3c91e745bd9a0"}
Jan 21 08:10:38 crc kubenswrapper[4893]: I0121 08:10:38.874763 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rhpp2" podStartSLOduration=2.333905364 podStartE2EDuration="4.874742635s" podCreationTimestamp="2026-01-21 08:10:34 +0000 UTC" firstStartedPulling="2026-01-21 08:10:35.808142483 +0000 UTC m=+4577.038488385" lastFinishedPulling="2026-01-21 08:10:38.348979744 +0000 UTC m=+4579.579325656" observedRunningTime="2026-01-21 08:10:38.87059233 +0000 UTC m=+4580.100938232" watchObservedRunningTime="2026-01-21 08:10:38.874742635 +0000 UTC m=+4580.105088527"
Jan 21 08:10:44 crc kubenswrapper[4893]: I0121 08:10:44.539025 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rhpp2"
Jan 21 08:10:44 crc kubenswrapper[4893]: I0121 08:10:44.539651 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rhpp2"
Jan 21 08:10:44 crc kubenswrapper[4893]: I0121 08:10:44.735077 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rhpp2"
Jan 21 08:10:44 crc kubenswrapper[4893]: I0121 08:10:44.975207 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rhpp2"
Jan 21 08:10:45 crc kubenswrapper[4893]: I0121 08:10:45.043781 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rhpp2"]
Jan 21 08:10:46 crc kubenswrapper[4893]: I0121 08:10:46.912766 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rhpp2" podUID="669b74d8-0631-4ed2-8fb4-207a92178348" containerName="registry-server" containerID="cri-o://49fbcc0d8fcb834d5bcb1fbe5616107b2db4914b5f287d305ab3c91e745bd9a0" gracePeriod=2
Jan 21 08:10:47 crc kubenswrapper[4893]: I0121 08:10:47.925373 4893 generic.go:334] "Generic (PLEG): container finished" podID="669b74d8-0631-4ed2-8fb4-207a92178348" containerID="49fbcc0d8fcb834d5bcb1fbe5616107b2db4914b5f287d305ab3c91e745bd9a0" exitCode=0
Jan 21 08:10:47 crc kubenswrapper[4893]: I0121 08:10:47.925463 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhpp2" event={"ID":"669b74d8-0631-4ed2-8fb4-207a92178348","Type":"ContainerDied","Data":"49fbcc0d8fcb834d5bcb1fbe5616107b2db4914b5f287d305ab3c91e745bd9a0"}
Jan 21 08:10:48 crc kubenswrapper[4893]: I0121 08:10:48.094199 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rhpp2"
Jan 21 08:10:48 crc kubenswrapper[4893]: I0121 08:10:48.266168 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/669b74d8-0631-4ed2-8fb4-207a92178348-catalog-content\") pod \"669b74d8-0631-4ed2-8fb4-207a92178348\" (UID: \"669b74d8-0631-4ed2-8fb4-207a92178348\") "
Jan 21 08:10:48 crc kubenswrapper[4893]: I0121 08:10:48.266261 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/669b74d8-0631-4ed2-8fb4-207a92178348-utilities\") pod \"669b74d8-0631-4ed2-8fb4-207a92178348\" (UID: \"669b74d8-0631-4ed2-8fb4-207a92178348\") "
Jan 21 08:10:48 crc kubenswrapper[4893]: I0121 08:10:48.266403 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccd49\" (UniqueName: \"kubernetes.io/projected/669b74d8-0631-4ed2-8fb4-207a92178348-kube-api-access-ccd49\") pod \"669b74d8-0631-4ed2-8fb4-207a92178348\" (UID: \"669b74d8-0631-4ed2-8fb4-207a92178348\") "
Jan 21 08:10:48 crc kubenswrapper[4893]: I0121 08:10:48.268206 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/669b74d8-0631-4ed2-8fb4-207a92178348-utilities" (OuterVolumeSpecName: "utilities") pod "669b74d8-0631-4ed2-8fb4-207a92178348" (UID: "669b74d8-0631-4ed2-8fb4-207a92178348"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 08:10:48 crc kubenswrapper[4893]: I0121 08:10:48.275279 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/669b74d8-0631-4ed2-8fb4-207a92178348-kube-api-access-ccd49" (OuterVolumeSpecName: "kube-api-access-ccd49") pod "669b74d8-0631-4ed2-8fb4-207a92178348" (UID: "669b74d8-0631-4ed2-8fb4-207a92178348"). InnerVolumeSpecName "kube-api-access-ccd49". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 08:10:48 crc kubenswrapper[4893]: I0121 08:10:48.354135 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/669b74d8-0631-4ed2-8fb4-207a92178348-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "669b74d8-0631-4ed2-8fb4-207a92178348" (UID: "669b74d8-0631-4ed2-8fb4-207a92178348"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 08:10:48 crc kubenswrapper[4893]: I0121 08:10:48.368393 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/669b74d8-0631-4ed2-8fb4-207a92178348-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 08:10:48 crc kubenswrapper[4893]: I0121 08:10:48.368426 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/669b74d8-0631-4ed2-8fb4-207a92178348-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 08:10:48 crc kubenswrapper[4893]: I0121 08:10:48.368441 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ccd49\" (UniqueName: \"kubernetes.io/projected/669b74d8-0631-4ed2-8fb4-207a92178348-kube-api-access-ccd49\") on node \"crc\" DevicePath \"\""
Jan 21 08:10:48 crc kubenswrapper[4893]: I0121 08:10:48.944122 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhpp2" event={"ID":"669b74d8-0631-4ed2-8fb4-207a92178348","Type":"ContainerDied","Data":"37caa8c2bcc995f994e23aed756a3b9ef9e2c82daba74e2ea789336f78a84632"}
Jan 21 08:10:48 crc kubenswrapper[4893]: I0121 08:10:48.944203 4893 scope.go:117] "RemoveContainer" containerID="49fbcc0d8fcb834d5bcb1fbe5616107b2db4914b5f287d305ab3c91e745bd9a0"
Jan 21 08:10:48 crc kubenswrapper[4893]: I0121 08:10:48.944244 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rhpp2"
Jan 21 08:10:48 crc kubenswrapper[4893]: I0121 08:10:48.976307 4893 scope.go:117] "RemoveContainer" containerID="c0a0b6f2d1482c93378cdcfa88ec40e92429e75e679e52d8f30dabbfebeae2bc"
Jan 21 08:10:48 crc kubenswrapper[4893]: I0121 08:10:48.997430 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rhpp2"]
Jan 21 08:10:49 crc kubenswrapper[4893]: I0121 08:10:49.005995 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rhpp2"]
Jan 21 08:10:49 crc kubenswrapper[4893]: I0121 08:10:49.015519 4893 scope.go:117] "RemoveContainer" containerID="93091eab2e311ee5ab3eaea1e3376e10ccf2f08163ec80edb98bea560caf0624"
Jan 21 08:10:49 crc kubenswrapper[4893]: I0121 08:10:49.587073 4893 scope.go:117] "RemoveContainer" containerID="0052d1f7e58f44004abee737ec7caeac44cddf9ae6d519b81384ca77057b0dc9"
Jan 21 08:10:49 crc kubenswrapper[4893]: E0121 08:10:49.587568 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a"
Jan 21 08:10:49 crc kubenswrapper[4893]: I0121 08:10:49.593109 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="669b74d8-0631-4ed2-8fb4-207a92178348" path="/var/lib/kubelet/pods/669b74d8-0631-4ed2-8fb4-207a92178348/volumes"
Jan 21 08:11:01 crc kubenswrapper[4893]: I0121 08:11:01.581145 4893 scope.go:117] "RemoveContainer" containerID="0052d1f7e58f44004abee737ec7caeac44cddf9ae6d519b81384ca77057b0dc9"
Jan 21 08:11:01 crc kubenswrapper[4893]: E0121 08:11:01.581796 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a"
Jan 21 08:11:13 crc kubenswrapper[4893]: I0121 08:11:13.581111 4893 scope.go:117] "RemoveContainer" containerID="0052d1f7e58f44004abee737ec7caeac44cddf9ae6d519b81384ca77057b0dc9"
Jan 21 08:11:13 crc kubenswrapper[4893]: E0121 08:11:13.581904 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a"
Jan 21 08:11:27 crc kubenswrapper[4893]: I0121 08:11:27.581820 4893 scope.go:117] "RemoveContainer" containerID="0052d1f7e58f44004abee737ec7caeac44cddf9ae6d519b81384ca77057b0dc9"
Jan 21 08:11:27 crc kubenswrapper[4893]: E0121 08:11:27.582772 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hg78p_openshift-machine-config-operator(ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a)\"" pod="openshift-machine-config-operator/machine-config-daemon-hg78p" podUID="ee4586dd-4eb6-4b1c-bf63-d13d856a6e4a"